
Blogs

Use your WAN not the Carrier for your Remote Site

Replace your Carrier Remote or Mini Carrier Remote with an Avaya Survivable Media Gateway (SMG). The SMG connects to your main PBX through the corporate WAN, and because it is IP based it registers with your main-site PBX. During a WAN failure, the SMG switches to survivable mode and keeps local phones in service using its own CPU, which is something your Carrier Remote cannot do. Therefore, if you are paying for a carrier T1 just to support your Carrier Remote or Mini Carrier Remote cabinet, upgrade your main site to Avaya Release 7.6 and install a Survivable Media Gateway.

“Survivable Media Gateway enhances the reliability of Avaya Communication Server 1000E (Avaya CS 1000E) systems by allowing the provisioning of up to 50 geographically remote Secondary Call Servers to a Primary Call Server. You can configure each Secondary Call Server as Alternate Call Server 1 or Alternate Call Server 2 for the devices assigned to it. Survivable Media Gateway provides two levels of redundancy. If the Primary Call Server fails, local and remote resources register with the Secondary Call Server configured as Alternate Call Server 1. If the Primary and Secondary Call Servers both fail, or the WAN fails, local resources register with the Secondary Call Server that is installed at the local site and configured as Alternate Call Server 2.”

Reference: NN43001-507_06_01_System_Redundancy_Fundamentals


 

Enabling AACC Shadowing After 24 Hours

If your standby server has been out of synchronization for more than 24 hours, you will need to complete the following process to restart AACC 6.4 shadowing; otherwise, the standby server will fail to synchronize with the active AACC server. Execute one restore at a time (ADMIN, CCMS, then CCT).

The procedure listed below avoids corrupting the standby server’s Backup Locations Directory by restoring one database at a time:

  • Make sure you have a recent backup from your active server, preferably stored somewhere off the active and standby servers.
  • Map the active AACC server's backup database folder onto the standby server (see the sketch after this list).
  • On the active AACC server, run a full database backup.
  • Copy the UNC backup path on the active server into the standby server's restore UNC path (\\ComputerName (or IP address)\Backup Location).
  • Navigate to the standby server's Database Utility / Database Maintenance window. *** Do not attempt to restore all three databases (ADMIN, CCMS, and CCT) at the same time; this can cause corruption within the standby server's Backup Locations directory. ***
  • Restore the standby server by first running the ADMIN database only.
  • Once the ADMIN restore is complete, verify the UNC path is correct. Adjust as necessary.
  • Restore the CCMS database. Note: the CCMS database is the largest of the three restores.
  • *** The CCMS restore will write the active server's AACC name and IP information into your standby server. However, it will not change the actual IP address in the standby server's network properties in the Control Panel. ***
  • Open the standby server's "Server Configuration" window, located in Manager Server.
  • Adjust the standby server's local settings to the correct name and IP addressing. *** Do not restart the standby server. ***
  • Click the Licensing tab and correct the location of your license file on the standby server. *** Do not restart the standby server. ***
  • Finally, run the CCT database restore on the standby server.
  • Do not restart the server.
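
For the folder-mapping step above, the active server's backup share can be mapped from a command prompt on the standby server. This is only a minimal sketch: the server name, share name, drive letter, and account below are placeholders, not values from the original procedure.

REM Map the active AACC server's backup share onto the standby server (all names are placeholders)
net use B: \\ACTIVE-AACC\AACC_Backup /user:DOMAIN\aaccadmin /persistent:yes

REM Confirm the standby server can read the backup files before starting any restore
dir B:\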

You can now open the High Availability window on the standby server, double-click System Control, select Shadowing, click Start, and save. The log file will now show active shadowing from the active server. You will notice a continuous flash of activity on both servers if your servers' NICs have LED indicators.

Reference: https://support.avaya.com/documents/Avaya Aura Contact Center


 

 

If additional assistance is required, please call 301.873.0080 or email mnicholson@ashburnconsulting.com

VTC Video Conferencing Rules for Palo Alto Firewall

If you have a Cisco TelePresence VCS Expressway, a legacy Tandberg Border Controller, or even an MCU behind a Palo Alto Networks firewall, there are several application-based objects that need to be in your outbound and inbound security policies:

  • rtp-base
  • rtcp
  • h.225
  • h.245
  • h.323
  • sip
  • rtp

Normally the traffic logs will show which ports are being denied by the clean-up rule. Depending on the type of firewall, you might need to create a service object with a specific UDP range. There are also cases where a VTC endpoint is configured to use static ports outside the ranges the standard built-in protocols and applications expect. Making VTC sessions work behind a newly deployed firewall can be challenging at first; simple trial and error and gathering firewall connection logs are key. I'd be careful, though, about allowing a big range of ports in inbound firewall rules. A hedged example of a rule pair using the App-IDs above is sketched below.
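
As a rough illustration, a security rule pair carrying those App-IDs might look like the following in the PAN-OS CLI (configure mode). Treat this strictly as a sketch: the rule names, the trust/untrust zone names, the address object, and the use of the application-default service are assumptions, and the same rules are normally built in the web GUI.

set address VCS-Expressway ip-netmask 203.0.113.10/32
set rulebase security rules VTC-Outbound from trust to untrust source any destination any application [ h.225 h.245 h.323 sip rtp rtp-base rtcp ] service application-default action allow
set rulebase security rules VTC-Inbound from untrust to trust source any destination VCS-Expressway application [ h.225 h.245 h.323 sip rtp rtp-base rtcp ] service application-default action allow
commit

Scoping the inbound rule to the Expressway/Border Controller/MCU address object keeps you from opening a wide inbound hole while still letting the applications above negotiate their media ports.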

Security Today

Security today is extremely complex, and yet simple to bypass for a willing mind with enough time and computing power to exploit vulnerabilities that your internal security or IT operations team may or may not have accounted for.

The multi-layered threat, with many mutations within the attack surface, has grown significantly from the virus age to today's application-focused threats. With this in mind, you would expect the tools protecting us at the endpoint and in our network security devices to have improved accordingly, but they have not. We live in a time where awareness is our best preparation for most cyber attacks.

The main players today are no longer interested in simply degrading your infrastructure's performance with, for example, a DDoS attack. Today's attacks are targeted at specific areas of your organization with the intent to lower your customers' confidence in the services you provide by slowing down your key production services. These targets could be a call center with SaaS-based applications, or an old, large, and fully protected physical infrastructure with security controls and mitigation processes in place. There is an opportunity to have these controls be more adaptive to the threat and more dynamic in reporting and mitigation.

As for our attackers, let us establish the principles of the Intrusion Kill Chain: the attacker must be successful at every one of the seven steps in the Kill Chain, while we, as security professionals, need to disrupt just one of those seven phases. That is good news, but we must be aware of all the threats and vulnerabilities in order to protect even one of these phases successfully and effectively thwart the attack. The attacker, however, needing to succeed at every phase to compromise an organization's data, may lose interest in a well-protected organization because of the increase in the time and cost required for the attack to be profitable.

For years, we thought that a port-based, fully stateful, packet-based firewall would protect us against most of these threats, and that whatever it did not catch we could easily find with IPS/IDS devices inline or surrounding our main security points. Adding extended log servers with behavioral analysis made a good composition for our defense against the threat attack surface. Much of this approach has changed, but some of these principles continue to be just an item on a checklist needed to deploy a "secure baseline" across an organization's compartments.

Software attack surface

Increasingly, the software development community understands that more needs to be done to properly develop software that is not only efficient, but is also secure.

More is being invested in new, mission-critical web applications that touch several data compartments, and yet some basic security concepts are not being applied during the development phase of these web applications.

Most of the known attacks these days are using the same old techniques such as URL injection and Cross-Site-Scripting (XSS).

For some poorly developed applications, a non-parameterized query is all that is needed for a successful attack to pass through at least one of the seven kill chain phases. Many other code vulnerabilities have been cataloged, and a common security guideline exists today for application developers to follow, the OWASP Top 10:
https://www.owasp.org/index.php/Category:OWASP_Top_Ten_Project

If all corporations and organizations adopted the OWASP guidelines, they would increase security awareness, and the gain would be an increase in the cost of creating or pursuing exploits against these systems. As we raise the cost to the attackers, we reduce their ability to continue their exploits on our applications.

Penetration tests and application validation procedures are always important, but if developers were more aware of the risks and knowledgeable about the Kill Chain, I believe they could embed more security "snippets" into their core code and enforce security postures from within their applications.

A good starting point would be for developers to reduce the amount of code running in the background during core data-access tasks.

Reducing the number of features enabled for users with high-privilege access to core data, and disabling those features completely for users with low-privilege credentials, could be another good approach. Finally, it would be interesting if, as part of a common guideline, developers limited their own code's ability to access data or perform network tasks whenever the application is made aware that a vulnerability on its own system is being exploited at that specific time: a self-imposed security boundary check from within the application that acknowledges other security devices.

Some developers already use the defense-in-depth architecture model and collaborate with Next-Gen security devices that exchange information via XML or an API capable of providing them (developers and their code) with valuable application analysis during run-time.

Network attack surface

Probably the busiest attack surface, and the most popular, is the network attack vector. Even with all the standards and RFCs in place, mitigations for all published CVEs (https://cve.mitre.org/cve/), and endpoint IOC databases (http://www.openioc.org/), there is still a lot going on inside a TCP packet that needs to be inspected and sometimes blocked.

We have VPNs, SSH connections, and a series of tunnels such as Point-to-Point Tunneling Protocol (PPTP), and they are still not enough to contain the threats encapsulated in TCP/UDP packets and sent through our networks. Not every user wants to be aware of the attack surface, or even of security in general. Not all network engineers and security engineers are willing to keep processes and security controls in check. We created two-factor authentication, and then we created single sign-on (SSO); the combination of the two makes a very secure and complex password portal. This helps increase password strength, and there are domain administrators enforcing changes every six months, but despite all that, some users remain careless about eavesdropping, social phishing, and the other credential-capture techniques attackers use today.

Many organizations do not enforce job (position) rotation, separation of duties (SoD), or mandatory vacations as they should according to several standards for detective and administrative controls. These organizations' financial departments are well aware of the high cost of these risks, yet rather than use another administrative security control, they transfer or accept the risk to partially patch the problem. This approach can be effective when well-balanced finance and security departments are in place, communicate, constantly update the risk picture, and exchange security control updates as needed. But that is not always the case, and it often leads these organizations to take misguided risks that cannot be transferred (to an insurance company, for instance) and have to be absorbed by the corporation. The company's reputation should never be an asset available for gambling, and CSOs and CISOs are often misinformed, not about the risks themselves, but about how they are being measured and how other departments are electing to mitigate or accept them. Another detective administrative control that is not applied as often as it should be: security reviews and audits.

The network attack surface is constantly and dynamically changing. Security controls and user awareness should be as common as the Human Resources training that corporations and organizations already have in place today.

Human attack surface

Stressing why training and security awareness are important is becoming redundant. Security should be second nature to all users, and it should be made clear to them how important their data, their jobs, and their own physical safety are. Today, employees at all levels, especially those in high-profile organizations, need to be protected from external groups that are targeting their employers. These attackers invest in diverse network intrusion tactics as well as kidnapping, temporarily hijacking devices, and copying user activity from devices in employees' own homes. This can be done by following an unsuspecting user home and shadowing their network services, wired or wireless, from inside the house. It is much easier to tap their cable provider's egress than to break their SSL keys or IPsec tunnels from the internet.

Many groups copy or tap into users' home service providers and are able to gather credentials and valuable paths to real data, even when the user does not have high-privilege access, especially if the user connects to a non-segmented network. For instance, a pivot attack is easier to launch from the user's remote location than to infiltrate through the corporation's secured inbound path from the public networks.

Installing software in the organization to be used later as a pivot-attack source is a common practice, but finding the right people within the targeted organization is key to following their email threads and work activity. That is enough to open several doors within the organization that other group members can exploit later, with a serious agenda and a profitable contract at hand.

So, back to my initial point: in my opinion, security today is relevant, even if we do not all agree on being aware of the risks we are exposed to, or the risks we are involuntarily passing on to our employers, and some employees generally think that "it's NOT our business." I would say: think again, maybe it is NOW our business!

The security of your job, your company's data and reputation, and its clients is everyone's responsibility, even (if not especially) when employees are not at their workplace.

 

Cisco TMS Telepresence Management Suite Upgrade & Install from 13.0 to 14.6

If you have an old version of TMS (13.0) running on an unsupported Windows Server 2003 system, here is a fairly extensive procedure to upgrade it to TMS 13.2, perform database recovery, restore, and conversion, and migrate to a new Windows Server 2012 R2 system running SQL Server 2012 Express. The final step is the upgrade to TMS 14.6.

The first big issue I ran into was that the SQL Server 2008 Express "SA" password provided by the previous SQL administrator was incorrect. The numerous attempts to access the database server locked the account, and the SQL service could not be started from SQL Server Configuration Manager. Even if you go to Control Panel > Services and manually start the SQL service, it does not start.

[Screenshot: tms1 – SQL service fails to start]

Error code 17058 appears in the Event Logs; also look into the SQL logs for more information on why the SQL service is unable to start. In this case, the logs revealed that the service went through the proper startup procedure and, during the last step, logged a message that the supplied account's login to the database had failed.

[Screenshot: tms2 – error 17058 in the event and SQL logs]

First, you have to fix the SQL service startup issue to even be able to do anything.

You need to log in as a user with admin rights on the server. Open SQL Server Configuration Manager ==> click SQL Server Services ==> right-click SQL Server ==> choose Properties. The window shown below will appear:

[Screenshot: tms3 – SQL Server Properties window]

Under the Log On tab, the SQL service was previously configured to run as "Network Service", which failed to start the SQL service. Under "Built-in account", choose "Local System" (or, if you use "This account" instead, you will need to know that local account's username and password and provide them). Now start the service manually and it should work fine.

SQL SA Password Reset and Recovery

Now that the SQL service has started, the next step is to reset the DB server's SA account password. Do the following from the MS-DOS prompt (provide a strong SA password):

[Screenshots: tms4–tms7 – resetting the SA password from the command prompt]
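
The screenshots above walk through the reset. As a rough sketch of the same idea, one common approach is to restart the instance in single-user mode, connect locally with Windows authentication, and reset and unlock the SA login with sqlcmd. The instance name SQLTMS matches the backup path shown later in this article, but treat the service name, instance name, and password below as placeholders rather than the exact commands from the screenshots.

REM Stop the instance and restart it in single-user mode (local administrators get sysadmin access)
net stop MSSQL$SQLTMS
net start MSSQL$SQLTMS /m

REM Connect locally with Windows authentication, then reset and unlock the SA login at the sqlcmd prompt
sqlcmd -S .\SQLTMS -E
ALTER LOGIN sa ENABLE;
ALTER LOGIN sa WITH PASSWORD = 'N3w$trongP@ssw0rd' UNLOCK;
GO
EXIT

REM Return the instance to normal multi-user operation
net stop MSSQL$SQLTMS
net start MSSQL$SQLTMS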

Make TMS Connect to the TMSNG Database

After changing the SA password, you can either use TMS Tools to reconnect TMS to the DB, or, since I needed to upgrade to TMS 13.2 anyway, simply run the TMS 13.2 installer as I did; this is the last version of TMS that supports Windows Server 2003. The installer will ask for your DB connection settings, and this is where you enter the new SQL SA password. After the installation is complete, verify TMS functionality by pulling it up in a web browser.

Install TMS Provisioning Extension (TMSPE)

If you are using the legacy TMS Agent, you need to install the TMS Provisioning Extension (TMSPE) on TMS 13.2. TMSPE is required when moving up to the 14.x versions of TMS; without it, the TMS 14.x install will fail during the install procedure. This also requires a Java Runtime Environment upgrade to at least Java 6 build 33. For more info, see the TMSPE installation guide on Cisco's website.

  • Download Java from Oracle site and Install.
  • Run the TMSPE Installation.
  • After Installation is complete, check the option to run the “Migration Tool” to build the database for Provisioning Extension.

Performing the installation and migration

  1. Close all open applications and disable virus scanning software.
  2. Extract the Cisco TMSPE installer from the zip archive to the Cisco TMS server.
  3. Run the Cisco TMSPE installer.
  4. Follow the setup instructions:
     a. Click Next to initiate the setup.
     b. Accept the terms in the license agreement and click Next.
     c. Enter the username and password of the user that Cisco TMSPE will use to connect to Cisco TMS. This user must be a member of the Site Administrators group in Cisco TMS. Click Next.
     d. The installer detects where the TMS SQL database (tmsng) is installed. We recommend installing the Cisco TMSPE SQL database (tmspe) to the same location and instance.
     e. Confirm or enter the appropriate SQL Server name and instance name. If deploying in a redundant setup, make sure both installations point to the same database location.
     f. Fill in the necessary credentials and click Next.
     g. Click Install to begin the installation. Click Back to review or change installation settings.
     h. When the installation is complete, check Run Migration Tool and click Finish to exit the Setup window. The Migration Tool window opens.

Click Start Migration. Depending on the size of the database, the migration process may take several minutes to complete. When the migration process is complete, the Migration Tool window displays the results of the migration and provides a migration log.

 

Enabling Cisco TMSPE

After completing the installation:

  1. In Cisco TMS, go to Administrative Tools > Configurations > General Settings, set the field Provisioning Mode to Provisioning Extension, and click Save. You may need to refresh the browser window or empty the browser cache after making this selection.
  2. Go to Administrative Tools > Activity Status to verify that the switch is completed.
  3. Verify that Cisco TMSPE features are now available and functioning by browsing to the following pages in Cisco TMS:
     Systems > Provisioning > Users (if this page reports a problem connecting to the User Repository, the database connection is not working)
     Systems > Provisioning > FindMe
     Systems > Provisioning > Devices
     Administrative Tools > Configuration > Provisioning Extension Settings
  4. Go to Administrative Tools > Provisioning Extension Diagnostics, look for any alarms raised, and click Run Health Check. If any alarms are raised, click them to see details and perform the corrective actions described. See "Troubleshooting the installation" in the Cisco TMSPE installation guide for further information.
  5. When browsing to all of the above Cisco TMSPE pages is successful and no alarms are reported in Provisioning Extension Diagnostics, the Cisco TMSPE installation is complete.

 

Perform TMS Database backup 

You will now notice that the TMS database size has doubled. This is because of the Provisioning Extension install. It's now time to back up this database. The easiest way is from the command line:

[Screenshot: tms8 – command-line database backup]
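
For reference, the same backup can be run with sqlcmd. The instance name (SQLTMS), database name (tmsng), and backup directory come from elsewhere in this article; the output file name is assumed, and WITH INIT overwrites any existing backup set in that file.

sqlcmd -S .\SQLTMS -E -Q "BACKUP DATABASE tmsng TO DISK = N'C:\Program Files\Microsoft SQL Server\MSSQL10.SQLTMS\MSSQL\Backup\tmsng.bak' WITH INIT"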

You will now find the DB backup file in this directory: C:\Program Files\Microsoft SQL Server\MSSQL10.SQLTMS\MSSQL\Backup

 

New 2012 R2 Server Buildout

Size the server hardware by checking Cisco's deployment guide first. Here are the software requirements to install before installing TMS 14.6 (a PowerShell sketch for adding the Windows features follows the list):

  • .NET framework 3.5 Full (extended)
  • .NET framework 4.5.0 Full (extended)
  • Microsoft IIS for Windows Server 2012 R2: IIS 8.5
  • Apply windows updates
  • Microsoft SQL Server 2012 Express Edition (free) if installing a small deployment of fewer than 200 endpoint/VTC systems. The install includes SQL Server Management Studio.
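
The Windows roles and features in the list above can also be added from an elevated PowerShell prompt on Server 2012 R2. The feature names below are the standard Server 2012 R2 names; .NET Framework 3.5 may additionally need the installation media passed with -Source, and this sketch does not claim to cover every role service TMS requires.

# Run from an elevated PowerShell prompt on Windows Server 2012 R2
Install-WindowsFeature NET-Framework-Core                    # .NET Framework 3.5
Install-WindowsFeature NET-Framework-45-Core                 # .NET Framework 4.5 (usually already present)
Install-WindowsFeature Web-Server -IncludeManagementTools    # IIS 8.5 with the management console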

Once SQL Server Express is installed, do a database restore from the TMS backup file using SQL Server Management Studio. The restore will convert the database to the SQL 2012 structure.

[Screenshots: tms9–tms12 – restoring the tmsng database in SQL Server Management Studio]
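
If you prefer to script the restore rather than use the Management Studio dialogs shown above, a hedged T-SQL sketch run through sqlcmd looks like the following. The backup file location, the SQL 2012 instance directory (MSSQL11.SQLTMS), and the logical file names tmsng/tmsng_log are assumptions; run RESTORE FILELISTONLY first to confirm the real logical names before the restore.

sqlcmd -S .\SQLTMS -E
-- Check the logical file names stored inside the backup
RESTORE FILELISTONLY FROM DISK = N'C:\Temp\tmsng.bak';
GO
-- Restore the database, relocating the files to the new SQL 2012 instance (paths and logical names assumed)
RESTORE DATABASE tmsng
FROM DISK = N'C:\Temp\tmsng.bak'
WITH MOVE 'tmsng' TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLTMS\MSSQL\DATA\tmsng.mdf',
     MOVE 'tmsng_log' TO N'C:\Program Files\Microsoft SQL Server\MSSQL11.SQLTMS\MSSQL\DATA\tmsng_log.ldf',
     RECOVERY;
GO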

Once the database is in place, you can run the TMS 14.6 installation.

 

Creating or upgrading the database

  • If the installer does not find an existing Cisco TMS database, but locates a local installation of SQL Server, enter the username and password to allow the installer to create a new database. Click Next.
  • If using an external SQL Server, which is required for large deployments, enter all connection details. Click Next.

 

  • If the installer finds an existing Cisco TMS database, the dialog will be pre-populated with the previously specified SQL Server. When prompted, enter the username and password and click Next.
  • Click Yes to upgrade the existing database to the current version and retain the existing information.
  • We recommend that you back up the database before it is upgraded using the appropriate tools.
  • If you click No, you must stop the installer and manually remove the existing database (if you wish to use the same SQL Server) before you can install a new Cisco TMS database.

[Screenshot: tms13 – database dialog during install]

Adding release keys and pre-configuring the network settings

The Release and Option Keys dialog is now displayed, and any existing keys are shown if upgrading.

[Screenshot: tms14 – Release and Option Keys dialog]

  • A new release key is required if performing a new installation or upgrading to a new major release. If no release key is entered, an evaluation version of Cisco TMS will be installed. This includes support for three systems.
  • Option keys enable additional systems, extensions, or features. They may also be added post installation by going to Administrative Tools > Configuration > General Settings.
  • For questions regarding release or option keys, contact your Cisco Reseller or Cisco Support.
  • Enter the release key if necessary.
  • The release key must be entered before adding option keys.
  • Enter each option key, then click Add Option.
  • Option keys are validated as they are added.
  • When done adding keys, click Next.
  • The Network Settings screen is displayed.

You can now pre-configure default settings to allow Cisco TMS to immediately start working with a basic network configuration. The settings can be changed after installation.

If upgrading, values from the existing database are displayed.

Those are the major install steps; just follow the wizard through until the install is complete.

Make sure you enable SNMP service. It is disabled by default for new installations of TMS.

Access Cisco TMS for the first time.

Test all General Functionalities.

 

Next Generation Firewall Overview with Glimpse of Application Identification

“I am not an advocate for frequent changes in laws and constitutions, but laws and institutions must go hand in hand with the progress of the human mind. As that becomes more developed, more enlightened, as new discoveries are made, new truths discovered and manners and opinions change, with the change of circumstances, institutions must advance also to keep pace with the times. We might as well require a man to wear still the coat which fitted him when a boy as a civilized society to remain ever under the regimen of their barbarous ancestors.”

-Excerpted from a letter by Thomas Jefferson, July 12, 1816

Interestingly, Thomas Jefferson's quote could not be more appropriate when assessing today's state of IT network security. When people hear "IT network security," one of the first devices that comes to mind is a firewall. Early firewalls started out as not much more than an extended access list. As security requirements grew, stateful packet inspection was introduced by Check Point Software to address several security issues, most notably certain types of man-in-the-middle (MITM) attacks. As stateful inspection began to take off, many network engineers found difficulties in implementation; among them, asymmetric routing became an issue that required engineers to control network paths with a fine-tooth comb, confirming that forward and reverse traffic took the same route in both directions or it would be dropped by stateful inspection. Part of the reason for this introduction was to reduce an attacker's ability to intercept traffic and send it back without the end user being aware that their network traffic had been compromised.

Fast forward to the present, and Next-Generation Firewalls (NGFW) with Layer 7 inspection are introducing a new evolution in IT security whose functionality is critical to today's IT security landscape. With many vendors introducing their own take on Layer 7 inspection, the importance of this upgrade in capabilities is more apparent than ever, much as when stateful packet inspection was originally introduced. Most earlier firewalls are port based, in the sense that ports 80 and 443 may be open outbound to allow web and SSL traffic.

With that knowledge, a malicious user could exploit such a port-based firewall by running any application over these ports, even though they were originally intended for web browsing and SSL traffic. A malicious user could run any application they desired over the open ports without restriction, including unwanted chat and BitTorrent clients, up to and including tools like nmap, without being blocked by the firewall unless another security appliance on the network is capable of application inspection.

NGFWs enable network security engineers to restrict not only ports but also applications, by inspecting IP packets for what is inside the packet beyond the traditional source/destination and port. This visibility into traffic is critical: it lets an engineer not only restrict a port, but also allow only web-browsing over port 80 or only SSL traffic over 443, drop traffic on those ports that carries no web-browsing request or SSL in the packets, block a malicious user's ability to run other applications like torrents or nmap over that port, and even save bandwidth by stopping applications like youtube, google-hangouts, facetime, minecraft, and others that may be unwanted on the network. In Figure 1, in addition to source/destination, you see a user viewing youtube over port 443, which is traditionally open; the functionality of this firewall enables a network security engineer to allow only ssl and web-browsing while blocking youtube. It is important to note that encrypted SSL traffic can be decrypted as well; however, that will be touched on in a separate article.

[Figure 1: app-id-youtube – App-ID identifying youtube traffic over port 443]
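
As a rough illustration of the scenario in Figure 1, the PAN-OS rules below allow only the web-browsing and ssl App-IDs outbound and explicitly deny youtube regardless of port. The rule and zone names are placeholders, App-ID names can differ between PAN-OS versions, and the rules are normally built in the web GUI, so treat this as a sketch rather than a copy-and-paste configuration.

set rulebase security rules Deny-YouTube from trust to untrust source any destination any application youtube service any action deny
set rulebase security rules Allow-Web-Only from trust to untrust source any destination any application [ web-browsing ssl ] service application-default action allow
commit

Because the rulebase is evaluated top-down, the deny rule must sit above the allow rule; anything on ports 80/443 that App-ID does not identify as web-browsing or ssl falls through to the clean-up rule and is dropped.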

This new capability requires specific configuration to implement correctly. During migrations, engineers who do not understand why some traffic stops working under the new application-layer rules may open holes in security to improve usability by removing some of the application-layer configuration. That trade-off is not using this new technology and functionality correctly; it is effectively putting a child's coat on an adult: it does not fit how the IT security landscape has evolved. Implementing this technology correctly is critical to providing secure and trusted network transactions, and it helps remove a malicious user's ability to abuse legacy firewall functionality.

ICMP Security

This is a draft guide to handling ICMP securely.

Guide and Analysis for Handling the ICMP Protocol

Summary:

This guide is an attempt to help answer common questions related to handling the ICMP protocol in a secure and effective manner. Comments and feedback are always welcome. This article is meant to cover the major areas in which there may be questions about how to handle ICMP, and what specifically should be allowed in each particular situation while still allowing effective risk mitigation. If you need specifics on the ICMP codes within each ICMP type, please refer to the reference URLs below.

Major ICMP Protocol Types:

– 0: Echo Reply

– 3: Destination Unreachable

– 4: Source Quench

– 5: Redirect (change a route)

– 8: Echo Request

– 9: Router Advertisement

– 10: Router Solicitation

– 11: Time Exceeded for a Datagram

– 12: Parameter Problem on a Datagram

– 13: Timestamp Request

– 14: Timestamp Reply

– 17: Address Mask Request

– 18: Address Mask Reply

Areas of Effect:

Perimeter

Outbound allow: Echo Reply (0), Echo Request (8) (for troubleshooting)

Deny: all other types, except Time Exceeded (11) and Destination Unreachable (Type 3, Code 4) from limited external testing devices. A hedged Cisco IOS ACL sketch of this perimeter policy follows.
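
The sketch below uses Cisco IOS extended ACLs. The ACL names and the external tester address (203.0.113.10) are assumptions, and the entries cover only the ICMP portion of the policy; they would be merged into the interface's existing ACLs or firewall policy (remember the implicit deny at the end of an IOS ACL).

! Applied outbound toward the internet: allow only echo and echo-reply for troubleshooting
ip access-list extended ICMP-PERIMETER-OUT
 permit icmp any any echo
 permit icmp any any echo-reply
 deny   icmp any any
!
! Applied inbound from the internet: allow ttl-exceeded, plus fragmentation-needed
! (Type 3, Code 4) from a limited external testing device; deny all other ICMP
ip access-list extended ICMP-PERIMETER-IN
 permit icmp any any ttl-exceeded
 permit icmp host 203.0.113.10 any packet-too-big
 deny   icmp any any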

Interior (Corporate Network)

Internal deny: should be handled on a case-by-case basis; however, when permissible, squelch Redirect (5), Router Advertisement (9), Router Solicitation (10), Timestamp Request (13), Timestamp Reply (14), Address Mask Request (17), and Address Mask Reply (18). The usefulness of these ICMP message types has been superseded by DHCP and NTP.

Internal Allow: Echo Reply (0), Destination Unreachable (3 Code 4), Echo Request (8), Time Exceeded (11)

Remote Access & Site to Site VPN

VPN Allow: Echo Reply (0), Destination Unreachable (3, Code 4), and Echo Request (8).

VPN Deny: Everything Else

Intranet to Intranet / Partner to Partner

Intranet to Intranet Allow:  Echo Reply (0), Destination Unreachable (3 Code 4), Echo Request (8), Time Exceeded (11)

Intranet to Intranet Deny: Everything Else

References:

PMTU

http://www.tcpipguide.com/free/t_IPDatagramSizetheMaximumTransmissionUnitMTUandFrag-4.htm

ICMP

http://www.tcpipguide.com/free/t_ICMPv4TimestampRequestandTimestampReplyMessages-3.htm

University of Syracuse ICMP Lecture Notes

Layer 2 Tracing for (6500, 7609, 4500) Cisco Switches

On a 6509, 7609, or any chassis-based Cisco switch, use the following commands to determine which physical port in a port-channel/EtherChannel the switch selects for a given source and destination pair:

Note: Doesn’t apply to Nexus switches.

First, enter the console for the switch processor (SP):

Switch# remote login switch
Trying Switch ...
Entering CONSOLE for Switch

Then enter the following command:

Switch-SP# test etherchannel load-balance interface port-channel 1 ip 10.1.1.1 10.1.1.2
Computed RBH: 0x6
Would select Gi2/1 of Po1

Based on the hash computation, the switch forwards traffic of the Src Dst pair to port Gi2/1.

This is a good tool to use if, for some reason, a particular member port is dropping packets between the source and destination pair.

Great SMTP DNS and Troubleshooting tool


Go to http://www.mxtoolbox.com

This test will list MX records for a domain in priority order. The MX lookup is done directly against the domain's authoritative name server, so changes to MX records should show up instantly. You can click Diagnostics, which will connect to the mail server, verify reverse DNS records, perform a simple open relay check, and measure response-time performance. You may also check each MX record (IP address) against 147 DNS-based blacklists (commonly called RBLs or DNSBLs).
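
The same MX check can also be run from a local command prompt; a minimal sketch, with example.com standing in for your own domain (nslookup on Windows, dig on Linux/macOS):

nslookup -type=MX example.com
dig +short MX example.com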

ABOUT BLACKLIST CHECK

This test will check a mail server IP address against 147 DNS-based email blacklists (commonly called real-time blacklists, DNSBLs, or RBLs). If your mail server has been blacklisted, some email you send may not be delivered. Email blacklists are a common way of reducing spam. If you don't know your mail server's address, start with an MX lookup. Or, just send an email to ping@mxtoolbox.com

ABOUT SMTP DIAGNOSTICS

This test will connect to a mail server via SMTP, perform a simple open relay test, and verify that the server has a reverse DNS (PTR) record. It will also measure the response times for the mail server. If you don't know your mail server's address, start with an MX lookup.

ABOUT EMAIL HEADERS and Analyzer

This tool will make email headers human readable by parsing them according to RFC 822.  Email headers are present on every email you receive via the Internet and can provide valuable diagnostic information like hop delays, anti-spam results and more. If you need help getting copies of your email headers, just read this tutorial.

ABOUT SPF RECORDS

Sender Policy Framework (SPF) records allow domain owners to publish a list of IP addresses or subnets that are authorized to send email on their behalf.  The goal is to reduce the amount of spam and fraud by making it much harder for malicious senders to disguise their identity.

ABOUT DNS LOOKUP

This test will list DNS records for a domain in priority order. The DNS lookup is done directly against the domain's authoritative name server, so changes to DNS records should show up instantly. By default, the DNS lookup tool will return an IP address if you give it a name (e.g. www.example.com). If you give it an IP address, it will return a hostname based on the reverse DNS lookup.

Cisco ISR Platform feature by Ashburn Consulting

A video presentation about the Cisco ISR platform from Cisco Solutions Architect Randy Benn. IPICS, video distribution and management, PoE switch modules, and IP camera termination are all integrated into one ISR platform. Interview conducted by Amante Bustamante.