Windows Server Threat Detection

Detecting Windows Server Security Threats with Advanced Event Log Analyzers

Windows Servers stand as prime targets for hackers and malicious actors due to their widespread usage and historical vulnerabilities. These systems often serve as the backbone for critical business operations, housing sensitive data and facilitating essential services. However, their prevalence also makes them a frequent target of cyber threats, including ransomware attacks, distributed denial-of-service (DDoS) assaults, and more.

Windows Servers have a documented history of vulnerabilities and exploits, which further intensifies their attractiveness to attackers seeking to exploit weaknesses for unauthorized access or data theft. Consequently, it is paramount for organizations to prioritize mitigating these risks and safeguarding the integrity and continuity of operations within Windows Server environments.

Fortunately, tools like EventLog Analyzer offer robust capabilities for automatically identifying and countering such threats, strengthening the security posture of Windows Server deployments. To effectively leverage these defenses, it's imperative to understand the nature of common Windows Server threats and how they manifest. In this document, we delve into several prevalent threats targeting Windows Servers and outline strategies for their detection and mitigation.

Furthermore, implementing robust security measures, such as regular patching, network segmentation, intrusion detection systems, data encryption, and Windows VM backups, is essential to fortify Windows Servers against potential threats and ensure the resilience of critical business functions.

Key Topics:

Common Windows Server Threats



Event Log Monitoring System

Event Log Monitoring System: Implementation, Challenges & Standards Compliance. Enhance Your Cybersecurity Posture

An event log monitoring system, often referred to as event log management, is a critical component of IT security and management that helps organizations strengthen their cybersecurity posture. It's a sophisticated software solution designed to capture, analyze, and interpret the vast array of event logs generated by various components within an organization's IT infrastructure, such as firewalls (Cisco ASA, Palo Alto, etc.), routers, switches, wireless controllers, Windows servers, Exchange servers, and more.

These event logs can include data on user activities, system events, network traffic, security incidents, and more. By centralizing and scrutinizing these logs in real time, event log monitoring systems play a pivotal role in enhancing an organization's security posture, enabling proactive threat detection, and facilitating compliance with regulatory requirements.

Key Topics:

Event Log Categories

Event log monitoring systems empower organizations to identify and respond to potential security threats, operational issues, and compliance breaches promptly, making them an indispensable tool for maintaining the integrity and reliability of modern digital ecosystems.
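As a toy illustration of the kind of scrutiny described above, the snippet below counts failed Windows logons (event ID 4625) per source IP in a CSV export of the Security log. The file name and the three-column layout (timestamp, event ID, source IP) are illustrative assumptions, not a fixed export format.

```shell
# Create a small sample CSV standing in for a Security-log export.
# Columns (assumed): timestamp, event ID, source IP address.
cat > sample_security.csv <<'EOF'
2024-05-01 10:01:02,4625,192.0.2.10
2024-05-01 10:01:03,4625,192.0.2.10
2024-05-01 10:01:04,4624,192.0.2.25
2024-05-01 10:01:05,4625,198.51.100.7
EOF

# Count failed-logon events (ID 4625) per source IP.
awk -F',' '$2 == 4625 { count[$3]++ }
           END { for (ip in count) print ip, count[ip] }' sample_security.csv
```

A real event log monitoring system does this continuously and at scale, of course; the point here is only the per-source aggregation that makes repeated failures stand out.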

All logs contain the same basic pieces of information: a timestamp, the source that generated the event, a severity level, and a description of what occurred.



How to Perform TCP SYN Flood DoS Attack & Detect it with Wireshark - Kali Linux hping3

This article will help you understand TCP SYN flood attacks, show how to perform a SYN flood attack (DoS attack) using Kali Linux & hping3, and correctly identify one using the Wireshark protocol analyser. We've included all necessary screenshots and easy-to-follow instructions that will ensure an enjoyable learning experience for both beginners and advanced IT professionals.

DoS attacks are simple to carry out, can cause serious downtime, and aren't always obvious. In a SYN flood attack, a malicious party exploits the TCP protocol's 3-way handshake to quickly cause service and network disruptions, ultimately leading to a Denial of Service (DoS) attack. These types of attacks can easily take admins by surprise and can become challenging to identify. Luckily, tools like Wireshark make it easy to capture and verify any suspicions of a DoS attack.

Key Topics:

There’s plenty of interesting information to cover so let’s get right into it.

How TCP SYN Flood Attacks Work

When a client attempts to connect to a server using the TCP protocol (e.g. HTTP or HTTPS), it is first required to perform a three-way handshake before any data is exchanged between the two. Since the three-way TCP handshake is always initiated by the client, it sends a SYN packet to the server.

 tcp 3 way handshake

The server then replies, acknowledging the request and at the same time sending its own SYN request – this is the SYN-ACK packet. Finally, the client sends an ACK packet, confirming that both hosts agree to create a connection. The connection is therefore established and data can be transferred between them.

Read our TCP Overview article for more information on the 3-way handshake

In a SYN flood, the attacker sends a high volume of SYN packets to the server using spoofed IP addresses, causing the server to send a reply (SYN-ACK) and leave its ports half-open, awaiting a reply from a host that doesn't exist:

Performing a TCP SYN flood attack

In a simpler, direct attack (without IP spoofing), the attacker simply uses firewall rules to discard the SYN-ACK packets before they reach him. By flooding a target with SYN packets and never responding with the final ACK, an attacker can easily overwhelm the target's resources. In this state, the target struggles to handle traffic, which in turn increases CPU usage and memory consumption, ultimately exhausting its resources (CPU and RAM). At this point the server can no longer serve legitimate client requests, resulting in a Denial-of-Service.
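As a sketch of the firewall trick mentioned above: on a Linux attack box, the target's SYN-ACK replies could be discarded with a single iptables rule. The rule below is shown purely for illustration (192.168.1.159 is this article's lab target, and the rule requires root):

```shell
# Silently drop the target's SYN-ACK replies so the attacker's own TCP
# stack never completes (or resets) the handshakes it is flooding.
iptables -A INPUT -p tcp -s 192.168.1.159 --tcp-flags SYN,ACK SYN,ACK -j DROP
```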

How to Perform a TCP SYN Flood Attack with Kali Linux & hping3

However, to test whether you can detect this type of DoS attack, you must be able to perform one. The simplest way is with Kali Linux and, more specifically, hping3, a popular TCP penetration testing tool included in Kali Linux.

Alternatively, Linux users can install hping3 on their existing Linux distribution using the command:

$ sudo apt-get install hping3

In most cases, attackers will use hping3 or another tool to spoof random IP addresses, so that's what we're going to focus on. The line below starts the SYN flood attack and directs it at our target (192.168.1.159):

# hping3 -c 15000 -d 120 -S -w 64 -p 80 --flood --rand-source 192.168.1.159

Let’s explain in detail the above command:

We're sending 15000 packets (-c 15000) at a size of 120 bytes (-d 120) each. We're specifying that the SYN flag (-S) should be enabled, with a TCP window size of 64 (-w 64). To direct the attack at our victim's HTTP web server we specify port 80 (-p 80) and use the --flood flag to send packets as fast as possible. As you'd expect, the --rand-source flag generates spoofed IP addresses to disguise the real source and, at the same time, stop the victim's SYN-ACK reply packets from reaching the attacker.

How to Detect a SYN Flood Attack with Wireshark

Now that the attack is in progress, we can attempt to detect it. Wireshark is a little more involved than some commercial-grade software, but it has the advantage of being completely free, open source, and available on many platforms.

In our lab environment, we used a Kali Linux laptop to target a Windows 10 desktop via a network switch. Though this setup is simple compared to many enterprise networks, an attacker could likely perform similar attacks on those after some sniffing. Recalling the hping3 command, we also used random source IP addresses, as that's the method attackers with some degree of knowledge will use.

Even so, SYN flood attacks are quite easy to detect once you know what you're looking for. As you'd expect, a big giveaway is the large number of SYN packets being sent to our Windows 10 PC.

Straight away, though, admins should be able to spot the start of the attack by the huge flood of TCP traffic. We can filter for SYN packets without an acknowledgment using the following filter: tcp.flags.syn == 1 and tcp.flags.ack == 0

tcp syn flood attack detection with wireshark

As you can see, there's a high volume of SYN packets with very little variance in time. Each SYN packet comes from a different source IP address, with a destination port of 80 (HTTP), an identical length of 120, and a window size of 64. When we filter with tcp.flags.syn == 1 and tcp.flags.ack == 1, we can see that the number of SYN-ACKs is comparatively very small – a sure sign of a TCP SYN flood attack.

tcp syn flood attack detection with wireshark
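The same SYN vs SYN-ACK comparison can also be scripted with tshark (Wireshark's command-line companion), e.g. tshark -r capture.pcap -T fields -e tcp.flags.syn -e tcp.flags.ack. The sketch below runs the counting step over a small inline sample instead of a real capture, purely to illustrate the ratio check; the 1/0 flag format is an assumption (some tshark versions print True/False instead).

```shell
# Stand-in for tshark field output: one line per packet,
# "<syn flag> <ack flag>" (assumed here to print as 1/0).
cat > flags.txt <<'EOF'
1 0
1 0
1 0
1 1
1 0
EOF

# Count pure SYNs vs SYN-ACKs; a large imbalance suggests a flood.
awk '$1 == 1 && $2 == 0 { syn++ }
     $1 == 1 && $2 == 1 { synack++ }
     END { printf "SYN=%d SYN-ACK=%d\n", syn, synack }' flags.txt
```

On real flood traffic the SYN count would dwarf the SYN-ACK count, exactly as the Wireshark filters above show.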

We can also view Wireshark's graphs for a visual representation of the uptick in traffic. The I/O graph can be found via the Statistics > I/O Graph menu. It shows a massive spike in overall packets, from near 0 up to 2400 packets a second.

tcp syn flood attack wireshark graph

By removing our filter and opening the protocol hierarchy statistics, we can also see that there has been an unusually high volume of TCP packets:

tcp syn flood attack wireshark protocol hierarchy stats

All of these metrics point to a SYN flood attack with little room for interpretation. With Wireshark, we can be certain there's a malicious party at work and take steps to remedy the situation.

Summary

In this article we showed how to perform a TCP SYN flood DoS attack with Kali Linux (hping3) and how to use Wireshark's display filters to detect it. We also explained the theory behind TCP SYN flood attacks and how they can cause a Denial-of-Service.


How to Detect SYN Flood Attacks with Capsa Network Protocol Analyzer & Create Automated Notification Alerts

This article explains how to detect a SYN flood attack using an advanced protocol analyser like Colasoft Capsa. We'll show you how to identify and inspect abnormal traffic spikes, drill into captured packets, and identify evidence of flood attacks. Furthermore, we'll configure Colasoft Capsa to automatically detect SYN flood attacks and send automated alert notifications.

Denial-of-Service (DoS) attacks are one of the most persistent attacks network admins face due to the ease with which they can be carried out. With a couple of commands, an attacker can create a DoS attack capable of disrupting critical network services within an organization.

There are a number of ways to execute a DoS attack, including ARP poisoning, Ping Flood, UDP Flood, Smurf attack and more but we’re going to focus on one of the most common: the SYN flood (half-open attack). In this method, an attacker exploits the TCP handshake process.

In a regular three-way TCP handshake, the user sends a SYN packet to a server, which replies with a SYN-ACK packet. The user replies with a final ACK packet, completing the process and establishing the TCP connection, after which data can be transferred between the two hosts:

tcp 3 way handshake

However, if a server receives a high volume of SYN packets and no replies (ACK) to its SYN-ACK packets, the TCP connections remain half-open, with the server assuming natural network congestion is delaying the replies:

syn flood attack

By flooding a target with SYN packets and never responding with the final ACK, an attacker can easily overwhelm the target's available ports. In this state, the target struggles to handle traffic, which in turn increases CPU usage and memory consumption, ultimately exhausting its resources (CPU and RAM). At this point the server can no longer serve legitimate client requests, resulting in a Denial-of-Service.

Detecting & Investigating Unusual Network Traffic

Fortunately, there are a number of software packages that can detect SYN flood attacks. Wireshark is a strong, free solution, but the paid versions of Colasoft Capsa make it far easier and quicker to detect and locate network attacks. Graph-oriented displays and clever features make it simple to diagnose issues.

As such, the first port of call for detecting a DoS attack is the dashboard. The overview of your network will make spikes in traffic quickly noticeable. You should be able to notice an uptick in the global utilization graph, as well as in the total traffic by bytes:

tcp syn flood attack packet analyzer dashboard (click to enlarge)

However, spikes in network utilization can happen for many reasons, so it’s worth drilling down into the details. Capsa makes this very easy via its Summary tab, which will show packet size distribution, TCP conversation count, and TCP SYN/SYN-ACK sent.

In this case, there's an abnormal number of packets in the 128-255 byte range, but admins should look out for strange distributions under any heading, as attackers can specify a packet size to suit their needs. However, a more telling picture emerges when looking at TCP SYN Sent, which is almost 4000 times that of SYN-ACK:

tcp syn flood attack packet analysis (click to enlarge)

Clearly, there’s something wrong here, but it’s important to find the target of the SYN packets and their origin.

There are a couple of ways to do this, but the TCP Conversation tab is easiest. If we sort by TCP, we can see that the same 198-byte packet is being sent to our victim PC on port 80:

tcp syn flood attack packet analysis (click to enlarge)

After selecting one of these entries and decoding the packets, you may see the results below. There have been repeated SYN packets and the handshake isn’t performed normally in many cases:

tcp syn flood flow analysis (click to enlarge)

The attack becomes most clear when viewing IP Conversation in Capsa's Matrix view, which reveals thousands of packets sent to our victim PC from random IP addresses. This is due to the attacker's use of IP spoofing to conceal the traffic's origin. If the attacker isn't using IP spoofing, Capsa's Resolve Address feature will be able to resolve the IP address and provide us with its name. If they are, finding the source is likely far more trouble than it's worth:

tcp syn flood attack matrix (click to enlarge)

At this point, we can be certain that a SYN flood attack is taking place, but catching such attacks quickly really pays. Admins can use Capsa's Alarm Explorer to get an instant notification when unusual traffic is detected:

tcp syn flood attack alarm creation

A simple counter triggers a sound and an email when a certain number of SYN packets per second is detected. We set the counter to 100 to test the functionality, and Capsa immediately sent us an alert once we reached the configured threshold:

tcp syn flood attack alarm

Capsa also lets users set up their own pane in the dashboard, where you can display useful graphs like SYN sent vs SYN-ACK, packet distribution, and global utilization. This should make it possible to check for a SYN flood at a glance when experiencing network slowdowns:

tcp syn flood attack packet analysis dashboard

Alternatively, Capsa's Enterprise Edition lets admins start a security analysis profile, which contains a dedicated DoS attack tab. This will automatically list the victims of a SYN flood attack and display useful statistics like TCP SYN received and sent. It also allows quick access to TCP conversation details, letting admins decode quickly and verify attacks:

tcp syn flood attack tab (click to enlarge)

Together, these techniques should be more than enough to catch SYN floods as they start and prevent lengthy downtime.

Summary

This article explained how SYN Flood Attacks work and showed how to detect SYN Flood attacks using Colasoft Capsa. We saw different ways to identify abnormal traffic spikes within the network, how to drill into packets and find evidence of possible attacks. Finally we showed how Capsa can be configured to automatically detect SYN Flood Attacks and create alert notifications.


Advanced Network Protocol Analyzer Review: Colasoft Capsa Enterprise 11

Firewall.cx has covered Colasoft Capsa several times in the past, but its constant improvements make it well worth revisiting. Since our last review, the version has jumped from 7.6.1 to 11.1.2+, keeping a similar interface but adding plenty of new features. In fact, the change is significant enough to warrant a full re-evaluation rather than a simple comparison.

For the unfamiliar, Colasoft Capsa Enterprise is a widely respected network protocol analyzer that goes far beyond free packet sniffers like Wireshark. It gives users detailed information about packets, conversations, protocols, and more, while also tying in diagnosis and security tools to assess network health. It was named as a visionary in Gartner’s Magic Quadrant for Network Performance Monitoring and Diagnostics in 2018, which gives an idea of its power. Essentially, it’s a catch-all for professionals who want a deeper understanding of their network.

Installing Capsa Enterprise 11

Installation is one of Capsa Enterprise's strong points, requiring little to no additional configuration. The installer comes in at 84 MB, a very reasonable size that will be quick to download on most connections. From there, it's a simple case of pressing Next a few times.

However, Colasoft does give additional options during the process. There’s the standard ability to choose the location of the install, but also choices of a Full, Compact, or Custom install. It lets users remove parts of the network toolset as required to reduce clutter or any other issues. Naturally, Firewall.cx is looking at the full capabilities for the purpose of this review.

capsa enterprise v11 installation options

The entire process takes only a few minutes, with Capsa automatically installing the necessary drivers. Capsa does prompt a restart after completion, though it can be accessed before then to register a serial number. The software offers both an online option for product registration and an offline process that makes use of a license file. It’s a nice touch that should appease the small percentage of users without a connection.

Using Capsa Enterprise 11

After starting Capsa Enterprise for the first time, users are presented with a dashboard that lets them choose a network adapter, select an analysis profile, or load packet files for replay. Selecting an adapter reveals a graph of network usage over time to make it easier to discern the right one. A table above reveals the speed, number of packets sent, utilization, and IP address to make that process even easier.

capsa enterprise v11 protocol analyzer dashboard

 However, it’s after pressing the Start button that things get interesting. As data collection begins, Capsa starts to display it in a digestible way, revealing live graphs with global utilization, total traffic, top IP addresses, and top application protocols.

capsa enterprise v11 dashboard during capture

Users can customize this default screen to display most of the information Capsa collects, from diagnoses to HTTP requests, security alarms, DNS queries, and more. Each can be adjusted to update at an interval from 1 second to 1 hour, with a choice between area, line, pie, and bar charts. The interface isn’t the most modern we’ve seen, but it’s hard to ask for more in terms of functionality.

Like previous versions, Capsa Enterprise 11 also presents several tabs and sub-tabs that provide deeper insights. A summary tab gives a full statistical analysis of network traffic with detailed metadata. A diagnosis tab highlights issues your network is having on various layers, with logs for each fault or performance issue.

In fact, the diagnosis tab deserves extra attention as it can also detect security issues. It’s a particular help with ARP poisoning attacks due to counts of invalid ARP formats, ARP request storms, and ARP scans. After clicking on the alert, admins can see the originating IP and MAC address and investigate.

capsa enterprise v11 diagnosis tab

When clicking on the alert, Capsa also gives possible causes and resolutions, with the ability to set up an alarm in the future via sound or email. An alarm explorer sub-menu also gives an overview of historic triggers for later review. To reduce spam, you can adjust your alarms or filter specific errors out of the diagnosis system.

capsa enterprise v11 analysis profile setting

Naturally, this is a great help, and the ability to define such filters is present in every aspect of the software. You can filter by IP, MAC address, and issue type, as well as more complex filters. Admins can remove specific traffic either at capture or afterward. Under Packet Analysis, for example, you can reject specific protocols like HTTP, Broadcast, ARP, and Multicast.

capsa enterprise v11 packet analysis filters

If you filter data you’ve already captured, it gets even more powerful, letting you craft filters for MAC addresses in specific protocols, or use an advanced flowchart system to include certain time frames. The massive level of control makes it far easier to find what you’re looking for.

After capture is complete, you can also hit the Conversation Filter button, a powerful tool that lets you accept/reject data in the IP, TCP, and UDP Conversations tabs. Again, it takes advantage of a node-based editor plus AND/OR/NOT operators for easy creation. You can even export the filters for use on a different PC.

capsa enterprise v11 adding conversation filter

When you begin a capture with conversation filters active, Capsa will deliver a pop-up notification. This is a small but very nice touch that should prevent users from wondering why only certain protocols or locations are showing.

capsa enterprise v11 packet capture filter us traffic

Once enabled, the filter will begin adjusting the data in the tab of the selected conversation type. Admins can then analyze at will, with the ability to filter by specific websites and look at detailed packet information.

capsa enterprise v11 ip conversation tab

The packet analysis window gives access to further filters, including address, port, protocol, size, pattern, time, and value. You can also hit Ctrl+F to search for specific strings in ASCII, HEX, and UTF, with the ability to choose between three layout options.

capsa enterprise v11 packet capture filter analysis

However, though most of your time will be spent in Capsa’s various details, its toolbar is worth a mention. Again, there’s a tabbed interface, the default being Analysis. Here you’ll see buttons to stop and start capture, view node groups, set alarms for certain diagnoses, set filters, and customize the UI.

capsa enterprise v11 dashboard v2

However, most admins will find themselves glancing at it for its pps, bps, and utilisation statistics. These update every second, meaning you can get a quick overview no matter what screen you're on. It combines with a clever grid-based display for the packet buffer, which can be quickly exported for use in other software or for replays.

Another important section is the Tools tab, which gives access to Capsa’s Base64 Codec, Ping, Packet Player, Packet Builder, and MAC Scanner applications. These can also be accessed via the file menu in the top left but having them for quick access is a nice touch.

capsa enterprise v11 tools

Finally, a Views tab gives very useful and quick access to a number of display modes. These enable panels like the alarm view and let you switch between important options like IP/MAC address only or name only modes.

capsa enterprise v11 views tab

In general, Colasoft has done a great job of packing a lot of information into one application while keeping it customizable. However, there are some areas where it really shines, and its Matrix tab is one of those. With a single click, you can get a visual overview of many of the conversations on a network, with Top 100 MAC, MAC Node, IP Conversation, and IP Node views:

capsa enterprise v11 top 100 mac matrix

Firewall.cx has praised this feature before and it remains a strong highlight of the software. Admins are able to move the lines of the diagrams around at will for clarity, click on each address to view the related packets, and quickly make filters via a right click interface.

capsa enterprise v11 matrix

The information above is from a single PC, so you can imagine how useful it gets once more devices are introduced. You can select individual IP addresses in the node explorer on the left-hand side to get a quick overview of their IP and MAC conversations, with the ability to customize the Matrix for a higher maximum node number, traffic types, and value.

capsa enterprise v11 modify matrix

Thanks to its v7.8 update, Capsa also has support for detailed VoIP Analysis. Users can configure RTP via the System>Decoder menu, with support for multiple sources and destination addresses, encoding types, and ports.

capsa enterprise v11 rtp system decoder

Once everything is configured correctly, admins will begin to see the VoIP Call tab populate with useful information. A summary tab shows the MOS_A/V distribution with ratings between Good (4.24-5.00) and Bad (0.00-3.59). A status column shows success, failure, and rejection, and a diagnosis tab keeps count of setup times, bandwidth rejects, and more. While our test environment didn't contain VoIP traffic, we still included the screenshot below to help give readers the full picture.

capsa enterprise v11 voip traffic analysis

In addition, a window below keeps track of packets, bytes, utilization, and average throughput, as well as various statistics. Finally, the Call tab lists numbers and endpoints, alongside their jitter, packet loss, codec, and more. Like most aspects of Capsa, this data can be exported or turned into a custom report from within the software.

Capsa Enterprise 11 creates a number of these reports by default. A global report gives an overview of total traffic with MAC address counts, protocol counts, top MAC/IP addresses, and more. There are also separate auto-generated reports for VoIP, Conversation, Top Traffic, Port, and Packet.

capsa enterprise v11 reporting capabilities

You can customize these with a logo and author name, but they're missing many of the features you'd see in advanced reporting software. There's no option for a pie chart, for example, though one can be created via the node explorer and saved as an image.

Conclusion

Capsa Enterprise 11 is a testament to Colasoft's consistent improvements over the years. It has very few compromises, refusing to skimp on features while still maintaining ease of use. Capsa comes in two flavors – an Enterprise version and a Standard version – making it an extremely affordable and robust toolset that can reduce downtime and make troubleshooting an enjoyable process.

Though its visual design and report features look somewhat dated, the layout is incredibly effective. Admins will spend much of their time in the matrix view but can also make use of very specific filters to deliver only the data they want. It got the Firewall.cx seal of approval last time it was reviewed, and we feel comfortable giving it again.


Detect Brute-Force Attacks with nChronos Network Security Forensic Analysis Tool

Brute-force attacks are a commonly known attack method by which hackers try to gain access to restricted accounts and data using an exhaustive list/database of usernames and passwords. Brute-force attacks can be used, in theory, against almost any encrypted data.

When it comes to user accounts (web-based or system-based), the first sign of a brute-force attack is multiple attempts to log in to an account, which allows us to detect a brute-force attack by analyzing packets that contain such events. We'll show you how Colasoft's nChronos can be used to identify brute-force attacks and obtain valuable information that can help discover the identity of the attacker, plus more.

For an attacker to obtain access to a user account on a website via brute force, he is required to use the site's login page, causing an alarming number of login attempts from his IP address. nChronos is capable of capturing such events and triggering a transaction alarm, warning system administrators of brute-force attacks and showing when the triggering condition was met.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different types of network attacks, plus many more great security articles.

Creating A Transaction Analysis & Alarm In nChronos

First, we need to create a transaction analysis to specify the pattern/behavior we are interested in monitoring:

From the nChronos main page, first select the server/IP address we want to monitor from the Server Explorer section.

Next, from the Link Properties, go to the Application section and then the Analysis Settings as shown below:

colasoft-nchronos-brute-force-attack-detection-2a

Figure 1. Creating a Transaction Analysis in nChronos (click to enlarge)

Now click the New Web Application button (the second green button at the top) to set up a web application, enter a Name and HTTP Hostname, then check the box labeled Enable Transaction Analysis and add a transaction with a URL subpath, e.g. "/login.html".

At this point we’ve created the necessary Transaction Analysis. All that’s required now is to create the Transaction Alarm.

To create the alarm, click Transaction Alarms in the left window, enter the basic information, choose the Transaction Statistics parameter under Type, and then set a triggering condition as needed – for example, 100 times in 1 minute. This means the alarm will activate as soon as there are 100 or more logins within a minute:

colasoft-nchronos-brute-force-attack-detection-3a
Figure 2. Creating a Transaction Alarm (click to enlarge)

Finally, you can choose Send to email box or Send to SYSLOG to send the alarm notification. Once complete, the transaction alarm for detecting brute-force attacks is set. When the alarm's triggering condition is met, an email notification is sent.

Note that this alarm triggering condition does not examine the number of logins per IP address, which means the alarm condition will be met regardless of whether the 100 login attempts/min come from one or several individual IP addresses. This can be manually changed in the Transaction Analysis so that it shows the login attempts of each individual IP address.
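The per-IP breakdown can also be approximated outside nChronos. Assuming a conventional web-server access log (client IP in the first field – an assumption modelled on common log formats, not nChronos output), the sketch below counts requests to the /login.html transaction per source address and flags anything above a hypothetical threshold:

```shell
# Tiny sample access log; the layout (IP first, request in quotes) is
# an illustrative assumption modelled on common web-server logs.
cat > access.log <<'EOF'
203.0.113.5 - - [01/May/2024:10:00:01] "POST /login.html" 200 512
203.0.113.5 - - [01/May/2024:10:00:02] "POST /login.html" 200 512
198.51.100.9 - - [01/May/2024:10:00:03] "GET /index.html" 200 1024
203.0.113.5 - - [01/May/2024:10:00:04] "POST /login.html" 200 512
EOF

# Count /login.html hits per client IP; flag IPs above the threshold.
awk -v threshold=2 '/\/login\.html/ { count[$1]++ }
     END { for (ip in count)
             if (count[ip] > threshold) print ip, count[ip], "SUSPICIOUS" }' access.log
```

In production the threshold would be tuned per minute, just like the nChronos alarm's triggering condition.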

Below is a sample output from an alarm triggered:

colasoft-nchronos-brute-force-attack-detection-3a
Figure 3. nChronos Brute-Force alarm triggered – Overall report (click to enlarge)

And below we see the same alarm with a per-IP address analysis:

colasoft-nchronos-brute-force-attack-detection-4a

Figure 4. nChronos Brute-Force alarm triggered – IP breakdown (click to enlarge)

This article showed how nChronos can be used to successfully detect a brute-force attack against any node on a network, or even websites, and at the same time alert system administrators or IT managers of the event.


Introducing Colasoft Unified Performance Management

Colasoft Unified Performance Management (UPM) is a business-oriented network performance management system that analyzes network performance, quality, fault, and security issues on a per-business basis. By providing visual analysis of business performance, Colasoft UPM helps users build proactive, business-oriented network operations capability, ensure that businesses keep running stably, and enhance troubleshooting efficiency.

Colasoft UPM consists of two parts: the nChronos Server as a frontend device and the UPM Center as the analysis center.

Frontend devices are deployed at key nodes of the communication links of business systems, where they capture business communication data via switch port-mirroring or a network TAP. The frontend collects and analyzes performance index parameters and application alarm information in real time, and uploads them to the UPM Center via the management interface for overall analysis.

Visit our Network Protocol Analyzer Section for high-quality technical articles covering Wireshark topics, detecting and creating different type of network attacks plus many more great security articles.

UPM Center is deployed at the headquarters to collect the business performance indexes and alarm information uploaded by frontend devices, and display the analysis results.

The start page of Colasoft UPM is shown below:

Figure 1. Unified Performance Management Homepage (click image to enlarge)

This page shows business and alarm statistics over a selected period of time.

Hovering the mouse over a business sensor (lower left area), we can see there are several options such as “Analyze”, “Query”, “Edit” and “Delete”:

Figure 2. Adding or analyzing a Business logic sensor (click image to enlarge)

We can click “Analyze” to check the business logic diagram and detailed alarm information.

Figure 3. Analyzing a business logic and checking for service alarms (click to enlarge)

Click “Query” to check the index parameters and analyze network performance:

Figure 4. Analyzing performance of a specific application or service (click to enlarge)

We can also click “Intelligent Application” on the homepage to review the relationship of the nodes in the business system:

Figure 5. The Intelligent Application section reveals the relationship of nodes in the business system

In short, Colasoft UPM helps users easily manage network performance by providing visual analysis based on business, which greatly enhances troubleshooting efficiency and reduces human resource cost.

  • Hits: 6102

How to Detect P2P (peer-to-peer) File Sharing, Torrent Traffic & Users with a Network Analyzer

Peer-to-Peer file sharing traffic has become a very large problem for many organizations, as users engage in (most times illegal) file sharing that not only consumes valuable bandwidth, but also places the organization in danger: high-risk connections are made from the Internet to the internal network, and malware, pirated or copyrighted material or pornography is downloaded onto the organization’s systems. In fact, torrent traffic is responsible for over 29% of Internet traffic in North America, indicating how big the problem is.

To help network professionals in the P2P battle, we’ll show how Network Analyzers such as Colasoft Capsa can be used to identify users or IP addresses involved in the file-sharing process, allowing IT departments to take the necessary actions to block users and similar activities.

While all network analyzers capture and display packets, very few have the ability to display P2P traffic or users creating multiple connections with remote peers - allowing network administrators to quickly and correctly identify P2P activity.


One of the main traffic characteristics of P2P host traffic is that they create many connections to and from hosts on the Internet, in order to download from multiple sources or upload to multiple destinations.

Apart from using the correct tools, network administrators and engineers must also ensure they capture traffic at strategic areas within their network. This means that the network analyzer must be placed at the point where all network traffic, to and from the Internet, passes through it.

The two most common places to capture network traffic are at the router/firewall connecting the organization to the Internet, or at the main switch to which the router/firewall device connects. To learn how to configure these devices and enable the network analyzer to capture packets, visit the following articles:

Once capturing commences, data will start being displayed in Capsa, and thanks to the Matrix display feature, we can quickly identify hosts that have multiple conversations or connections with peer hosts on the Internet.

By selecting the Matrix tab and hovering the mouse over a host of interest (this also automatically selects the host), Capsa will highlight all conversations with other IP addresses made by the selected host, while at the same time providing additional useful information such as bytes sent and received by the host, the number of peer connections (extremely useful!) and more:

Figure 1. Using the Capsa Matrix feature to highlight conversations of a specific host suspected of P2P traffic

In most cases, an excessive amount of peer connections means that there is a P2P application running, generating all the displayed traffic and connections.
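The heuristic described above — flagging hosts that talk to an unusually large set of distinct peers — can be sketched in plain Python. This is a hypothetical illustration, not Capsa functionality; the threshold value is an assumption:

```python
from collections import defaultdict

P2P_PEER_THRESHOLD = 50  # illustrative cut-off, not a Capsa setting

def suspected_p2p_hosts(conversations, threshold=P2P_PEER_THRESHOLD):
    """conversations: iterable of (local_host, remote_host) pairs.

    Flags local hosts whose number of distinct remote peers exceeds
    the threshold -- the classic signature of a P2P client downloading
    from or uploading to many sources at once.
    """
    peers = defaultdict(set)
    for local, remote in conversations:
        peers[local].add(remote)
    return {host for host, p in peers.items() if len(p) >= threshold}

# One host with 80 distinct peers is flagged; a host with one peer is not.
convs = [("192.168.1.10", f"203.0.113.{i}") for i in range(80)]
convs += [("192.168.1.20", "198.51.100.7")]
assert suspected_p2p_hosts(convs) == {"192.168.1.10"}
```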

Next, to drill into the host's traffic, simply click on the Protocol tab to automatically show the amount of traffic generated by each protocol. Here we find the BitTorrent & eMule protocols listed:


Figure 2. Identifying P2P Traffic and associated hosts in Capsa Network Analyzer

The IP Endpoint tab below provides additional useful information such as IP address, bytes of traffic associated with the host, number of packets, total amount of bytes and more.

By double-clicking on the host of interest (under IP EndPoint), Capsa will open a separate window and display all data captured for the subject host, allowing extensive in-depth analysis of packets:


Figure 3. Diving into a host’s captured packets with the help of Capsa Network Analyzer

Multiple UDP conversations through the same port indicate that there may be a P2P download or upload in progress.

Further inspection of packet information such as info hash, port, remote peer(s), etc. in ASCII decoding mode will confirm the captured traffic is indeed P2P traffic.
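A simple marker-based payload check of the kind described can be sketched as follows. This is illustrative only; the byte patterns are well-known BitTorrent handshake and DHT (bencoded KRPC) prefixes, and the function name is hypothetical:

```python
# Common BitTorrent byte patterns:
#  - the TCP handshake string,
#  - the "info_hash" key seen in tracker/ASCII decodes,
#  - the bencoded prefix of a DHT (KRPC) query.
BT_MARKERS = (b"BitTorrent protocol", b"info_hash", b"d1:ad2:id20:")

def looks_like_bittorrent(payload: bytes) -> bool:
    """Return True if a raw packet payload contains a known marker."""
    return any(marker in payload for marker in BT_MARKERS)

# A TCP handshake and a DHT ping both match; plain HTTP does not.
assert looks_like_bittorrent(b"\x13BitTorrent protocol" + b"\x00" * 8)
assert looks_like_bittorrent(b"d1:ad2:id20:" + b"A" * 20 + b"e1:q4:ping")
assert not looks_like_bittorrent(b"GET / HTTP/1.1\r\nHost: example.com\r\n")
```

Signature matching of this kind is what lets an analyzer label a conversation as BitTorrent even when the ports used are non-standard.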

This article demonstrated how the Capsa network analyzer can be used to detect Peer-to-Peer (P2P) traffic in a network environment. We examined the Matrix feature of Capsa, plus its ability to automatically identify P2P/Torrent traffic, making it easier for network administrators to track down P2P clients within their organization.

  • Hits: 37017

Improve Network Analysis Efficiency with Colasoft's Capsa New Conversation Colorization Feature

Troubleshooting network problems can be a very difficult and challenging task. While most IT engineers use a network analyzer to help solve network problems, when analyzing hundreds or thousands of packets it can become very hard to locate and research conversations between hosts. Colasoft’s Capsa v8 now introduces a new feature that allows us to highlight (colorize) relevant conversations in the network based on their MAC addresses, IP addresses, or TCP/UDP conversations.

This great new feature allows IT engineers to quickly find the packets related to the conversations they want to analyze, using just a few clicks.


As shown in the screenshot below, users can colorize any Conversation in the MAC Conversation View, IP Conversation View, TCP Conversation View and UDP Conversation View. Packets related to that Conversation will be colorized automatically with the same color.

Take a TCP conversation, for example: choose one conversation, right-click it and choose "Select Conversation Color" in the pop-up menu:


Figure 1. Selecting a Conversation Color in Capsa v8.0

Next, select the color you wish to use to highlight the specific conversation:


Figure 2. Selecting a color

Once the color has been selected, Capsa will automatically find and highlight all related packets of this conversation using the same background color:


Figure 3. Colasoft Capsa automatically identifies and highlights the conversation

Colorizing packets strengthens the visual link between a conversation and its packets, which greatly improves analysis efficiency.

  • Hits: 12510

How To Detect ARP Attacks & ARP Flooding With Colasoft Capsa Network Analyzer

ARP attacks and ARP flooding are common problems both small and large networks are faced with. ARP attacks target specific hosts by using their MAC address and responding on their behalf, while at the same time flooding the network with ARP requests. ARP attacks are frequently used for 'man-in-the-middle' attacks; they pose serious security threats, including loss of confidential information, and should therefore be quickly identified and mitigated.

During ARP attacks, users usually experience slow communication on the network and especially when communicating with the host that is being targeted by the attack.

In this article, we will show you how to detect ARP attacks and ARP flooding using a network analyzer such as Colasoft Capsa.


Colasoft Capsa has one great advantage – the ability to identify and present suspicious ARP attacks without any additional processing, which makes identifying, mitigating and troubleshooting much easier.

The Diagnosis tab provides real-time information and is extremely handy in identifying potential threats, as shown in the screenshot below:


Figure 1. ARP Scan and ARP Storm detected by Capsa's Diagnosis section.

Under the Diagnosis tab, users can click on the Events area and select any suspicious events. When these events are selected, analysis of them (MAC address information in our case) will be displayed on the right as shown above.

In addition to the above analysis, Capsa also provides a dedicated ARP Attack tab, which is used to verify the offending hosts and type of attack as shown below:


Figure 2. ARP Attack tab verifies the security threat.

We can extend our investigation with the use of the Protocol tab, which allows us to drill into the ARP protocol and see which hosts' MAC addresses are involved in heavy ARP protocol traffic:


Figure 3. Drilling into ARP attacks.

Finally, double-clicking on a MAC address in the ARP Protocol section will show all packets related to the selected MAC address, allowing us to drill down to the useful information contained in each ARP packet.


Figure 4. Drilling-down into the ARP attack packets.

By selecting the Source IP in the lower window of the selected packet, we can see the fake IP address 0.136.136.16. This means that any host on the network responding to this packet will be directed to an incorrect, non-existent IP address, indicating an ARP attack or flood.
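The underlying flood-detection idea — flagging senders that generate abnormal volumes of ARP frames per second — can be sketched in a few lines. This is hypothetical Python, not Capsa's implementation; the packets-per-second threshold is an assumption:

```python
from collections import defaultdict

def arp_storm_sources(arp_packets, pps_threshold=100):
    """arp_packets: iterable of (timestamp_seconds, sender_mac) tuples.

    Buckets ARP frames into one-second windows per sender MAC and
    returns the MACs that exceed the per-second threshold in any window.
    """
    buckets = defaultdict(int)
    for ts, mac in arp_packets:
        buckets[(int(ts), mac)] += 1
    return {mac for (_, mac), count in buckets.items() if count > pps_threshold}

# 500 ARP frames in under a second from one MAC is a storm;
# 5 frames spread over 5 seconds from another MAC is normal.
storm = [(0.001 * i, "aa:bb:cc:dd:ee:ff") for i in range(500)]
normal = [(float(i), "11:22:33:44:55:66") for i in range(5)]
assert arp_storm_sources(storm + normal) == {"aa:bb:cc:dd:ee:ff"}
```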

If you're a network administrator, engineer or IT manager, we strongly suggest you try out Colasoft Capsa today and see how easily you can troubleshoot and resolve network problems and security threats such as ARP attacks and ARP flooding.

  • Hits: 16534

How to Reconstruct HTTP Packets/Data & Monitor HTTP User Activity with nChronos

HTTP reconstruction is an advanced network security feature offered by nChronos version 4.3.0 and later. nChronos is a Network Forensic Analysis application that captures packets/data around the clock. With HTTP reconstruction, network security engineers and IT managers can uncover suspicious user web activity and check user web history to examine specific HTTP incidents or HTTP data transferred in/out of the corporate network.

Now let's take a look at how to use this new feature with Colasoft nChronos.


The HTTP reconstruction feature can be easily selected from the Link Analysis area. We first need to carefully select the time range to be examined, e.g. the 9th of July between 13:41 and 13:49:15. Once the time range is selected, we can move to the bottom window and select the IP Address tab to choose the IP address of interest:

Figure 1. Selecting our Time-Range and IP Address of interest from Link Analysis

nChronos further allows us to filter internal and external IP addresses, to help quickly identify the IP address of interest. We selected External IP and then address 173.205.14.226.

All that's required at this point is to right-click on the selected IP address and choose HTTP Packet Reconstruction from the pop-up menu. Once HTTP Packet Reconstruction is selected, a new tab will open and the reconstruction process will begin as shown below:


Figure 2. nChronos HTTP Reconstruction feature in progress.

A progress bar at the top of the window shows the progress of the HTTP Reconstruction. Users are able to cancel the process anytime they wish and once the HTTP Reconstruction is complete, the progress bar disappears.

The screenshot below shows the end result once the HTTP Reconstruction has successfully completed:

Figure 3. The HTTP Reconstruction process completed

As shown in the above screenshot, nChronos fully displays the reconstructed page in an easy-to-understand manner. Furthermore, all HTTP requests and commands are included to ensure complete visibility of the HTTP protocol commands sent to the remote web server, along with the user's browser and all other HTTP parameters.
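Conceptually, HTTP reconstruction starts from reassembled TCP payloads and splits each HTTP message back into its status line, headers and body. The following is a minimal sketch of that parsing step in hypothetical Python — far simpler than nChronos's actual feature, which also rebuilds the rendered page:

```python
def reconstruct_http(raw: bytes):
    """Split a captured HTTP message into (status line, headers, body).

    raw: the reassembled TCP payload of one HTTP request or response.
    Headers are returned as a dict with lower-cased names.
    """
    head, _, body = raw.partition(b"\r\n\r\n")   # header block ends at blank line
    lines = head.decode("iso-8859-1").split("\r\n")
    status, headers = lines[0], {}
    for line in lines[1:]:
        name, _, value = line.partition(":")
        headers[name.strip().lower()] = value.strip()
    return status, headers, body

raw = (b"HTTP/1.1 200 OK\r\nContent-Type: text/html\r\n"
       b"Content-Length: 13\r\n\r\n<html></html>")
status, headers, body = reconstruct_http(raw)
assert status == "HTTP/1.1 200 OK"
assert headers["content-type"] == "text/html"
assert body == b"<html></html>"
```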

nChronos's HTTP reconstruction feature can prove to be an extremely important security tool for network engineers, administrators and IT Managers who need to keep an eye on incoming/outgoing web traffic. This new feature surpasses web proxy reporting and other similar tools as it is able to completely reconstruct the webpage visited, data exchanged between the server and client, plus help identify/verify security issues with hijacked websites.

  • Hits: 13199

How to Use Multi-Segment Analysis to Troubleshoot Network Delay, Packet Loss and Retransmissions with Colasoft nChronos

Troubleshooting network problems can be a very intensive and challenging process. Intermittent network problems are even more difficult to troubleshoot as the problem occurs at random times with a random duration, making it very hard to capture the necessary information, perform troubleshooting, identify and resolve the network problem.
 
While Network Analyzers help reveal problems in a network data flow, they are usually limited to examining only one network link at a time, seriously limiting the ability to examine multiple network segments continuously.

nChronos is equipped with a neat feature called multi-segment analysis, providing an easy way for IT network engineers and administrators to compare the performance between different links. IT network engineers can improve network performance by enhancing the capacity of the link according to the comparison.

Let’s take a look how we can use Colasoft nChronos’s multi-segment analysis feature to help us detect and deal effectively with our network problems.


Multi-segment analysis provides concurrent analysis for conversations across different links, from which we can extract valuable information on packet loss, network delay, data retransmission and more.

To begin, we open the nChronos Console and select a portion of the trend chart in the Link Analysis window; then, in the Summary window below, we right-click one conversation under the IP Conversation or TCP Conversation tab. From the pop-up menu, select Multi-Segment Analysis to open the Multi-Segment Analysis window:

Figure 1. Launching Multi-Segment Analysis in nChronos

In the Multi-Segment Analysis window, select a minimum of two and maximum of three links, then choose the stream of interest for multi-segment analysis:

Figure 2. Selecting a stream for multi-segment analysis in nChronos

When choosing a conversation for multi-segment analysis, if any of the other selected network links carries the same conversation, it will be selected and highlighted automatically. In our example, the second selected link does not carry the same conversation as the primary selection, and therefore there is no data to display in the lower section of the analysis window.

Next, click Start to Analyze to open the Multi-Segment Detail Analysis window, as shown in the figure below:

Figure 3. Performing Multi-Segment analysis in nChronos

The Multi-Segment Detail Analysis section on the left provides a plethora of parameter statistics (analyzed below), a time sequence chart, and there’s a packet decoding pane on the lower right section of the window.

The left pane provides statistics on uplink and downlink packet loss, uplink and downlink network delay, uplink and downlink retransmission, uplink and downlink TCP flags, and much more.

The time sequence chart located at the top, graphically displays the packet transmission between the network links, with the conversation time displayed on the horizontal axis.

When you click on a packet on the time sequence chart, the packet decoding pane will display the detailed decoding information for that packet.
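The per-segment statistics described above boil down to matching the same packets at two capture points and comparing timestamps. A minimal sketch, in hypothetical Python (packet identifiers and capture timestamps are assumed inputs, not an nChronos API):

```python
def segment_stats(link_a, link_b):
    """Compare one conversation captured at two points on its path.

    link_a, link_b: dicts mapping a per-packet identifier (e.g. the
    IP Identification field) to the capture timestamp at each point.
    Returns (per-packet transit delays, ids lost between the links).
    """
    delays = {pid: link_b[pid] - link_a[pid]
              for pid in link_a if pid in link_b}
    lost = set(link_a) - set(link_b)   # seen upstream, never seen downstream
    return delays, lost

a = {1: 0.000, 2: 0.010, 3: 0.020}
b = {1: 0.004, 3: 0.029}               # packet 2 never reached the second link
delays, lost = segment_stats(a, b)
assert lost == {2}                      # packet loss between the two segments
assert round(delays[1], 3) == 0.004     # per-packet network delay
```

Retransmissions can be spotted the same way, as identifiers that appear more than once at a single capture point.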

Using the Multi-Segment Analysis feature, Colasoft’s nChronos allows us to quickly compare the performance between two or more network links.

  • Hits: 16660

How to Detect Routing Loops and Physical Loops with a Network Analyzer

When working with medium to large scale networks, IT departments are often faced with network loops and broadcast storms caused by user error, faulty network devices or incorrect configuration of network equipment. Network loops and broadcast storms are capable of causing major network disruptions and must therefore be dealt with very quickly.

There are two kinds of network loops and these are routing loops and physical loops.

Routing loops are caused by the incorrect configuration of routing protocols where data packets sent between hosts of different networks, are caught in an endless loop travelling between network routers with incorrect route entries.

A Physical loop is caused by a loop link between devices. A common example is two switches with two active Ethernet links between them. Broadcast packets exiting the links on one switch are replicated and sent back from the other switch. This is also known as a broadcast storm.

Both types of loops are capable of causing major network outages, wasting valuable bandwidth and disrupting network communications.

We will show you how to detect routing loops and physical loops with a network analyzer such as Colasoft Capsa or Wireshark.

Note: To capture packets on a port that's connected to a Cisco Catalyst switch, users can also read our Configuring SPAN On Cisco Catalyst Switches - Monitor & Capture Network Traffic/Packets

If there are routing loops or physical loops in the network, Capsa will immediately report them in the Diagnosis tab as shown below. This makes troubleshooting easier for network managers and administrators:

Figure 1. Capsa quickly detects and displays Routing and Physical Loops

Further examination of Capsa’s findings is possible by simply clicking on each detected problem. This allows us to further check the characteristics of the related packets and then decide what action must be taken to rectify the problem.


Drilling Into Our Captured Information

Let’s take a routing loop, for example. First, find the related conversation using the Filter (red arrow) in the MAC Conversation tab. MAC addresses can be obtained easily from the notices given in the Diagnosis tab:


Figure 2. Obtaining more information on a Routing Loop problem

Next, double-click the conversation to load all related packets and additional information. Click on Identifier to view the values of all packets under the Decode column, which in our case are all the same. This effectively means that the packets captured in our example are the same packet, continuously transiting our network because it is caught in a loop. For example, Router-A might be sending it to Router-B, which in turn sends it back to Router-A.

Figure 3. Decoding packets caught in a routing loop

Now click on the Time To Live section below, and you’ll see the Decode value reduces gradually. This is because the TTL value is decreased by 1 each time the packet transits a routing device. When the TTL reaches zero, the packet is discarded, which prevents packets from travelling indefinitely in the case of a routing loop in the network. More information on the ICMP protocol can be found in our ICMP Protocol page:

Figure 4. Routing loop causing ICMP TTL to decrease

The method used to analyze physical loops is almost identical, but the TTL values of all looped packets remain the same, instead of decreasing as we previously saw. Because the packet is trapped in our local network, it doesn't traverse a router, and therefore the TTL does not change.
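This TTL-based distinction between the two loop types can be sketched as follows (a hypothetical Python illustration, not analyzer code):

```python
def classify_loop(ttls):
    """Classify a loop from the TTL values of successive captures of
    the same packet (matched e.g. by an identical IP Identifier field).
    """
    if len(set(ttls)) == 1:
        # TTL never changes: the packet loops at Layer 2, no router hop.
        return "physical loop"
    if all(a > b for a, b in zip(ttls, ttls[1:])):
        # TTL decremented on every appearance: each pass crosses a router.
        return "routing loop"
    return "inconclusive"

assert classify_loop([64, 63, 62, 61]) == "routing loop"
assert classify_loop([128, 128, 128]) == "physical loop"
```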

Below we see a DNS Query packet that is trapped in a network loop:

Figure 5. Discovering Network loops and why their TTL values do not decrease

Advanced network analyzers allow us to quickly detect serious network problems that can cause network outages, packet loss, packet flooding and more.

  • Hits: 76254

3CX Unified Communications New Web Client IP Phone, Web Meetings, Click-to-Call & More with V15.5

The developers of the popular software PBX, 3CX, have announced another major update to their unified communications solution! The latest release, 3CX v15.5, makes the phone system faster, more secure and more reliable with a number of improvements and brand new features.

 Notably, v15.5 brings with it a totally new concept for the PBX system, a completely web-based softphone client that can be launched straight from any open-standards browser. The web client has an attractive, modern interface which makes it incredibly user-friendly, allowing tasks such as call transferring, deskphone control and more to be carried out in a single click.

3CX’s Web-Client provides leading features packed in an easy-to-use GUI

Unified Communications IP PBX That Can Be Deployed Anywhere

Furthering their commitment to providing an easy to install and manage PBX, 3CX has also made deployment easier and more flexible. 3CX can be deployed on MiniPC appliances of leading brands such as Intel, Zotac, Shuttle and Gigabyte meaning that businesses on a budget can ensure enterprise level communications at a fraction of the price.

Additionally, 3CX has ensured more freedom of choice when it comes to deploying the PBX in the cloud, with more supported hosters, such as 1&1, and an easy-to-use 8-step wizard that allows customers and resellers to have a fully configured PBX up and running in minutes.

IP PBX With Integrated Web Conferencing

The brand new web client includes integrated web conferencing completely free of charge without any additional licensing or administration. Video conferences are held directly from the browser with no additional downloads or plugins, and most importantly, this applies to remote participants as well!

3CX: IP PBX Web Client with integrated Web Conferencing Free of Charge!

More Reliable, Easier to Control Your Deskphone or Smartphone

By implementing the uaCSTA standard for deskphones, 3CX has significantly improved remote control of phones. This has ensured more reliable control of IP phones regardless of the location of the extension or whether or not the PBX is being run on-premise or in the cloud. Moreover, the 3CX smartphone clients for Android and iOS can now also be remote controlled.

3CX’s Click-to-Call Feature from any Web page or CRM

Additional Improvements & Features Include:

  • Click2Call Chrome Extension to dial from any web page or CRM
  • Integrated Hotel Module
  • Support for Google Firebase PUSH
  • Achieve PCI compliance in financial environments
  • Hits: 14992

3CX’s Unified Communications IP PBX enhanced to include New Web Client, Rich CTI/IP Phone Control, Free Hotel Module & Fax over G.711 – Try it Today for Free!

3CX has done it again! Working on its multi-platform, core v15 architecture, the UC solution developers have released the latest version of its PBX in Alpha, v15.5. The new build includes some incredibly useful features including a web client - a completely new concept for this product.

3CX has made a big effort to ensure its IP PBX product remains one of the Best Free UC IP PBX systems available!

The new 3CX Intuitive web client that leaves competitors miles behind

User-Friendly & Feature-Rich

The 3CX Web Client, built on the latest web technology (Angular 4), currently works in conjunction with the softphone client for calls, and allows users to communicate and collaborate straight from the browser. The modern, intuitive interface combines key 3CX features including video conferencing, chat, switchboard and more, improving overall usability.

Improved CTI/IP Phone Control


Desktop call control has been massively improved. Even if your phone system is running in the cloud, supported phones can be reliably controlled from the desktop client. This improvement follows the switch to uaCSTA technology. Moreover, a new Click2Call Chrome extension makes communication seamless across the client and browser.

Reintroduction Of The Hotel Module Into 3CX

The Hotel Module has been restored into 3CX and is now included free of charge for all PRO/Enterprise licenses - great news for those in the hospitality industry.

Additionally, 3CX now supports Google’s Firebase push, and fax over G.711 has been added amongst various other improvements and features.

  • Hits: 9782

How to Get a Free Fully Functional Cloud-Based Unified Communications PBX with Free Trial Hosting on Google Cloud, Amazon or OVH!

Crazy as it might sound, there is one Unified Communications provider giving out free fully functional cloud-based PBX systems without obligation from its users/customers.

3CX, a leader in Unified Communications, has just announced the availability of its new PBX Express online wizard, designed to easily deploy a PBX in your own cloud account.

3CX’s Advanced Unified Communications features were recently covered in our article The Ultimate Guide to IP PBX and VoIP Systems - The Best Free IP PBXs For Businesses. In the article we examined the common components of a modern Unified Communications platform and how they are all configured to work together enabling free real-time communications and presence for its users no matter where they are in the world.

Now free cloud-based services are added to the list, and the features are second to none: completely Free Trial Hosting, a Domain Name, associated SSL certificates and much more!

3CX’s intuitive dashboard allows quick & easy administration with zero prior experience!

Here’s what the Free Unified Communications PBX includes:

  • Free fully-functional Unified Communications PBX
  • Up to 8 simultaneous calls
  • Ability to make/receive calls on your SIP phones or mobile devices via IP
  • Full Support for iPhone and Android devices
  • Full support for iPads and Tablet devices
  • Presence Services (See who’s online, availability, status etc.)
  • Instant Messaging
  • Video conferencing
  • Desktop Sharing
  • Zero Maintenance – Everything is taken care of for you!
  • Free Domain Name selection (over 20 countries to select from!)
  • Free Trial Hosting on Google Cloud – Amazon Web Services or OVH!
  • SSL Certificate
  • Fast deployment- no previous experience required
  • Super-easy administration
  • …and much more!

3CX’s Unified Communications PBX system is an advanced, flexible PBX that can be run locally in your office at no cost, which is why thousands of companies are switching to 3CX. With the choice of an on-premises solution that supports Windows and Linux operating systems, and now the free cloud-based hosting, it has become the obvious choice for companies seeking to move to an advanced Unified Communications system while dramatically cutting telecommunication costs.

Thanks to its support for any SIP-based IP phone and mobile device (iPhone, Android, iPad, Tablet etc.), the 3CX IP PBX has quickly become the No.1 preferred solution.

3CX’s commitment to its customers and product is outstanding with regular updates covering its main UC PBX product but also mobile device clients - ensuring customers are not left with long outstanding problems or bugs. 3CX recently announced a number of bug fixes and enhancements for the 3CX Client for Android but also the 3CX Client for Mac confirming once again that it’s determined not to leave customers in the dark and continually improve its services and product’s quality.

Read The Ultimate Guide to IP PBX and VoIP Systems - The Best Free IP PBXs For Businesses article for more information on the 3CX UC solution.

  • Hits: 10166

3CX Unified Communication Leading Free IP PBX Brings Linux Edition On-Par with Windows Edition

3CX, developer of the software-based unified communications solution, has announced the release of 3CX V15 Service Pack 5, which brings the final Linux version of the PBX. The update achieves complete feature parity with the popular Windows version of the software. The company also reports that SP5 further automates admin management tasks and makes hosting the system in the cloud easier with leading cloud providers.

3CX Unified Communication Suite and Capabilities

Read our Ultimate Guide to IP PBX - Unified Communications - The Best Free IP PBXs for Today's Businesses

Improvements to Auto Updates / Backups

  • Automatic uploading of backups to a Google Drive Account.
  • Automatic restoration of backups to another instance with failover.
  • Easier configuration of failover.
  • Automatic installation of OS security updates for Debian.
  • Automatic installation of 3CX tested product updates.
  • Automatic downloads of tested phone firmware and alerts for outdated firmware.
  • A Labs feature to test upcoming updates released for BETA.
  • Digital receptionists can be configured as a wake-up call service extension.
  • Gmail or Office 365 accounts can be more easily configured for notification emails from the PBX.
  • Improved DID source identification.
  • Windows and Mac clients are now bundled with the main install.
  • Automatic push of chat messages to the iOS and Android smartphone clients.
  • Hits: 9583

The Ultimate Guide to IP PBX and VoIP Systems. The Best Free IP PBXs For Businesses

VoIP/IP PBXs and Unified Communication systems have become extremely popular over the past decade and are the No.1 preference when upgrading an existing phone system or installing a new one. IP PBXs are based on the IP protocol, allowing them to use the existing network infrastructure and deliver enhanced communication services that help organizations collaborate and communicate from anywhere in the world at minimal or no cost.

This article explains the fundamentals of IP PBX systems: how IP PBXs work, what their critical VoIP components are, how they connect to the outside world and how companies can use their IP PBX - Unified Communications system to save costs. We also take a look at the best free VoIP PBX systems and explain why they are suitable for any small-to-medium organization.

VOIP PBX – The Evolution of Telephone Systems

Traditional Private Branch Exchange (PBX) telephone systems have changed a lot since the spread of the internet. Slowly but surely, businesses are phasing out analogue systems and replacing them with IP PBX alternatives.

A traditional PBX system features an exchange box on the organization’s premises where analogue and digital phones connect alongside external PSTN/ISDN lines from the telecommunication company (telco). It gives the company full ownership, but is expensive to set up and usually requires a specialist technician to maintain, repair and make changes.


A typical Analogue-Digital PBX with phones and two ISDN PRI lines

Upgrading to support additional internal extensions would usually translate to additional hardware cards being installed in the PBX system plus more telephone cabling to accommodate the new phones. When a company reached its PBX maximum capacity (either phones or PSTN/ISDN lines) it would need to move to a larger PBX, resulting in additional costs.

IP PBXs, also known as VoIP systems or Unified Communication solutions, began penetrating the global PBX market around 2005 as they offered everything a high-end PBX offered, integrated much better with desktop applications and software (e.g. Outlook, CRMs etc.) and supported a number of important features traditional PBXs were not able to deliver. IP PBX and Unified Communication systems such as 3CX are able to deliver features such as:

  • Integration with existing network infrastructure
  • Lower upgrade costs
  • Use of existing equipment such as analogue phones, faxes etc.
  • Desktop/mobile softphones that replace the need for physical phone devices
  • Full phone services for remote offices without requiring a separate PBX
  • Access to internal extensions for mobile users via VPN or other secure means
  • A user-friendly web-based management interface
  • Support for virtualized environments, increasing redundancy and dramatically decreasing backup/redundancy costs
  • Support for third-party software and hardware devices via non-proprietary communication protocols such as the Session Initiation Protocol (SIP)
  • Use of alternative telecommunication providers via the internet for cheaper call rates

The features offered by IP PBXs made them an increasingly popular alternative for organizations that were seeking to reduce telecommunication cost while increasing productivity and moving away from the vendor-proprietary solutions.

Why Businesses are Moving to IP PBX solutions

According to surveys from 2013, 96% of Australian businesses were already using IP PBXs. Today it’s clear the solution has huge advantages: increased flexibility, reduced running costs and great features, without a price premium. The advantages are so numerous that it’s difficult for organizations to justify traditional analogue/digital PBXs. Even market leaders such as Siemens, Panasonic, Alcatel and others had to adapt to the rapidly changing telecommunications market and produce hybrid models that supported IP PBX features and IP phones, but these were still limited when compared with a full IP PBX solution.

When an IP PBX is installed on-site it uses the existing LAN network, resulting in low latency and less room for interference. It’s also much easier to install than other PBX systems. Network engineers and Administrators can easily configure and manage an IP PBX system as most distributions come with a simple user interface. This means system and phone settings, call routing, call reporting, bandwidth usage and other settings can be seen and configured in a simple browser window. In some cases, employees can even configure their own preferences to suit their workflow.

Once installed, an IP PBX can run on the existing network, as opposed to a whole telephone infrastructure across business premises. That means less cable management and the ability to reuse existing Ethernet cabling, resulting in lower start-up costs. This reduction can be even more significant if the company has multiple branches in different locations. Internet leased lines with unlimited usage plans mean voice calls can be transmitted over WAN IP at no extra cost.

In addition, firms can use Session Initiation Protocol (SIP) trunking to reduce phone bills for most calls. Communications are routed to the Telco using a SIP trunk via the IP PBX directly or a Voice Gateway. SIP is an IP-based protocol which means the Telco can either provide a dedicated leased line directly into the organization’s premises or the customer can connect to a Telco’s SIP server via the internet. Usually main Telco lines are provided via a dedicated IP-based circuit to ensure line stability and low latency.

With SIP trunks, Telco providers usually offer heavily reduced prices compared to traditional PSTN or ISDN circuits. This is especially true for long-distance calls, which can be made for a fraction of the price of older digital circuits.

Savings on calls via SIP trunk providers can be so significant that many companies with old Legacy PBXs have installed an IP PBX that acts as a Voice Gateway, which routes calls to a SIP provider as shown in the diagram below:


Connecting an Analogue-Digital PBX with a SIP Provider via a Voice Gateway

In this example an IP PBX with Voice Gateway (VG) capabilities is installed at the organization. The Voice Gateway connects on one end with the Analogue - Digital PBX using an ISDN BRI interface providing up to 2 concurrent calls while at the other end it connects with a SIP provider via IP.

The SIP provider can be reached via the internet, usually using a dedicated internet connection, or even a leased line if the SIP provider has such capabilities. The Analogue - Digital PBX is then programmed to route all local and national calls via the current telco while all international calls are routed to the SIP provider via the Voice Gateway.

The organization is now able to take advantage of the low call costs offered by the SIP provider.

The digital nature of IP PBX makes it more mobile. Softphone applications support IP PBX and let users make calls over the internet from their smartphone or computer. This allows for huge portability while retaining the same extension number. Furthermore, this often comes at a flat rate, avoiding per-minute fees. Advanced softphones support a number of great features such as call recording, caller ID selection, transfer, hold, voicemail integration and corporate directory access, just to name a few.

A great example is 3CX’s free Windows softphone, a compact application under continuous development that delivers everything a user would need to communicate with the office and customers while on the road or working from home:


3CX Windows Softphone and Presence application

IP PBX, being a pure-IP based solution, means that users are able to relocate between offices or desks without requiring changes to the cabled infrastructure. IP phones can be disconnected from their current location and reconnected at their new one. With the help of a DHCP server the IP phone will automatically reconfigure and connect to the IP PBX with the user’s internal extension and settings.

A technology called Fixed Mobile Convergence or Follow-me can even allow employees to make a landline call on their mobile using WiFi, then move to cellular once they get out of range. The cellular calls can be routed through the IP PBX when on-site through their IP phone or local network. When users are off-site the mobility client automatically registers with the organization’s IP PBX via the internet extending the user’s internal extension to the mobile client. Calls are automatically routed to the mobile client without the caller or co-workers being aware.

Another big advantage is the unification of communications. Rather than a separate hardware phone, email, voicemail and more, companies can roll them into one system. In many cases, softphones can be integrated into the existing software such as Outlook, CRM, ERP and more. What’s more, employees can receive voicemails and faxes straight to their email inbox.

That’s not to say VoIP is without flaws. For a start, it relies heavily on the network, so major network problems can bring the call system down if no backup is implemented. It’s also less suitable for emergency services: support for such calls is limited, many VoIP providers offer inadequate emergency functionality, and calls are often untraceable to a physical location. Though an IP PBX is the best solution for most businesses, it depends on the individual circumstances.

Main Components of a Modern Unified Communication IP PBX

A Unified Communication IP PBX system is made from a series of important components. Firstly, you have the computer needed to run the IP PBX software. This is the Call Control server that manages all endpoint devices, Call routing, voice gateways and more.

The IP PBX software is loaded on the server and configured by the network administrator. Depending on the vendor, the IP PBX can be integrated into a physical device such as a router (e.g. Cisco CallManager Express) or be a software application installed on top of the server’s operating system (e.g. 3CX IP PBX).

In 3CX’s case, the IP PBX software can run on the Windows platform (workstation or server) or on Linux. 3CX also supports the Hyper-V and VMware virtualization platforms, helping dramatically increase availability and redundancy at no additional cost.


IP PBX & VoIP Network Components

VoIP Gateways, aka Voice Gateways or Analogue Telephony Adaptor (ATA), play a dual role – they act as an interface between older analogue devices such as phones, faxes etc and the newer VoIP network allowing them to connect to the VoIP network. The VoIP Gateway in this case is configured with the extensions assigned to these devices and registers to the IP PBX on their behalf using the SIP protocol. When an extension assigned to an analogue device is called, the IP PBX will send the signal to the VoIP Gateway which will produce the necessary ringing signal to the analogue device and make it ring. As soon as the phone is picked up, the VoIP Gateway will connect the call acting as a “router” between the analogue device and VoIP network. ATA is usually the term used to describe a VoIP Gateway that connects a couple of analogue devices to the VoIP network.

VoIP Gateways are also used to connect an IP PBX to the Telco, normally via an ISDN (BRI or PRI) or PSTN interface. Incoming and outgoing calls will traverse the VoIP Gateway connecting the IP PBX with the rest of the world.

IP phones are the physical devices used to make and accept phone calls. Depending on the vendor and model, these can be simple phones without a display or high-end devices with colour multi-touch displays and enhanced functions such as multiple line support, speed dials, video conferencing and more. Popular vendors in this field include Cisco, GrandStream, Yealink and others. All IP phones communicate using the non-proprietary SIP protocol, which makes it easy for organizations to mix and match hardware vendors without worrying about compatibility issues.

In the case of a softphone the application runs on a desktop computer or smartphone and is capable of providing all services similar to those of an IP phone plus a lot more. Users can also connect a headset, microphone, or speakers if needed.


3CX’s free SIP-based softphone for Android (left) and iPhone (right) both provide a wealth of functions no matter where users are located

However, the key strength of a Unified Communication IP PBX is its ability to use this existing hardware and software to bring multiple media together intuitively. Outlook, for example, allows you to make softphone calls straight from the email interface, removing the need for long lists of contact details.

This is combined with the integration of instant messaging so that call takers can correspond with other staff if they’re giving tech support. It can be further enhanced by desktop sharing features to see exactly what a user is doing, as well as SMS, fax, and voicemail.

More advanced Unified Communications platforms use speech recognition for automatic, searchable transcriptions of calls. Large organizations are even implementing artificial intelligence in their workflow. Microsoft’s virtual support assistant looks at what employees are doing and provides relevant advice, information, and browser pages. The ultimate goal is for an employee to obtain everything they need with minimal effort.

How an IP PBX Works

It’s important to understand how each of these components work to form a cohesive whole. Each IP phone is registered with the IP PBX server, which is usually just a specially configured PC running the Windows or Linux operating system. This OS can also be run on a virtual machine.

Advanced IP PBX systems such as 3CX support both Windows and Linux operating systems but can also be hosted on virtualized platforms such as Hyper-V and VMware, offering great value for money.

The IP PBX server maintains a list containing the Session Initiation Protocol (SIP) address of each phone. For the unfamiliar, SIP is the most popular protocol for transmitting telephone signalling over IP networks. It operates at the application layer of the OSI model and integrates elements from HTTP and SMTP. As such, SIP addresses look like a mash-up of an email address and a telephone number.
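
As a quick illustration, a SIP address can be split into user and domain parts much like an email address. The address and the parser below are a hypothetical sketch (real SIP URIs carry additional parameters that a production stack must handle):

```python
# Minimal sketch: splitting a simple sip:user@host[:port] address.
# "sip:1001@pbx.example.com" is a hypothetical example address.

def parse_sip_address(address: str) -> dict:
    """Split a simple sip:user@host[:port] address into components."""
    scheme, _, rest = address.partition(":")
    if scheme != "sip" or not rest:
        raise ValueError(f"not a SIP address: {address!r}")
    user, _, host_port = rest.partition("@")
    host, _, port = host_port.partition(":")
    return {
        "user": user,                          # the extension - like the local part of an email
        "host": host,                          # the PBX or domain - like an email domain
        "port": int(port) if port else 5060,   # 5060 is the default SIP port
    }

print(parse_sip_address("sip:1001@pbx.example.com"))
# → {'user': '1001', 'host': 'pbx.example.com', 'port': 5060}
```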

SIP Accounts

SIP endpoint accounts (IP Phones, softphones, VoIP Gateways) are configured on the IP PBX with their extension and credentials. Similarly the endpoint devices are configured with the IP PBX’s IP address and their previously configured accounts. Once the SIP endpoint device registers to the IP PBX it is ready to start placing and receiving phone calls.

SIP Endpoint Registering to an IP PBX System 

SIP Endpoint Registering to an IP PBX System
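
The registration step described above can be pictured with a simplified REGISTER request. This is only a sketch: real SIP stacks add authentication challenges, branch and tag parameters and more, and the extension, server name and addresses below are all hypothetical:

```python
# Simplified sketch of the SIP REGISTER message an endpoint sends to the
# IP PBX when it comes online. Not protocol-complete; all names and
# addresses are hypothetical examples.

def build_register(extension: str, pbx_host: str, device_ip: str) -> str:
    """Assemble a bare-bones REGISTER request as CRLF-separated headers."""
    return "\r\n".join([
        f"REGISTER sip:{pbx_host} SIP/2.0",
        f"Via: SIP/2.0/UDP {device_ip}:5060",
        f"From: <sip:{extension}@{pbx_host}>",
        f"To: <sip:{extension}@{pbx_host}>",
        f"Call-ID: example-call-id@{device_ip}",
        "CSeq: 1 REGISTER",
        f"Contact: <sip:{extension}@{device_ip}:5060>",   # where calls should be delivered
        "Expires: 3600",          # endpoint must re-register within an hour
        "Content-Length: 0",
        "", "",                   # blank line terminates the header section
    ])

msg = build_register("1001", "pbx.example.com", "192.168.1.50")
print(msg.splitlines()[0])   # → REGISTER sip:pbx.example.com SIP/2.0
```

Once the PBX accepts the request, the Contact address tells it where to route calls for that extension - which is why a phone moved to a new desk simply re-registers and keeps working.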

Once a user places a call, the system determines whether the call is destined for a phone on the same system or an external number. Internal calls are identified via the SIP address and routed directly over the LAN. External calls are routed to the Telco provider via the Voice Gateway or a SIP trunk, depending on the setup.

Naturally, these calls are made from the hardware and softphones mentioned earlier. Hardware IP phones connect to the network using a standard RJ-45 connector, replacing the older RJ-11 connectors used by the analogue telephones.

Voice Codecs – G.711, G.729, G.722

Audio signals from the IP phones must be converted into a digital format before they can be transmitted. This is done via a codec, which compresses the audio for transmission and decodes it on playback. There are several different codecs, and the one you use determines both the audio quality and the amount of bandwidth consumed.

SIP endpoints located on the LAN almost always use the G.711 codec, which has 1:2 compression and a 64Kbps bitrate plus 23.2Kbps of IP overhead, resulting in a total of 87.2Kbps per call. It delivers high, analogue-telephone quality but comes with a significant bandwidth cost - not a problem on local networks, where speeds of 1Gbps are typical.

When a SIP endpoint is on the road away from the office, moving to a less bandwidth-intensive codec at the expense of voice quality is usually desirable. The most commonly used codec for these cases is G.729, which provides acceptable audio quality for just 31.2Kbps per call - 8Kbps of audio plus 23.2Kbps of IP overhead. The result is similar to the call quality of an average cell phone.


G.711 vs G.729 Call - Bandwidth Requirements per call

G.722 delivers a better call quality than even PSTN, but is best for high bandwidth scenarios or when great audio quality is essential.
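
The per-call figures above can be captured in a small helper. The G.711 and G.729 numbers restate the article's own values (payload bitrate plus 23.2Kbps IP overhead); the G.722 wire rate is my assumption, since the article gives no figure for it:

```python
# Per-call bandwidth for the codecs discussed above, in Kbps.
IP_OVERHEAD_KBPS = 23.2   # IP/UDP/RTP overhead quoted in the text

CODECS = {
    "G.711": 64.0,   # toll-quality audio, 1:2 compression
    "G.729": 8.0,    # low-bandwidth, cell-phone-like quality
    "G.722": 64.0,   # wideband audio; 64Kbps is an assumed (typical) rate
}

def per_call_kbps(codec: str) -> float:
    """Total bandwidth one call consumes: payload plus IP overhead."""
    return CODECS[codec] + IP_OVERHEAD_KBPS

for name in CODECS:
    print(f"{name}: {per_call_kbps(name):.1f} Kbps per call")
# G.711: 87.2 Kbps per call
# G.729: 31.2 Kbps per call
# G.722: 87.2 Kbps per call
```

Note that G.722 costs roughly the same bandwidth as G.711 on the wire, which is why it is recommended only where bandwidth is plentiful and audio quality matters.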

SIP Trunks

Finally, SIP Trunks are also configured with codecs for incoming and outgoing phone calls. This is also why, when connecting to an internet-based SIP provider, special consideration must be taken to ensure there is enough bandwidth to support the number of simultaneous calls desired. For example, if we wanted to connect to a SIP provider and support up to 24 simultaneous calls using G.711 codec for high-quality audio, we would require 87.2Kbps x 24 = 2092.8Kbps of bandwidth or 2.043Mbps during full line capacity.
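
Conversely, when the link size is fixed, the same arithmetic gives the maximum number of simultaneous calls the trunk can carry. The 2048Kbps link below is a hypothetical example, not a figure from the article:

```python
# Sizing sketch: how many simultaneous calls fit on a link, using the
# per-call figures quoted above (87.2Kbps for G.711, 31.2Kbps for G.729).

def max_calls(link_kbps: float, per_call_kbps: float) -> int:
    """Whole number of simultaneous calls the link can carry."""
    return int(link_kbps // per_call_kbps)

link = 2048.0  # hypothetical 2Mbps (2048Kbps) connection to the SIP provider
print(max_calls(link, 87.2))  # G.711 → 23 calls
print(max_calls(link, 31.2))  # G.729 → 65 calls
```

The same link therefore carries roughly three times as many G.729 calls as G.711 calls, which is the trade-off the codec sections above describe.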

Voicemail with IP PBXs

Voicemail works differently to that in a traditional phone environment. A voicemail server was typically a standalone unit or an add-in card. In IP PBX systems, voicemail is integrated into the solution and stored in a digital format. This has several advantages, including the ability to access voicemail via a web browser or mobile phone, forward voicemails to an email account, forward a voicemail to multiple recipients via email and many more.

In addition, some IP PBXs can be configured to automatically call the person for whom a voicemail was left and play back any messages in their mailbox.

How an IP PBX Can Help Save Money

Once you understand the fundamental differences between an IP PBX and legacy analogue/digital PBXs, it becomes clearer how an organization can save.

Because an IP PBX runs on existing network infrastructure, there’s no need for separate cabling. This negates a significant chunk of the setup costs, as does the simplicity of installation: the initial investment can be up to ten times less than with traditional PSTN systems, and service costs drop too. With no separate physical wiring there is nothing extra to damage that would be costly to repair and maintain. Moving between offices is now an easy task as no cable patching is required from the IP PBX to the IP phone. All that’s required is a data port to connect the IP phone, or an access point in the case of a wireless client (laptop, mobile device) with a softphone.

Maintenance of the underlying systems is also far easier. Most IP PBX systems run on either Linux or Windows, systems that technicians are usually intimately familiar with. This means technical problems often don’t need to be outsourced. When a patch or upgrade is available from the vendor, the administrator can quickly create a snapshot of the IP PBX system via the virtualization environment and proceed with the installation. In the unlikely event the system doesn’t behave as expected, the administrator can roll back to the system’s previous state with the click of a button.

Upgrading the IP PBX to further extend its functionality is far more cost and time efficient compared to older PBXs. In most cases, new features are just a matter of purchasing an add-on or plugin and installing it. This scalability extends to the reach of the system itself. Traditional phone systems only have a certain number of ports that phones can be connected to. Once you reach that limit it will cost a significant amount to replace the existing system. With IP PBX, this isn’t an issue. IP phones connect via the network and aren’t limited by the same kind of physical factors.

As already noted, some IP PBX products can run on a virtual platform. 3CX is one of the most popular and stable solutions that officially supports both Hyper-V and VMware. This functionality means you can create low-cost backups of the system.

The savings are even more prominent when you consider the price of VoIP compared to traditional PBX. SIP trunking can result in huge monthly savings of around 40%, depending on usage. If the business regularly makes calls abroad, there’s room for even more savings as it won’t be subject to hefty international fees.

The 3CX Management Console is packed with functionality, settings, call analysis and monitoring.

Furthermore, extending the maximum number of simultaneous calls on a SIP trunk is an easy process, usually requiring only changes to the SIP account and additional bandwidth towards the SIP provider. These changes can generally be made in a few days. With traditional ISDN or PSTN lines, the organization would need to order additional lines from the Telco and wait up to a few weeks to have them physically installed, then pay an additional monthly service fee regardless of the new lines’ usage. Most of these costs do not exist with SIP providers and SIP trunks, making them a much cheaper and faster solution. Most US, UK and Australian Telco providers are now moving from ISDN to SIP trunking, making it only a matter of time until ISDN is no longer offered as standard.

Companies can choose to use codecs such as G.729 instead of G.711 with their SIP provider, sacrificing some voice quality to cut their SIP trunking bandwidth requirements by roughly 64%. For example, a SIP trunk using the G.711 codec and supporting up to 24 simultaneous calls requires 87.2Kbps x 24 = 2092.8Kbps of bandwidth, or 2.043Mbps at full line capacity.


ISDN T1 Bandwidth requirements - G.711 vs G.729

With G.729 the same SIP trunk would require 31.2Kbps x 24 = 748.8Kbps of bandwidth or 0.73Mbps during full line capacity!

In addition to these direct savings, the advanced features and flexibility offered by IP PBXs can result in a huge increase in productivity. The ability to communicate efficiently with colleagues and customers often results in higher satisfaction, increased work output and more profit.

All of this adds up to some huge cost savings, with estimates of up to 80% over an extended period. Not only are IP PBX systems cheaper to set up, they’re cheaper to maintain, upgrade, scale and remove.

Free IP PBXs vs Paid

It’s often tempting to cut costs further by opting for a free IP PBX solution. However, these often lack the support and features of a paid alternative. Most providers put a limit on outgoing calls, and the absence of important VoIP and Unified Communication features usually severely limits the system’s functionality. Solutions such as 3CX offer full product functionality with up to 8 simultaneous calls at no cost, making them an ideal VoIP system for startups and small companies.

The security of some free providers has been brought into question. Asterisk has been hacked on several occasions, though security has been hardened significantly now. Though no system is completely secure, paid providers often have dedicated security teams and ensure systems are hard to penetrate by default, rather than requiring extra configuration or expertise that the end customer might not have.

Low-cost editions come with a multitude of other features. Application integration is a big one: 3CX’s Pro plan offers Outlook, Salesforce, Microsoft Dynamics, Sugar CRM, Google Contacts and more.

A paid edition is also a must for unified communications features such as video calls, conferencing and integrated fax servers. The number of participants that can join a conference call is also higher with subscription-based versions of 3CX.

These advanced features extend to calls, with inbuilt support for call recording, queuing and parking. 3CX even offers a management suite for call recordings, saving the need to set up additional software. In paid versions, functionality like this is more likely to extend to Android, iOS, and other platforms.

However, perhaps the most important advantage is the amount of support offered by subscription-based services. Higher profits mean they can offer prompt, dedicated support, against the often slow and limited services of free providers. Though a paid service isn’t always essential, the extra productivity and support they bring is usually well worth the price – especially when considering the negative impact a technical IP telephony issue can have on the organization.

Popular Free/Low-Cost IP PBX Solutions

That said, small businesses can probably get away with a free IP PBX solution. There are reputable, open-source solutions out there completely free of charge. The biggest, most popular one is Asterisk. The toolkit has been going for years, and has a growing list of features that begins to close the gap between free and subscription-based versions.

Asterisk supports interactive voice menus, voicemail, automatic call distribution and conference calling. It’s still a way off premium services for many of the reasons above, but it’s about as good as it gets without shelling out.

Despite that, there are still some notable competitors. Many of them started as branches of Asterisk, which tends to happen in the open source community. Elastix is one of these and provides unified communications server software with email, IM, IP PBX, collaboration and faxing. The interface is a bit simpler than its grandfather’s, and it pulls in other open source developments such as Openfire, HylaFax and Postfix to offer a more well-rounded feature line-up.

SIP Foundry, on the other hand, isn’t based on Asterisk, and is as close to a direct competitor as there can be. Its feature list is much the same as Asterisk’s, but it is marketed more towards businesses looking to build their own bespoke system. That’s where SIP Foundry’s business model comes in: selling support to companies for a substantial US$495 per month for 100 users.

Other open source software focuses on security. Kamailio has been around for over fifteen years and supports asynchronous TCP, UDP and TLS to secure VoIP voice, video, text and WebRTC. This is combined with authentication and authorization, as well as load balancing and routing failover, to deliver a very secure experience. The caveat is that Kamailio can be more difficult to configure, and admins need considerable knowledge of SIP.

Then there’s 3CX. The company provides a well-featured free experience that has more than enough to get someone started with IP PBX. All the essential features are there, from call logging, to voicemail, to one-click conferencing. However, 3CX also offers more for those who want it, including some very powerful tools. The paid versions of 3CX are still affordable, but offer the same features of some of the most expensive solutions on the market. It offers almost unprecedented application integration and smart call centre abilities at a reasonable price.

3CX also supports a huge range of IP phones, VoIP Gateways, and any SIP Trunk provider. The company works with a huge list of providers across the world to create pre-configured SIP Trunk templates for a plug and play setup. These templates are updated and tested with every single release, ensuring the user has a problem-free experience. What’s more, powerful, intuitive softphone technology is built straight into the switchboard, including drag and drop calls, incoming call management, and more.

Unified Communications features include mobility clients with advanced messaging and presence features that allow you to see if another user is available, on a call or busy. Click-to-call features can be embedded on the organization’s website, allowing visitors to call the company with a click of a button through their web browser. Advanced Unified Communications features such as 3CX WebMeeting enable video calling directly from your organization’s website: visitors can initiate a video call to your sales team with the click of a button.


3CX WebMeeting enables clientless video conferencing/presentation from any web browser

Employees can also use 3CX WebMeeting to communicate with colleagues in different physical locations and give presentations, sharing videos, PowerPoint presentations, Word documents, Excel spreadsheets, the desktop or any other application. Many of these features are not even offered in larger high-end enterprise solutions, or would cost thousands of dollars to purchase and maintain.

3CX has also introduced VoIP services and functionality suitable for hotels making their system an ideal Hotel-Based VoIP system.

Downloading the free 3CX IP PBX system is well worth the time and effort for organizations seeking to replace or upgrade their PBX system at minimal or no cost.

Summary

IP PBXs offer so many advantages over traditional PBXs that implementation is close to a no-brainer. An IP PBX is cheaper in almost every way, while still offering advanced features that simply aren’t possible with other systems. The ability to intelligently manage incoming and outgoing calls, create conference calls on the fly and work from home through advanced mobility features is almost essential in this day and age. Add to that the greatly reduced time and resources needed to upgrade, and you have a versatile, expandable system which won’t fall behind the competition.

Though some of these benefits can be had with completely free IP PBX solutions, paid services often come with tools that can speed up workflow and management considerably. The returns gained from integration of Microsoft Dynamics, Office 365, Salesforce and Sugar CRM are often well worth the extra cost.

However, such functionality doesn’t have to be expensive. Low-cost solutions like 3CX offer incredible value for money and plans that can be consistently upgraded to meet growing needs. The company lets you scale from a completely free version to a paid one, making it one of the best matches out there for any business size.

  • Hits: 31915

7 Security Tips to Protect Your Websites & Web Server From Hackers

Recent and continuous website security breaches at large organizations, federal government agencies, banks and thousands of companies worldwide have once again confirmed the importance of website and web application security in preventing hackers from gaining access to sensitive data while keeping corporate websites as safe as possible. Many encounter problems when it comes to web application security; it is a pretty heavy field to dig into.

Some security professionals would not be able to provide all the necessary steps and precautions to deter malicious users from abusing your web application. Many web developers will encounter some form of difficulty while attempting to secure their website, which is understandable since web application security is a multi-faceted concept, where an attacker could make use of thousands of different exploits that could be present on your website.

Although no single list of web security tips and tricks can be considered complete (in fact, one of the tips is that the amount of knowledge, information and precautions you can implement is never enough), the following is as close as you can get. We have listed seven concepts or practices to aid you in securing your website which, as we already mentioned, is anything but straightforward. These points will get you started and nudge you in the right direction; some factors in web application security are a higher priority than others.

1. Hosting Options

Without web hosting services most websites would not exist. The most popular methods to host web applications are: dedicated hosting, where your web application is hosted on a server intended for your website only, and shared hosting, where you share a web server with other users who in turn run their own web applications on the same server.

There are multiple benefits to using shared hosting. Mainly, this option is cheaper than having your own dedicated server, which is why it generally attracts smaller companies. From a functionality point of view the difference between shared and dedicated hosting will seem irrelevant, since the website will still run; when discussing security, however, we need to look at it from a completely different perspective.

The downside of shared hosting trumps any advantages it may offer. Since the web server is shared between multiple web applications, any attacks will also be shared between them. For example, if you share your web server with an organisation that has been targeted by attackers launching Denial of Service attacks on its website, your web application will also be affected, since it is hosted on the same server and draws from the same resource pool. In addition, the lack of complete control over the web server itself means the provider may make decisions that place your web application at risk of being exploited. If one of the websites hosted on the shared server is vulnerable, there is a chance that all the other websites, and the web server itself, could be exploited. Read more about web server security.

2. Performing Code Reviews

Most successful attacks against web applications are due to insecure code, not the underlying platform itself. Case in point: SQL injection attacks are still the most common type of attack even though the vulnerability itself has been around for over 20 years. This vulnerability does not occur due to incorrect input handling by the database system itself; it is entirely down to the fact that input sanitization is not implemented by the developer, which leads to untrusted input being processed without any filtering.

This observation is clearest for injection attacks; normally, inspecting code is not this straightforward. If you are making use of a pre-built application, updating to the latest version will help ensure that your web application does not contain known insecure code, while if you are using custom-built apps, an in-depth code review by your development team will be required. Whichever application type you are using, securing your code is a critical step, or else the very base of the web application will be flawed and therefore vulnerable.

3. Keeping Software Up To Date

When using software that has been developed by a third party, the best way to ensure that the code is secure is to apply the latest updates. Even a simple web application will make use of numerous components that can lead to successful attacks if left unpatched. For example, both PHP and MySQL have been vulnerable to exploits that were later patched, and a default Linux web server installation will include multiple services, all of which need to be updated regularly to avoid vulnerable builds of software being exploited.

The importance of updating can be seen from the Heartbleed vulnerability discovered in OpenSSL, which is used by most web applications that serve their content via HTTPS. That being said, fixing these vulnerabilities is an easy task once the appropriate patch has been released: you simply need to update your software. The process differs for every operating system or service although, just as an example of how easy it can be, updating services on Debian-based servers only requires you to run a couple of commands.

4. Defending From Unauthorised Intrusions

While updating software will ensure that no known vulnerabilities are present on your system, there may still be entry points, missed by the previous tips, through which an attacker can access your system. This is where firewalls come into play. A firewall is necessary as it limits traffic according to your configuration, and one is included by default in most operating systems.

That being said, a regular firewall can only analyse network traffic, which is why implementing a Web Application Firewall (WAF) is a must if you are hosting a web application. WAFs are best suited to identifying malicious requests being sent to a web server: if the WAF identifies an SQL injection payload in a request, it will drop that request before it reaches the web server. If the WAF does not intercept a malicious request on its own, you can also set up custom rules for the requests that need to be blocked. If you are wondering how to discover which requests should be blocked, take a look at our next tip.
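As a rough illustration of the pattern-based request inspection a WAF performs, the sketch below (in Python, purely for illustration; real WAF rule sets such as the OWASP Core Rule Set are vastly larger and more context-aware, and all names here are our own) flags requests whose query strings contain a few common attack signatures:

```python
import re

# Illustrative signatures only; a production WAF uses far more
# extensive, context-aware rules than these three patterns.
SUSPICIOUS_PATTERNS = [
    re.compile(r"('|%27)\s*(or|and)\s+.+=", re.IGNORECASE),  # classic SQLi tautology
    re.compile(r"<script\b", re.IGNORECASE),                 # naive XSS probe
    re.compile(r"\.\./"),                                    # path traversal attempt
]

def inspect_request(query_string: str) -> bool:
    """Return True if the request should be dropped before reaching the app."""
    return any(p.search(query_string) for p in SUSPICIOUS_PATTERNS)

print(inspect_request("username=admin' OR 'a'='a"))  # True  (blocked)
print(inspect_request("username=alice"))             # False (allowed)
```

A custom WAF rule is essentially one more entry in such a signature list, targeted at a specific request your scanner or logs have shown to be dangerous.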

5. Performing Web Vulnerability Scans

No amount of code reviews and updates can ensure that the end product is not vulnerable and cannot be exploited. Code reviews are limited because the running code is never analysed, which is why web vulnerability scanning is essential. Web scanners treat the web application as a black box, analysing the finished, running product, which white-box code reviews cannot do. Some scanners also provide the option to perform grey-box scanning, combining website scans with a backend agent that can analyse code.

As complex and large as web applications are nowadays, it would be easy to miss certain vulnerabilities while performing a manual penetration test. Web vulnerability scanners automate this process, covering a larger website in less time while detecting most known vulnerabilities. One notorious class of vulnerability that is difficult to identify manually is DOM-based XSS, yet good web scanners are still able to find it. Web vulnerability scanners will also provide you with the requests that you need to block on your Web Application Firewall (WAF) while you work to fix the vulnerabilities.
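To make the black-box idea concrete, here is a minimal sketch in Python. Instead of sending HTTP requests to a live site, it probes a stand-in page renderer with a marker payload and checks whether the payload comes back unescaped, which is the essence of how a scanner detects reflected XSS. The renderer and all names are hypothetical; a real scanner works over HTTP and tests thousands of payloads and parameters.

```python
import html

# A deliberately vulnerable page renderer standing in for a live site
# (a real scanner would send HTTP requests to the target instead).
def render_search_page(query: str, sanitize: bool) -> str:
    value = html.escape(query) if sanitize else query
    return f"<html><body>Results for: {value}</body></html>"

PROBE = '<script>alert("xss-probe")</script>'

def scan_for_reflected_xss(render) -> bool:
    """Black-box check: inject a marker payload, see if it reflects unescaped."""
    return PROBE in render(PROBE)

print(scan_for_reflected_xss(lambda q: render_search_page(q, sanitize=False)))  # True
print(scan_for_reflected_xss(lambda q: render_search_page(q, sanitize=True)))   # False
```

Note that the scanner never sees the renderer's source code; it only observes inputs and outputs, exactly as described above.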

6. Importance Of Monitoring

It is imperative to know if your web application has been subjected to an attack. Monitoring the web application, and the server hosting it, is the best way to ensure that even if an attacker gets past your defence systems, at least you will know how, when and from where it happened. There are cases where a website is brought offline by an attack and the owner does not even know about the incident, only finding out after precious time has passed.

To avoid this you can monitor server logs, for example enabling notifications to be triggered when a file is deleted or modified. This way, if you had not modified that particular file, you will know that someone else has unauthorised access to your server. You can also monitor uptime which comes in handy when the attack is not as stealthy as modifying files, such as when your web server is subject to a Denial of Service attack. Such utilities will notify you as soon as your website is down, without having to discover the incident from users of your website.
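A minimal sketch of this kind of file-change detection, assuming a simple hash-comparison approach (dedicated monitoring tools are far more capable; function names here are illustrative):

```python
import hashlib
from pathlib import Path

def snapshot(paths):
    """Record a SHA-256 fingerprint for each monitored file."""
    return {p: hashlib.sha256(Path(p).read_bytes()).hexdigest() for p in paths}

def detect_changes(baseline, paths):
    """Return the monitored paths whose contents no longer match the baseline."""
    current = snapshot(paths)
    return [p for p in paths if current[p] != baseline[p]]
```

In practice such a check would run on a schedule (e.g. via cron) and trigger an email or other notification; this sketch only covers modified files, while deletions and new files would need handling too.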

The worst thing you can do when implementing monitoring services is to base them on the same web server that is being monitored. If that server were knocked down, the monitoring service would not be available to notify you.

7. Never Stop Learning

Finally, whatever you currently know about web security, it’s never enough. Never stop learning about improving your web application’s security, because every day brings a new exploit that may be used against your website. Zero-day attacks happen out of the blue, which is why keeping yourself updated on any new security measures you can implement is imperative. You can find such information on the many web security blogs that detail how a website administrator should enforce their website’s security.

  • Hits: 27281

WordPress Audit Trail: Monitor Changes & Security Alerts For WordPress Blogs, Websites, e-Shops - Regulatory Compliance

Monitoring, auditing and obtaining security alerts for websites and blogs based on popular CMS systems such as WordPress has become a necessity. Bugs, security exploits and security holes are continuously discovered in every WordPress version, making monitoring and auditing a high security priority. In addition, multi-user environments are often used for large WordPress websites, making it equally important to monitor WordPress user activity.

Users with different privileges can login to the website’s admin pages and publish content, install a plugin to add new functionality to the website, or change a WordPress theme to change the look and feel of the website. From the admin pages of WordPress users can do anything, including taking down the website for maintenance, depending on their privileges.

The Need to Keep a Log of What is Happening on Your WordPress

Every type of multi-user software keeps an audit trail that records all user activity on the system. And since modern business websites have become fully blown multi-user web applications, keeping a WordPress audit trail is a critical, must-do task. A default installation of WordPress does not have an audit trail, but the good news is that there are plugins such as WP Security Audit Log that allow you to keep an audit trail of everything that is happening on your WordPress site.

Figure 1. Plugins like WP Security Audit Log provide detailed tracking of all necessary events (click to enlarge)

There are several advantages to keeping track of all the changes that take place on your WordPress website in an audit trail. Here are just a few:

Keep Track Of Content & Functionality Changes On Your WordPress

By keeping a WordPress audit trail you can find out who did what on your WordPress website; for example, who published an article, modified the existing content of an article or page, installed a plugin, changed the theme or modified the source code of a file.

Figure 2. Searching for specific events in WordPress Security Audit Log (click to enlarge)

Be Alerted to Suspicious Activity on Your WordPress

By keeping a WordPress audit trail you can also be alerted to suspicious activity on your WordPress site at an early stage, thus thwarting possible hack attacks. For example, when a WordPress site is hacked, the attackers typically reset a user’s password or create a new account to log in to WordPress. By using an add-on such as Email Notifications you can create specific rules so that when important changes happen on your WordPress site they are logged and you are notified via email.

Figure 3. WP Security Audit Log: Creating customized email alerts for your WordPress site

Ensure the Productivity of Your Users & Employees

Nowadays many businesses employ remote workers. As much as businesses benefit from employing remote workers, there are disadvantages: while the activity of employees who work from the office can be easily tracked, that of remote workers cannot. Therefore, if your business website is powered by WordPress, installing a WordPress audit trail plugin lets you keep track of everything your web team is doing on the website, including login and logout times and location.

Ensure Your Business WordPress Websites Meet Mandatory Regulatory Compliance Requirements

If you have an online business, or if you do any sort of business via your WordPress website, there are a number of regulatory compliance requirements your website needs to adhere to, such as the PCI DSS. One common requirement across these regulations is logging: as a website owner you should keep a log, or audit trail, of all the activity that is happening on your website.

Ease WordPress Troubleshooting

If you already have experience managing a multi-user system, you know that if something breaks down users will never tell you what they did. This is common, especially when administering customers’ websites. The customer has administrative access to WordPress. Someone installs a plugin, the website goes haywire yet it is no one’s fault. By keeping a WordPress audit trail you can refer to it and easily track any website changes that took place, thus making troubleshooting really easy.

Keep A WordPress Audit Trail

There are several other advantages to keeping a WordPress audit trail of all the changes that take place on your WordPress site, such as the ability to generate reports to justify your charges. The list of advantages can be endless, but the most important one is security. Though typically overlooked, logging helps you ensure the long-term security of your WordPress website.

 

  • Hits: 19408

Understanding SQL Injection Attacks & How They Work. Identify SQL Injection Code & PHP Caveats

SQL injections have been keeping security experts busy for over a decade now, as they continue to be one of the most common types of attack against web servers, websites and web application servers. In this article, we explain what a SQL injection is, show you SQL injection examples and analyse how these types of attacks manage to exploit web applications and web servers, providing hackers access to sensitive data.

What Is A SQL Injection?

Websites typically have two sides to them: the frontend and the backend. The frontend is the element we see: the rendered HTML, images, and so forth. On the backend, however, there are layers upon layers of systems rendering the elements for the frontend. One such layer, the database, most commonly uses a database language called SQL, or Structured Query Language. This standardized language provides a logical, human-readable sentence to perform definition, manipulation, or control instructions on relational data in tabular form. The problem, however, is that while this provides a structure for human readability, it also opens up a major security problem.

Typically, when data is passed from the frontend to the backend of a website – e.g. an HTML form with username and password fields – this data is inserted into the sentence of a SQL query. Rather than being assigned to some object or passed via a set() function, the data is concatenated into the middle of a string, much as you might print a concatenated string of debug text and a variable’s value. The problem is that the database server, such as MySQL or PostgreSQL, must lexically analyse the sentence’s grammar and parse variable=value definitions, which imposes specific syntactic requirements, such as wrapping string values in quotes. A SQL injection vulnerability, therefore, is one where unsanitized frontend data, such as quotation marks, can disrupt the intended sentence of a SQL query.

How Does A SQL Injection Work?

So what does “disrupt the intended sentence of a SQL query” mean? A SQL query reads like an English sentence:

Take variable foo and set it to ‘bar’ in table foobar.
Notice the single quotes around the intended value, bar. But if we take that value and add a single quote plus some additional text, we can disrupt the intended sentence, creating two sentences that change the entire effect. So long as the database server can lexically understand the sentences, it is none the wiser and will happily complete its task. So what would this look like?

If we take that value bar and change it to something more complex – bar’ in table foobar. Delete all values not equal to ‘ – it completely disrupts everything. The sentence is thus changed as follows:

Take variable foo and set it to ‘bar’ in table foobar. Delete all values not equal to ‘’ in table foobar.

Notice how dramatically this disrupts the intended sentence? By injecting additional information, including syntax, into the sentence, the entire intended function and result has been disrupted to effectively delete everything in the table, rather than just change a value.

What Does A SQL Injection Look Like?

In code form, a SQL injection can appear in effectively any place a SQL query can be altered by the user of a web application. This means things like query strings (e.g. example.com/?this=query_string), form content (such as a comments section on a blog, or even the username and password input fields on a login page), cookie values, HTTP headers (e.g. X-FORWARDED-FOR), or practically anything else. For this example, consider a simple query string in PHP:

Request URI: /?username=admin
 
1.  $user = $_GET['username'];
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

First, we will break this down a bit.

On line #1, we assign the value of the username field in the query string to the variable $user.

On line #2, we insert that variable’s value into the SQL query's sentence. Substituting admin for the variable, the database query would ultimately be parsed as follows by MySQL:

UPDATE tbl_users SET admin=1 WHERE username='admin'

However, a lack of basic sanitization opens this query string up to serious consequences. All an attacker must do is put a single quote character in the username query string field in order to alter this sentence and inject whatever additional data he or she would like.

Here is an example of what this would look like:

Request URI: /?username=admin' OR 'a'='a
 
1.  $user = $_GET['username'];
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

Now, with this altered data, here is what MySQL would see and attempt to evaluate:

UPDATE tbl_users SET admin=1 WHERE username='admin' OR 'a'='a'

Notice now that because 'a' equals 'a' (essentially true=true), the WHERE clause matches every row and all users will be set to admin status.
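The same effect can be reproduced in a few lines of Python using an in-memory SQLite database as a stand-in for the PHP/MySQL example above; the injected tautology makes the WHERE clause true for every row:

```python
import sqlite3

# In-memory SQLite database standing in for the MySQL backend.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_users (username TEXT, admin INTEGER)")
conn.executemany("INSERT INTO tbl_users VALUES (?, 0)",
                 [("admin",), ("alice",), ("bob",)])

# Unsanitized input concatenated straight into the query, as in the PHP example.
user = "admin' OR 'a'='a"
conn.execute(f"UPDATE tbl_users SET admin=1 WHERE username='{user}'")

# The injected tautology makes the WHERE clause match every row.
print(conn.execute("SELECT COUNT(*) FROM tbl_users WHERE admin=1").fetchone()[0])  # 3
```

All three users, not just admin, end up with admin privileges, exactly as described above.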

Ensuring Code is Not Vulnerable to SQL Injection Vulnerabilities

If we were to wrap the input in a function such as mysql_real_escape_string() on line #1, this particular variable would no longer be vulnerable to a SQL injection. In practice, it would look like this:

Request URI: /?username=admin' OR 'a'='a
 
1.  $user = mysql_real_escape_string($_GET['username']);
2.  mysql_query("UPDATE tbl_users SET admin=1 WHERE username='$user'");

This function escapes characters dangerous to MySQL queries by prefixing them with backslashes. Rather than evaluating the single quote literally, MySQL understands the prefixing backslash to mean "do not evaluate the single quote as syntax". Instead, MySQL treats it as part of the whole value and keeps going. The string, to MySQL, would therefore look like this:


UPDATE tbl_users SET admin=1 WHERE username='admin\' OR \'a\'=\'a'

Because each single quote is escaped, MySQL considers it part of the whole username value, rather than evaluating it as multiple components of the SQL syntax. The SQL injection is thus avoided, and the intention of the SQL sentence is thus undisrupted.

Caveat: in these examples we used older, deprecated functions like mysql_query() and mysql_real_escape_string() for two reasons:

1.    Most PHP code still actively running on websites uses these deprecated functions;
2.    It allows us to provide simple examples easier for users to understand.

However, the right way to do it is to use prepared SQL statements. For example, the prepare() functions of the MySQLi and PDO_MySQL PHP extensions allow you to format and assemble a SQL statement using placeholder symbols, much like a sprintf() function does. This removes any possibility of user input injecting additional SQL syntax into a database query, as all input provided during the execution phase of a prepared statement is sanitized. Of course, this all assumes you are using PHP, but the idea applies equally to any other web language.
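The effect of a parameterized query can be demonstrated with Python's sqlite3 module, whose ? placeholders are analogous to PHP prepared statements; the same injection payload is now treated as an ordinary string value:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE tbl_users (username TEXT, admin INTEGER)")
conn.executemany("INSERT INTO tbl_users VALUES (?, 0)",
                 [("admin",), ("alice",)])

# The ? placeholder passes the value out-of-band: the driver treats the
# entire payload as one literal string, never as SQL syntax.
user = "admin' OR 'a'='a"
conn.execute("UPDATE tbl_users SET admin=1 WHERE username=?", (user,))

# No user is literally named "admin' OR 'a'='a", so no row matches.
print(conn.execute("SELECT COUNT(*) FROM tbl_users WHERE admin=1").fetchone()[0])  # 0
```

The query's structure is fixed before any user data is supplied, which is why injected syntax can never disrupt the intended sentence.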

SQL Injection Is The Most Widely Exploited Vulnerability

Even though it has been more than sixteen years since the first documented SQL injection attack, it is still a very popular vulnerability with attackers and is widely exploited. In fact, injection flaws have consistently ranked at or near the top of the OWASP Top 10 list of most exploited vulnerabilities.

  • Hits: 14039

Web Application Security Interview on Security Weekly – Importance of Automated Web Application Security

A few weeks back Security Weekly interviewed Ferruh Mavituna, Netsparker’s CEO and Product Architect. Security Weekly is a popular podcast that provides free content covering IT security news, vulnerabilities, hacking and research, and frequently interviews industry leaders such as John McAfee, Jack Daniel and Bruce Schneier.

During the 30-minute interview, Security Weekly’s host Paul Asadoorian and Ferruh Mavituna highlight how important it is to use an automated web application security scanner to find vulnerabilities in websites and web applications. They also briefly discuss web application firewalls and their effectiveness, and how Netsparker is helping organizations improve their post-scan process of fixing vulnerabilities with their online web application security scanner, Netsparker Cloud.

Paul and Ferruh covered several other aspects of web application security during this interview, so whether you are a seasoned security professional, a developer or a newcomer, it is a recommended watch.

To view the interview, click on the image below:

Figure 1. Netsparker CEO explains the importance of automated web application security scanners

  • Hits: 8568

WordPress DOM XSS Cross-site Scripting Vulnerability Identified By Netsparker

18th of May 2015: Netsparker announced the discovery of a critical security vulnerability in an HTML file found in many WordPress themes, including those on WordPress.org-hosted websites. As reported by Netsparker, the specific HTML file is vulnerable to cross-site scripting attacks and session hijacking. WordPress.org has already issued an official announcement and a patch (v4.2.2) and recommends WordPress administrators update their website files and themes.

The Genericons icon font package, which is used in a number of popular themes and plugins, contained an HTML file vulnerable to a cross-site scripting attack. All affected themes and plugins hosted on WordPress.org (including the Twenty Fifteen default theme) have been updated yesterday by the WordPress security team to address this issue by removing this nonessential file. To help protect other Genericons usage, WordPress 4.2.2 proactively scans the wp-content directory for this HTML file and removes it. Reported by Robert Abela of Netsparker.

By exploiting a cross-site scripting vulnerability an attacker can hijack a logged-in user’s session. This means that the malicious hacker can change the logged-in user’s password and invalidate the victim's session while maintaining access. As seen from the XSS example in Netsparker's article, if a web application is vulnerable to cross-site scripting and the administrator’s session is hijacked, the malicious hacker exploiting the vulnerability will have full admin privileges on that web application.

  • Hits: 10791

Choosing a Web Application Security Scanner - The Importance of Using the Right Security Tools

In the world of information security there exist many tools, from small open source products to full appliances, for securing a system, a network, or an entire corporate infrastructure. Of course, everyone is familiar with the concept of a firewall – even movies like Swordfish and TV shows like NCIS have so very perfectly described, in riveting detail, what a firewall is. But there are other, perhaps less sexy, utilities in a security paradigm.

Various concepts and security practices – such as using complex passphrases (or eschewing passwords entirely), deeply vetting email sources and safe surfing habits – are increasingly common among the general workforce, especially with the ubiquity of computers at every desk. But security in general is still, unfortunately, looked at as an afterthought, even when a lack of it begets massive financial loss on a seemingly daily basis.

Security engineers are all too often considered an unnecessary asset: simply a menial role anybody can do, a hat worn by developers, system administrators or, well, perhaps just someone who shows a modest capability with Excel formulas. Whatever the reason for such a decision, be it financial or otherwise, the consequences can be severe and long-lasting. Sony underestimated the value of a strong and well-equipped security team multiple times, choosing to forego a powerful army in lieu of a smaller, less outfitted and thus thinner-stretched but cheaper alternative. This, in turn, yielded some of the largest security breaches ever seen, especially by a single corporation. Were their security department better outfitted with the right tools, it is quite possible those events would have played out entirely differently.

Using The Right Security Tools

So, what constitutes “the right tools”?  Many things.  A well-populated team of capable security engineers certainly can be considered a valuable tool in building a strong security posture within an infrastructure.  But, more specifically and very critically, it is what assets those engineers have at their disposal that may mean the difference between a minor event that never even makes it outside the corporate headquarters doors, and a major event that results in a corporation paying for identity theft protection for millions of customers.  Those tools of course vary widely depending on the organization, but one common element they all do – or at least absolutely should – share is a web application security scanner.

What Is A Web Application Security Scanner?

A website that accepts user input in any form, be it URL values or submitted content, is a complex beast. Not only does the content an end user provides change the dynamics of the website, but it even has the potential to cripple that website if crafted maliciously and left unprotected against. For every form of user content, the number of potential attack vectors grows enormously. It is practically impossible for a security engineer, or even a team of them, to account for all these possibilities by hand and, especially, test them for known or unknown vulnerabilities.

Web scanners exist for this very purpose, designed carefully to predict potential and common methods of attack, then systematically test them to find any existing vulnerability. And they do this at a speed impossible for humans to replicate manually. This is crucial for many reasons: it saves time, it is thorough and comprehensive, and, if designed well, it is adaptive and predictive enough to attempt clever methods that even the most skilled security engineer may not immediately think of. Truly, not using a web security scanner only invites potentially irreparable harm to a web application and even the company behind it. But the question remains: which web scanner works best?

Options Galore - How To Choose Which Web Scanner Is Right For You

Many websites and web applications are like human fingerprints, with no two being alike. Of course, many websites may use a common backend engine – WordPress, an MVC framework like Laravel or Ruby on Rails, etc. – but the layers on top of those engines, such as plugins or custom-coded additions, often form a quite unique collection.

The backend engine is also not the only portion to be concerned with. Frontend vulnerabilities may exist in each of these layers: cross-site scripting, insecurely implemented jQuery libraries and add-ons, poor sanitization of AJAX communication, and many more. Each layer presents another nearly endless array of input possibilities to test for vulnerabilities.

A web scanner needs to be capable of digging through these unique complexities and providing accurate, reliable findings. False positives can waste an engineer’s time or, worse, send a development team on a useless chase, writing unit tests and hunting for a falsely detected vulnerability. And if the scanner is difficult to understand, or provides little insight into the detected vulnerabilities, it makes for a challenging or undesirable utility that may go unused. Indeed, a well-designed web security scanner that delivers on all fronts is a necessity for a strong security posture and a better-secured infrastructure.

Final Thoughts

There is no one perfect solution that will solve all problems and completely secure your website such that it becomes impenetrable. Further, a web security scanner will only be as effective as the security engineers or developers fixing the flaws it finds. A web security scanner is only the first of many, many steps, but it is indeed an absolutely critical one for a powerful security posture.

Indeed, we keep returning to that phrase – security posture – because it is a perfectly analogous way to look at web application, system, and infrastructure security for both what it provides and what is required for good posture: a strong backbone.  Focused visibility and a clear view of paths over obstructions is not possible with a slouched posture.  Nothing will provide that vision as clearly as a web security scanner will, and no backbone is complete without a competent and useful web security scanning solution at its top.

  • Hits: 16176

Comparing Netsparker Cloud-based and Desktop-based Security Software solutions – Which solution is best for you?

If you are reading this, you have heard about cloud computing. If not, I would be worried! Terms such as cloud computing, Software as a Service and cloud storage have become a permanent fixture in adverts, marketing content and technical documentation.

Many Windows desktop applications have moved to the “cloud”. Yet even though the whole industry wants you and your data in the cloud, have you ever looked into the pros and cons? Does it make sense to go in that direction?

Let’s use web application security scanners as an example: software that is used to automatically identify vulnerabilities and security flaws in websites and web applications. Most, if not all, of the industry-leading vendors have both a desktop edition and an online service offering. In fact, Netsparker just launched their all-new service offering, Netsparker Cloud, the online false-positive-free web application security scanner. In such a case, which one should you go for?

As clearly explained in Netsparker Desktop VS Netsparker Cloud both web security solutions are built around the same scanning engine, hence their vulnerability detection capabilities are the same. The main differences between both of them are the other non-scan related features, which also define the scope of the solution.

Figure 1. Netsparker Cloud-based Security Scanner (Click to enlarge)

For example, Netsparker Desktop is ideal for small teams, or security professionals who work on their own and have a small to medium workload. On the other hand, Netsparker Cloud is specifically designed for organizations which run and manage a large number of websites and may even have their own team of developers and security professionals. It is a multi-user platform, has a vulnerability tracking solution (a system similar to a normal bug tracking solution but specifically designed for web application vulnerabilities) and is fully scalable, to accommodate the simultaneous scanning of hundreds or even thousands of web applications.

Figure 2. Netsparker Desktop-based Security Scanner (Click to enlarge)

Do not just follow the trend; inform yourself. Yes, your reading material might be flooded with cloud-related terms, and the industry is pushing you to move your operations to the cloud because it is cheaper and more reliable, but as clearly explained in the desktop vs cloud web scanner comparison, both solutions still have a place in today’s industry.

  • Hits: 16526

The Importance of Automating Web Application Security Testing & Penetration Testing

Have you ever tried to make a list of all the attack surfaces you need to secure on your networks and web farms? Try to do it and one thing will stand out: keeping websites and web applications secure. We have firewalls, IDS and IPS systems that inspect every packet that reaches our servers and are able to drop it should it be flagged as malicious, but what about web applications?

Web application security is different from network security. When configuring a firewall you control who accesses what, but when it comes to web application security you have to let everybody in, including the bad guys, and expect that everyone plays by the rules. Hence web application security should be given much more attention, and considering the complexity of today’s web applications, its testing should be automated.

Let’s dig deep into this subject and see why it needs to be automated.

Automated Web Security Testing Saves Time

Also known as Penetration Testing or “pen testing”, this is the process by which a security engineer or “pen tester” applies a series of injection or vulnerability tests against areas of a website that accept user input, to find potential exploits and alert the website owner before they are taken advantage of and become massive headaches or even financial losses. Common targets include user data submission areas such as authentication forms, comments sections, user viewing configuration options (like layout selections), and anywhere else that accepts input from the user. This can also include the URL itself, which may use a Search Engine Optimization-friendly URI formatting system.

Most MVC frameworks or web application suites like WordPress offer this type of URI routing. (We differentiate between a URL and a URI. A URL is the entire address, including the http:// portion, the entire domain, and everything thereafter; whereas the URI is the portion usually starting after the domain (but sometimes including it, for context), such as /user/view/123 or test.com/articles/123.)

For example, your framework may take a URI style as test.com/system/function/data1/data2/, where system is the controlling system you wish to invoke (such as an articles system), function is the action you wish to invoke (such as read or edit), and the rest are data values, typically in assumed positions (such as year/month/article-title).
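To make this concrete, the routing style just described can be parsed in a few lines. The following Python sketch is purely illustrative – the function name, default route and example URI are ours, not those of any particular framework:

```python
# Hypothetical sketch of the SEO-friendly URI routing style described above:
# /system/function/data1/data2 -> (controller, action, positional args).
def parse_route(uri: str):
    """Split an SEO-friendly URI into (system, function, args)."""
    parts = [p for p in uri.strip("/").split("/") if p]
    if not parts:
        return ("home", "index", [])        # assumed default route
    system = parts[0]
    function = parts[1] if len(parts) > 1 else "index"
    args = parts[2:]
    return (system, function, args)

print(parse_route("/articles/read/2014/05/my-title"))
# ('articles', 'read', ['2014', '05', 'my-title'])
```

A real framework would also validate each positional value before dispatching it – which is exactly where trouble starts when it doesn’t.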

Each of these individual values requires a specific data type, such as a string, an integer, a certain regular expression match, or countless other possibilities. If data types are not strictly enforced, or – sadly, as often really does happen – user-submitted data is not properly sanitized, then a hacker can potentially gain information to get further access, if not force direct backdoor access via a SQL injection or a remote file inclusion. Such vulnerabilities are so prevalent and consistent a threat that SQL Injection, for example, has made the OWASP Top 10 list for over 14 years.
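As an illustration of what “strictly enforced” means in practice, here is a minimal Python sketch combining a strict type check with a parameterized query; the table, column names and regular expression are invented for the example:

```python
import re
import sqlite3

ARTICLE_ID = re.compile(r"^\d{1,9}$")   # strict integer-only pattern

def fetch_article(db: sqlite3.Connection, raw_id: str):
    # 1. Enforce the expected data type before the value goes anywhere near SQL.
    if not ARTICLE_ID.match(raw_id):
        raise ValueError("invalid article id")
    # 2. Use a parameterized query; never interpolate user input into SQL text.
    cur = db.execute("SELECT title FROM articles WHERE id = ?", (int(raw_id),))
    return cur.fetchone()

db = sqlite3.connect(":memory:")
db.execute("CREATE TABLE articles (id INTEGER PRIMARY KEY, title TEXT)")
db.execute("INSERT INTO articles VALUES (123, 'Hello')")
print(fetch_article(db, "123"))          # ('Hello',)
# fetch_article(db, "123 OR 1=1") raises ValueError: the injection never runs.
```

Either measure alone blocks the classic injection here; together they give defense in depth.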

There exist potentially millions, billions, or more combinations of various URIs in your web application, including ones it may not support by default or even to your knowledge. There could be random phpinfo(); scripts publicly accessible that a developer mistakenly left in, an unchecked user input somewhere, some file upload system that does not properly prevent script execution – any number of possibilities. No security engineer or team can reasonably account for or test all of these possibilities. And black-hat hackers know all this too, sometimes better than those tasked with protecting against these threats.
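To get a rough feel for why this is beyond manual testing, consider how quickly probe combinations multiply. The directories and filenames below are tiny hypothetical stand-ins for the far larger wordlists a real scanner ships with:

```python
from itertools import product

# Hypothetical wordlists; real scanners use vastly larger ones.
LEFTOVER_FILES = ["phpinfo.php", "info.php", "test.php", "backup.zip"]
COMMON_DIRS = ["", "admin/", "old/", "dev/"]

def candidate_urls(base: str):
    """Yield URLs a scanner would probe for forgotten developer files."""
    for d, f in product(COMMON_DIRS, LEFTOVER_FILES):
        yield f"{base.rstrip('/')}/{d}{f}"

urls = list(candidate_urls("https://example.com"))
print(len(urls))   # 16 probes already, from just 4 directories x 4 files
```

Grow each list to a few thousand entries, add every routed parameter and payload variant, and the probe space explodes far past what any team could try by hand.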

Automation Isn’t Just Used By The Good Guys

Many automated security tools exist not to test and find security holes, but to exploit them when found. Black-hat hackers intent on disrupting your web application possess automated suites as well, because they, too, know a manual approach is a waste of time (that is, until they find a useful exploit, and by then it’s sometimes too late).

Some utilities, like Slowloris, exist to exploit known weaknesses in common web services, like the Apache web server itself. Others prey on finding opportunity in the form of insecure common web applications – older versions of WordPress, phpBB, phpMyAdmin, cPanel, or other frequently exploited web applications. There exist dozens of vulnerability categories, each with thousands or millions of attack variants. Looking for these is a daunting task.

As quickly as you can spin up a web application, a hacker can automatically scan it and possibly find vulnerabilities. Leveraging an automated web application vulnerability scanner like Netsparker or Netsparker Cloud provides you with the agility and proactivity to find and prevent threats before they become seriously damaging problems. This holds especially true for complex web applications such as large forum systems, blogging platforms and custom web applications. The more possibility for user-submitted data and functionality, the more opportunity for vulnerabilities to exist and be exploited. And remember, this changes again with every new version of the web application you install. A daunting task, indeed.

Without automation of web application security testing, a true strong security posture is impossible to achieve. Of course, many other layers ultimately exist – least-privilege practice, segregated (jail, chroot, virtual machine) systems, firewalls, etc. – but if the front door is not secure, what does it matter if the walls are impenetrable? With the speed afforded by automation, a strong and capable web vulnerability scanner, and of course patching found flaws and risks, security testing guarantees as best as reasonably possible that the front door to your web application and underlying infrastructure remains reinforced and secure.

  • Hits: 18451

Statistics Highlight the State of Security of Web Applications - Many Still Vulnerable to Hacker Attacks

Netsparker use open source web applications such as Twiki for a totally different purpose than the one they were intended for: they use them to test their own web application security scanners.

Netsparker need to ensure that their scanners are able to crawl and identify attack surfaces on all sorts of web applications, and identify as many vulnerabilities as possible. Hence they frequently scan open source web applications, using them as a test bed for their crawling and scanning engine.

Thanks to this exercise, Netsparker are also helping developers ship more secure code, since they report their findings to the developers and sometimes also help them remediate the issues. When such web application vulnerabilities are identified, Netsparker release an advisory; between 2011 and 2014 Netsparker published 87 advisories.


A few days ago Netsparker released some statistics about the 87 advisories they have published so far. As a quick overview, from these statistics we can see that cross-site scripting is the most common vulnerability in the open source web applications that were scanned. Is it a coincidence? Not really.

The article also explains why most probably many web applications are vulnerable to this flaw, which has featured in the OWASP Top 10 list ever since the list’s inception.

The conclusion we can draw from these statistics is quite predictable, but at the same time shocking: there is still a very long way to go in web application security, i.e. web applications are still poorly coded, making them an easy target for malicious hacker attacks.

  • Hits: 15644

The Implications of Unsecure Webservers & Websites for Organizations & Businesses

Long gone are the days when a simple port scan on a company’s webserver or website was considered enough to identify security issues and exploits that needed to be patched. With all the recent attacks on websites and webservers, which caused millions of dollars in damage, we thought it would be a great idea to analyze the implications vulnerable webservers and websites have for companies, while providing useful information to help IT departments, security engineers and application developers proactively avoid unwanted situations.

Unfortunately, companies and webmasters turn their attention to their webservers and websites after the damage is done, in which case the cost is always greater than that of any proactive measures that could have been taken to avoid the situation.

Most Security Breaches Could Have Been Easily Prevented

Without doubt, corporate websites and webservers are amongst hackers’ preferred targets. Exploiting well-known vulnerabilities provides them with easy access to databases that contain sensitive information such as usernames, passwords, email addresses, credit & debit card numbers, social security numbers and much more.

The sad part of this story is that in most cases, hackers made use of old exploits and vulnerabilities to scan their targets and eventually gain unauthorized access to their systems.

Most security experts agree that if companies proactively scanned and tested their systems using well-known web application security scanner tools, e.g. Netsparker, these security breaches could have been easily avoided. The Online Trust Alliance (OTA) confirms this: they analyzed thousands of security breaches that occurred in the first half of 2014 and concluded that these could have been easily prevented. [Source: OTA Website]

Tools such as Web Application Vulnerability Scanners are used by security professionals to automatically scan websites and web applications for hidden vulnerabilities.

When reading through recent security breaches, we can slowly begin to understand the implications and disastrous effects these had for companies and customers. Quite often, the number of affected users whose information was compromised was in the millions. We should also keep in mind that in many cases, the true magnitude of any such security incident is very rarely made known to the public.

Below are a few of the biggest security data breaches which exposed an unbelievable amount of information to hackers:

 eBay.com – 145 Million Compromised Accounts

In late February – early March 2014, the eBay database that held customer names, encrypted passwords, email addresses, physical addresses, phone numbers, dates of birth and other personal information was compromised, exposing sensitive information to hackers. [Source: bgr.com website]

JPMorgan Chase Bank – 76 Million Household Accounts & 7 Million Small Businesses

In June 2014, JPMorgan Chase bank was hit badly and had sensitive personal and financial data exposed for over 80 million accounts. The hackers appeared to obtain a list of the applications and programs that run on the company’s computers and then crosschecked them with known vulnerabilities for each program and web application in order to find an entry point back into the bank’s systems.
[Source: nytimes.com website]

Find security holes on your websites and fix them before they do by scanning your websites and web applications with a Web Application Security Scanner.

Forbes.com – 1 Million User Accounts

In February 2014, the Forbes.com website succumbed to an attack that leaked over 1 million user accounts containing email addresses, passwords and more.  The Forbes.com WordPress-based backend site was defaced with a number of news posts. [Source: cnet.com website]

Snapchat.com – 4.6 Million Username Accounts & Phone numbers

In January 2014, Snapchat’s popular website had over 4.6 million usernames and phone numbers exposed due to a brute force enumeration attack against the Snapchat API. The information was publicly posted on several other sites, creating a major security concern for Snapchat and its users.
[Source: cnbc.com website]

USA Businesses: Nasdaq, 7-Eleven and others – 160 Million Credit & Debit Cards

In 2013 a massive underground attack was uncovered, revealing that over 160 million credit and debit cards had been stolen during the preceding seven years. Five Russians and Ukrainians used advanced hacking techniques to steal the information during these years.  Attackers targeted over 800,000 bank accounts and penetrated servers used by the Nasdaq stock exchange.
[Source: nydailynews.com website]

AT&T - 114,000 iPad Owners (Includes White House Officers, US Senate & Military Officials)

In 2010, a major security breach on AT&T’s website compromised over 114,000 customer accounts, revealing names, email addresses and other information. AT&T acknowledged the attack on its webservers and commented that the risk was limited to the subscriber’s email address.
Amongst the list were apparently officers from the White House, member of the US Senate, staff from NASA, New York Times, Viacom, Time Warner, bankers and many more. [Source: theguardian.com website]

Target  - 98 Million Credit & Debit Cards Stolen

In 2013, between the 27th of November and the 15th of December, more than 98 million credit and debit card accounts were stolen from 1,787 Target stores across the United States. Hackers managed to install malware on Target’s computer systems to capture customers’ cards and then installed exfiltration malware to move stolen credit card numbers to staging points around the United States in order to cover their tracks. The information was then moved to the hackers’ computers located in Russia.

The odd part of this security breach is that the infiltration was caught by FireEye – the $1.6 million malware detection tool purchased by Target. However, according to online sources, when the alarm was raised at the security team in Minneapolis, no action was taken, and 40 million credit card numbers and 70 million addresses, phone numbers and other records were pulled out of Target’s mainframes!  [Source: Bloomberg website]

SQL Injection & Cross-Site Scripting are among the most popular attack methods against websites and web applications. Security tools such as Web Vulnerability Scanners allow us to uncover these vulnerabilities and fix them before hackers exploit them.
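While a scanner finds such flaws, the classic remediation for Cross-Site Scripting is escaping user input when it is written into HTML output. Here is a minimal Python sketch; the surrounding markup and function name are invented for illustration:

```python
import html

def render_comment(user_text: str) -> str:
    """Escape user-submitted text before embedding it in HTML output."""
    # html.escape converts <, >, &, and quotes into harmless entities,
    # so any injected <script> tag is displayed as text, not executed.
    return f"<p class='comment'>{html.escape(user_text)}</p>"

payload = "<script>alert('xss')</script>"
print(render_comment(payload))
# <p class='comment'>&lt;script&gt;alert(&#x27;xss&#x27;)&lt;/script&gt;</p>
```

The same principle – escape on output, appropriate to the context (HTML body, attribute, JavaScript, URL) – underlies the templating engines of most modern web frameworks.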

Implications for Organizations & Businesses

It goes without saying that organizations suffer major damage and losses when it comes to security breaches. When a security breach happens to affect millions of users, as in the above examples, it’s almost impossible to calculate an exact dollar ($) figure.

Security Experts agree that data security breaches are among the biggest challenges organizations face today as the problem has both financial and legal implications.

Business Loss is the biggest contributor to overall data breach costs because it breaks down into a number of sub-categories, the most important of which are outlined below:

  • Detection of the data breach. Depending on the type of security breach, the business can lose substantial amounts of money until the breach is successfully detected. Common examples are defaced websites, customer orders and credit card information being redirected to hackers, and orders being manipulated or declined.
  • Escalation Costs. Once the security breach has been identified, emergency security measures are usually put into action. This typically involves bringing in Internet security specialists, the cybercrime unit (police) and other forces, to help identify the source of the attack and damage it has caused. Data backups are checked for their integrity and everyone is on high-alert.
  • Notification Costs. Customers and users must be notified as soon as possible. Email alerts, phone calls and other means are used to get in contact with the customers and request them to change passwords, details and other sensitive information. The company might also need to put together a special team that will track and monitor customer responses and reactions.
  • Customer Attrition. Also known as customer defection. After a serious incident involving sensitive customer data being exposed, customers are more likely to stop purchasing and using the company’s services. Gaining initially a customer’s trust requires sacrifices and hard work – trying to re-gain it after such an incident means even more sacrifices and significantly greater costs. In many cases, customers choose to not deal with the company ever again, costing it thousands or millions of dollars.
  • Legal Implications. In many cases, customers have turned against companies after their personal information was exposed by a security breach. Legal actions against companies are usually followed by lengthy lawsuits which end up costing thousands of dollars, not to mention any financial compensation that may be awarded to the end customers.  One example is Target’s security breach mentioned previously, which is now the subject of multiple lawsuits from customers.

As outlined previously, the risk for organizations is high and there is a lot at stake, from both a financial and a legal perspective.  The security breach examples mentioned in this article illustrate how big and serious a security breach can become, as well as the implications for companies and customers. Our next article will focus on guidelines that can help prevent data breaches and help your organization, company or business deal with them.

  • Hits: 31254

The Importance of Monitoring and Controlling Web Traffic in Enterprise & SMB Networks - Protecting from Malicious Websites - Part 1

This article expands on our popular security articles (Part 1 & Part 2) that covered the importance of patching enterprise and SMB network systems to protect them from hijacking, hacking attempts, unauthorized access to sensitive data and more. While patching systems is essential, another equally important step is the monitoring of Web traffic to control user activity on the web and prevent users from accessing dangerous sites and Internet resources that could jeopardize the company’s security.

The ancient maxim – prevention is better than cure – holds true in cyberspace as well, and it is prudent to detect early signs of trouble which, if allowed to continue, might snowball into something uncontrollable. One of the best means of such prevention is monitoring web traffic to locate potential sources of trouble.

Even if attackers are unable to gain access to your network, they can still hold you to ransom by launching a Distributed Denial of Service (DDoS) attack, wherein they choke the bandwidth of your network so that regular customers cannot gain access to your servers. Generally, downtime for any company these days translates to loss of income and damage to the company’s reputation. Attackers might also refuse to relent until a ransom is paid. Sounds a bit too far-fetched? Not really.

Live Attacks & Hacking Attempts On The Internet

It’s hard to imagine what is really happening right now on the Internet: how many attacks are taking place, the magnitude of these attacks, the services used to launch them, attack origins, attack targets and much more.  Hopefully we’ll be able to help change that for you right now…

The screenshot below was taken after monitoring the Norse network which collects and analyzes live threat intelligence from darknets in hundreds of locations in over 40 countries. The attacks are taken from a small subset of live flows against the Norse honeypot infrastructure and represent actual worldwide cyber-attacks:

Click to enlarge

In around 15 minutes of monitoring attacks, we saw more than 5000 different origins launching attacks against over 5800 targets, of which 99% were located in the United States, while 50% of the attack origins were in China.

The sad truth is that the majority of these attacks are initiated from compromised computer systems & servers with unrestricted web access. All it takes today is for one system to visit an infected site, and that could be enough to bring down the whole enterprise network infrastructure while simultaneously launching a massive attack against Internet targets.

In June 2014, Evernote and Feedly, which largely work in tandem, went down with a DDoS attack within two days of each other. Evernote recovered the same day, but Feedly had to suffer more. Although there were two more DDoS attacks on Feedly that caused it to lose business for another two days, normalcy was finally restored. According to the CEO of Feedly, they refused to give in to the demands of ransom in exchange for ending the attack and were successful in neutralizing the threat.

Domino’s Pizza had over 600,000 Belgian and French customer records stolen by the hacking group Rex Mundi. The attackers demanded $40,000 from the fast food chain in exchange for not publishing the data online. It is not clear whether Domino’s complied with the ransom demands. However, they reassured their customers that although the attackers did have their names, addresses and phone numbers, they were unsuccessful in stealing their financial and banking information. The Twitter account of the hacking group was suspended, and they never released the information.

Apart from external attacks, misbehavior from employees can cause equal if not greater damage. Employees viewing pornographic material in the workplace can lead to a huge number of issues. Not only is porn one of the biggest time wasters, it chokes up the network bandwidth with non-productive downloads, including bringing in unwanted viruses, malware and Trojans. Co-workers unwillingly exposed to offensive images can find the workplace uncomfortable, and this may further lead to charges of sexual harassment, dismissals and lawsuits, all expensive and disruptive.

Another major problem is data leakage via e-mail or webmail, whether intended or accidental. Client data, unreleased financial data and confidential plans leaked through emails may have a devastating impact on the business, including loss of client confidence.

Web monitoring provides answers to several of these problems. This type of monitoring need not be very intrusive or onerous, but with the right policies and training, employees easily learn to differentiate between appropriate and inappropriate use.

Few Of The Biggest Web Problems

To monitor the web, you must know the issues that you need to focus on. Although organizations differ in their values, policies and culture, there are some common major issues on the Web that cause the biggest headaches:

  • Torrents And Peer-To-Peer Networks offer free software, chat, music and video, which can be easily downloaded. However, this can hog the bandwidth, causing disruptions in operations such as video conferencing and VoIP. Moreover, such sites also contain pirated software, bootlegged movies and inappropriate content that are mostly tainted with various types of viruses and Trojans.
  • Gaming sites are notorious for hogging bandwidth and wasting productive time. Employees often find these sites hard to resist and download games. Most of these games carry lethal payloads of viruses and other malware, with hackers finding them a common vehicle for SEO poisoning. Even when safe, games disrupt productivity and clog the network.
  • Fun sites, although providing a harmless means of relieving stress, may be offensive and inappropriate to coworkers. Whether or not your policies allow such humor sites, they can contain SEO-poisoned links and Trojans, often clogging networks with their video components.
  • Online Shopping may relate to purchase of work-appropriate items as well as personal. Although the actual purchase may not take up much time, surfing for the right product is a huge time waster, especially for personal items. Individual policies may either limit the access to certain hours of the day or block these sites altogether.
  • Non-Productive Surfing can be a huge productivity killer for any organization. Employees may be obsessed with tracking shares, sports news or deals on commercial sites such as Craigslist and eBay. Company policies can block access to such sites entirely, or limit the time spent on such sites to only during lunchtime.
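A per-category policy like those above can be sketched in a few lines. The category map, domains and policy names below are invented for illustration; commercial web filters rely on vendor-maintained category databases covering millions of sites:

```python
from urllib.parse import urlparse

# Hypothetical policy: what to do with each content category.
CATEGORY_POLICY = {"p2p": "block", "gaming": "block",
                   "shopping": "lunch-only", "news": "allow"}

# Hypothetical domain-to-category map (real filters use vendor databases).
DOMAIN_CATEGORY = {"tracker.example": "p2p", "games.example": "gaming",
                   "shop.example": "shopping"}

def check_url(url: str) -> str:
    """Return the policy verdict for a requested URL."""
    host = urlparse(url).hostname or ""
    category = DOMAIN_CATEGORY.get(host, "uncategorised")
    return CATEGORY_POLICY.get(category, "allow")

print(check_url("http://tracker.example/announce"))  # block
print(check_url("http://shop.example/deals"))        # lunch-only
```

Time-based rules like “shopping at lunchtime only” would simply extend the verdict with a check against the clock before allowing the request through.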

According to a survey involving over 3,000 employees, Salary.com found over 60% visiting sites unrelated to their work every day. More than 20% spent over five hours a week on non-work related sites. Nearly half of those surveyed looked for a new job using office computers during their work time.

In the next part of our article, we will examine the importance of putting a company security policy in place to help prevent users from visiting sites they shouldn’t, stop the wasting of valuable time and resources on activities that can compromise the enterprise’s network security, and more. We will also take an in-depth look at how to effectively monitor and control traffic activity on the Web in real time, plus much more.

 

  • Hits: 18050

The Most Dangerous Websites On The Internet & How To Effectively Protect Your Enterprise From Them

Companies and users around the world are struggling to keep their network environments safe from malicious attacks and hijacking attempts by leveraging services provided by high-end firewalls, Intrusion Detection Systems (IDS), antivirus software and other means.  While these appliances can mitigate attacks and hacking attempts, we often see the whole security infrastructure fail because of attacks initiated from the inside, effectively bypassing all protection offered by these systems.

I’m sure most readers will agree when I say that end-users are usually responsible for attacks that originate from the internal network infrastructure. A frequent example is when users find a link while browsing the Internet: they tend to click on it to see where it goes, even if the context suggests that the link may be malicious. Users are unaware of the hidden dangers and the potential damage that can be caused by clicking on such links.

The implications of following links with malicious content can vary for each company, however, we outline a few common cases often seen or read about:

  • Hijacking of the company’s VoIP system, generating huge bills from calls made to overseas destination numbers (toll fraud)
  • The company’s servers are overloaded by thousands of requests made from the infected workstation(s)
  • Sensitive information is pulled from the workstations and sent to the hackers
  • Company Email servers are used to generate and send millions of spam emails, eventually placing them on a blacklist and causing massive communication disruptions
  • Remote control software is installed on the workstations, allowing hackers to see everything the user is doing on their desktop
  • Torrents are downloaded and seeded directly from the company’s Internet lines, causing major WAN disruptions and delays

As you can see there are countless examples we can analyze to help us understand how serious the problem can become.

Download this whitepaper if you are interested to:

  • Learn which are the Top 10 Dangerous sites users visit
  • Learn the Pros and Cons of each website category
  • Understand why web content filtering is important
  • Learn how to effectively block sites from compromising your network
  • Learn how to limit the amount of the time users can access websites
  • Effectively protect your network from end-user ‘mistakes’
  • Ensure user web-browsing does not abuse your Internet line or Email servers

We apologise; however, the whitepaper is no longer available from the vendor.  Head to our homepage to read up on new network and security related articles.

 

Continue reading

  • Hits: 26203

Download Your Free Whitepaper: How to Secure your Network from Cyber Attacks

Cybercriminals are now focusing their attention on small and mid-sized businesses because they are typically easier targets than large, multinational corporations.
This white paper examines the rising security threats that put small and medium businesses at risk. It also highlights important security considerations that SMBs should be aware of.

Download this whitepaper if you’re interested to:

  • Learn on how to adopt best practices and boost your business security.
  • Evaluate the SMB digital footprint.
  • Know what to look for in new security solutions.

We apologise; however, the whitepaper is no longer available from the vendor.  Head to our homepage to read up on new network and security related articles.

  • Hits: 17717

A Networked World: New IT Security Challenges

This is the age of networks. Long ago, they said, ‘the mainframe is the computer’. Then it changed to ‘the PC is the computer’. That was followed by ‘the network is the computer’. Our world has been shrunk, enlightened and sped up by this globe-encapsulating mesh of interconnectivity. Isolation is a thing of the past. Now my phone brings up my entire music collection residing on my home computer. My car navigates around the city, avoiding traffic in real time. We have started living in intelligent homes where we can control objects remotely.

On a larger scale, our road traffic system, security CCTV, air traffic control, power stations, nuclear power plants, financial institutions and even certain military assets are administered using networks. We are all part of this great cyber space. But how safe are we? What is our current level of vulnerability?

Tower, Am I Cleared For Landing?

March 10, 1997: It was a routine day of activity at Air Traffic Control (ATC) at Worcester, Massachusetts, with flight activity at its peak. Suddenly the ground to air communications system went down. This meant that ATC could not communicate with approaching aircraft trying to land. This was a serious threat to all aircraft and passengers using that airport. All incoming flights had to be diverted to another airport to avoid a disaster.

This mayhem was caused by a 17-year-old hacker named Jester. He had used a normal telephone line and physically tapped into it, giving him complete control of the airport’s entire communications system. His intrusion was via a telephone junction box, which in turn was part of a high-end fibre backbone. He was caught when, directed by the United States Secret Service, the telephone company traced the data streams back to the hacker’s parents’ house. Jester was the first juvenile to be charged under the Computer Crimes Law.

As our world becomes more and more computerised and our computer systems start interconnecting, the level of vulnerability goes up. But should this mean an end to all advancement in our lives? No. We need to make sure we are safe and the things that make our lives easier and safer are also secure.

Intruder Alert

April 1994: A US Air Force Base realised that their high level security network had not just been hacked: secure documents had been stolen. This resulted in an internal cyber man-hunt. The bait was laid and all further intrusions were monitored. A team of 50 Federal Agents finally tracked down 2 hackers who were using US based social networking systems to hack into the Air Force Base. But it was later revealed that the scope of intrusion was not just limited to the base itself: they had infiltrated a much bigger military organisation. The perpetrators were hackers with the aliases of ‘datastreamcowboy’ and ‘kuji’.

‘Datastreamcowboy’ was a 16 year old British national who was apprehended on May 4th 1994, and ‘kuji’ was a 21 year old technician named Mathew Bevan from Cardiff, Wales. ‘datastreamcowboy’ was like an apprentice to ‘kuji’. ‘datastreamcowboy’ would try a method of intrusion and, if he failed, he would go back to ‘kuji’ for guidance. ‘kuji’ would mentor him to a point that on subsequent attempts ‘datastreamcowboy’ would succeed.

What was their motive? Bragging rights in the world of hacking for being able to penetrate the security of the holy grail of all hackers: the Pentagon.

But the future might not see such benign motives at play. As command and control of military installations is becoming computerised and networked, it has become imperative to safeguard against intruders who might break into an armoury with the purpose of causing damage to it or to control and use it with malice.

Social Virus

October 2005: The social networking site MySpace was crippled by a highly infectious computer virus. The virus took control of millions of online MySpace profiles and broadcasted the hacker’s messages. The modus operandi of the hacker was to place a virus on his own profile. Whenever someone visited his profile page, he/she would be infected and their profile would show the hacker’s profile message. These new users now being infected would spread the infection through their friends on MySpace, and this created a massive chain reaction within the social network community. The mass infection caused the entire MySpace social network to grind to a halt.

The creator of this mayhem was Samy Kamkar, a 19-year-old. His attack was not very well organised, however: he left digital footprints and was later caught. Banned from using a computer for 3 years, he later became a security consultant helping companies and institutions safeguard themselves against attacks.

What that showed the world was the fact that a cyber attack could come from anywhere, anytime.

In our current digital world we already know that a lot of our complex systems like Air Traffic Control, power stations, dams, etc are controlled and monitored using computers and networks. Let’s try to understand the technology behind it to gauge where the security vulnerabilities come from.

SCADA: Observer & Controller

Over the last few decades, SCADA technology has enabled us to have greater control over predominantly mechanical systems which were, by design, very isolated. But what is SCADA? What does it stand for?

SCADA is an acronym for Supervisory Control And Data Acquisition. A quick search on the internet and you would find the definition to be as follows:

SCADA (supervisory control and data acquisition) is a type of industrial control system (ICS). Industrial control systems are computer controlled systems that monitor and control industrial processes that exist in the physical world. SCADA systems historically distinguish themselves from other ICS systems by being large scale processes that can include multiple sites and large distances. These processes include industrial, infrastructure, and facility-based processes as described below:

  • Industrial processes include those of manufacturing, production, power generation, fabrication and refining, and may run in continuous, batch, repetitive, or discrete modes.
  • Infrastructure processes may be public or private and include water treatment and distribution, wastewater collection and treatment, oil and gas pipelines, electrical power transmission and distribution, wind farms, civil defence siren systems and large communication systems.
  • Facility processes occur both in public facilities and private ones, including buildings, airports, ships, and space stations. They monitor and control heating, ventilation and air conditioning systems (HVAC), access and energy consumption.

This effectively lets us control the landing lights on a runway, gates of a reservoir or a dam, connection and disconnection of power grids to a city supply.

Over the last decade all such systems have become connected to the internet. However, when SCADA was being developed no thought was given to security. No one imagined that a SCADA based system would end up on the internet. Functionality and convenience were given higher priority and security was ignored, hence SCADA carries the burden of inherent security flaws.

Tests have been performed extensively to map the vulnerabilities of a networked SCADA system. A test was done on a federal prison which used SCADA to control gates and security infrastructure. Within two weeks, a test hacker had full control of all the cell doors. The kit the hacker used was purchased from the open market for a value as low as $2500.

But, thankfully, more and more thought is given today when designing a SCADA based system which will be used over a network. Strict security policies and intrusion detection and avoidance technologies are implemented.

Where’s My Money?

The years 1994 – 1995 saw a momentous change in our financial industry: the entire financial sector went online. Paper transactions were a thing of the past. Vast sums of money now change location in a matter of milliseconds. The share markets, along with complex monetary assets, now trade using the same cyber space which we use for social networking, shopping etc. As this involved money being transferred in unimaginable amounts, the financial industry, especially banks, went to great lengths to protect itself.

As happens in our physical world, with the advent of better locks thieves change their ways to adapt. Hackers have developed tools that can bypass encryption to steal funds, or even hold an entire institution to ransom. The average annual loss due to a cyber heist has been estimated at nearly 1.3 million dollars. Since banks hardly hold any cash in their branches, an ordinary bank robbery would hardly amount to $6000 – $8000 in hard cash.

Cyber heist is a criminal industry with staggering rewards. The magnitude is in hundreds of billions of dollars. But most cyber intrusions in this industry go unreported because of its long term impact on the compromised institution’s reputation and credibility.

Your Card Is Now My Card!

2005: Miami, Florida. A Miami hacker made history in cyber theft. Albert Gonzalez would drive around Miami streets looking for unsecured wireless networks. He hooked onto the unsecured wireless network of a retailer, used it to reach the retailer’s headquarters and stole credit card numbers from its databases. He then sold these card details to Eastern European cyber criminals. In the first year, he stole 11.2 million card details. By the end of the second year he had stolen about 90 million card details.

He was arrested in July 2007 while trying to use one of these stolen cards. On subsequent interrogation it was revealed that he had stored away 43 million credit card details on servers in Latvia and Ukraine.

In recent times we know a certain gaming console organisation had its online gaming network hacked and customer details stolen. For that organisation, the security measures taken subsequent to that intrusion were ‘too little too late’, but all such companies that hold customer credit card details consequently improved their network security setup.

Meltdown By Swatting

January 2005: A hacker with the alias ‘dshocker’ was carrying out an all-out attack on several big corporations in the US. He used stolen credit cards to fund his hacking activities. He managed to break through a firewall and infect large numbers of computers. This enabled him to take control of all of those machines and use their collective computing power to carry out a Denial of Service attack on the corporation itself. The entire network went into a meltdown. Then he did something that is known today as ‘swatting’. Swatting dupes the emergency services into sending out an emergency response team. These false alarms and follow-up raids end up costing the civic authorities vast sums of money and resources.

He was finally arrested when his fraudulent credit card activities caught up with him.

Playing Safe In Today’s World

Today technology is a great equaliser. It has given the sort of power to individuals that only nations could boast of in the past. All the network intrusions and their subsequent effects can be used individually or together to bring a nation to its knees. The attackers can hide behind the cyber world and their attacks can strike anyone without warning. So what we need to do is to stay a step ahead.

We can’t abolish using the network, the cloud or the things that have given us more productivity and efficiency. We need to envelop ourselves with stricter security measures to ensure that all that belongs to us is safe, and amenities used by us everyday are not turned against us. This goes for everyone, big organisations and the individual using his home network.

At home, keep your wireless internet connection locked down with a proper password. Do not leave any default passwords unchanged: that is a security flaw that can be taken advantage of.

On your PCs and desktops, every operating system comes with its own firewall. Keep it on. Turning it off for convenience will cost you more than keeping it on and allowing only certain applications to communicate safely with the internet.

In your emails, if you don't recognise a sender's email address, do not respond or click on any of the links it may carry. These can be viruses ready to attack your machines and create a security hole through which the hacker will enter your home network. And for cyber's sake, please, you haven't won a lottery or inherited millions from a dead relative. All those emails telling you so are fakes. They are only worth deleting.

The simple exercise of keeping your pop-up blocker turned on will make surfing through your browser a lot safer. Your operating system, whether Windows or Linux, lets you keep a guest account, so whenever a ‘guest’ wants to check his/her emails or surf the web, have them use this account instead of your own. Not that you don't trust your guest, but they might innocently click on something while surfing and not know what cyber nastiness they have invited into your machine. The guest account has minimal privileges, so it is safer. Also, all accounts must have proper passwords. Don't let your machine boot up to an administrator account with no password set: that is a recipe for disaster. Don't use a café's wireless network to check your bank balance. That can wait till you reach home. Or just call the bank. That's safer.

At work, please don't plug an unauthorised wireless access point into your corporate network; this can severely compromise it. Use strong passwords for accounts, and remove old accounts that are no longer used. Incorporate strong firewall rules and demarcate an effective DMZ so that you stay safer. Stop trying to find a way to jump over a proxy, or disable it: you are using company time for a purpose that can't be work related. If access is genuinely needed, ask the network administrator for assistance.

I am not an alarmist, nor do I believe in sensationalism. I believe in staying safe so that I can enjoy the fruits of technology. And so should you, because you deserve it.

Readers can also visit our Network Security section which offers a number of interesting articles covering Network Security.

About the Writer

Arani Mukherjee holds a Master's degree in Distributed Computing Systems from the University of Greenwich, UK, and works as a network designer and innovator for remote management systems for a major telecoms company in the UK. He is an avid reader of anything related to networking and computing. Arani is a highly valued and respected member of Firewall.cx, offering knowledge and expertise to the global community since 2005.

 

  • Hits: 16893

Introduction To Network Security - Part 2

This article builds upon our first article Introduction to Network Security - Part 1. This article is split into 5 pages and covers a variety of topics including:

  • Tools an Attacker Uses
  • General Network Tools
  • Exploits
  • Port Scanners
  • Network Sniffers
  • Vulnerability Scanners
  • Password Crackers
  • What is Penetration Testing
  • More Tools
  • Common Exploits
  • A Brief Walk-through of an Attack
  • and more.

Tools An Attacker Uses

Now that we've concluded a brief introduction to the types of threats faced by both home users and the enterprise, it is time to have a look at some of the tools that attackers use.

Keep in mind that a lot of these tools have legitimate purposes and are very useful to administrators as well. For example, I can use a network sniffer to diagnose a low level network problem, or I can use it to collect your password. It just depends which shade of hat I choose to wear.

General Network Tools

As surprising as it might sound, some of the most powerful tools, especially in the beginning stages of an attack, are the regular network tools available with most operating systems. For example, an attacker will usually query the 'whois' databases for information on the target. After that he might use 'nslookup' to see if he can transfer the whole contents of their DNS zone (called a zone transfer -- big surprise!). This will let him identify high profile targets such as webservers, mailservers, DNS servers etc. He might also be able to figure out what different systems do based on their DNS names -- for example sqlserver.victim.com would most likely be a database server. Other important tools include traceroute to map the network and ping to check which hosts are alive. You should make sure your firewall blocks ping requests and traceroute packets.
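The first-pass footprinting these tools perform can be sketched in a few lines of Python using only the standard library. This is an illustrative sketch of name resolution (what 'nslookup' does), not any particular attacker's toolkit; run it only against hosts you own or are authorised to test:

```python
import socket

def footprint(host):
    """First-pass recon: resolve a hostname to its canonical name,
    any aliases, and all known IPv4 addresses, much like nslookup."""
    name, aliases, addresses = socket.gethostbyname_ex(host)
    return {"canonical": name, "aliases": aliases, "addresses": addresses}

# Example: resolving the local machine
print(footprint("localhost"))
```

A real recon pass would repeat this over the whois output and any names leaked by a zone transfer, building a map of the target's infrastructure.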

Exploits

An exploit is a generic term for the code that actually 'exploits' a vulnerability in a system. The exploit can be a script that causes the target machine to crash in a controlled manner (eg: a buffer overflow) or it could be a program that takes advantage of a misconfiguration.

A 0-day exploit is an exploit that is unknown to the security community as a whole. Since most vulnerabilities are patched within 24 hours of disclosure, 0-day exploits are the ones for which the vendor has not yet released a patch. Attackers keep large collections of exploits for different systems and different services, so when they attack a network, they find a host running a vulnerable version of some service and then use the relevant exploit.

Port Scanners

Most of you will know what portscanners are. Any system that offers TCP or UDP services will have an open port for that service. For example if you're serving up webpages, you'll likely have TCP port 80 open, FTP is TCP port 20/21, Telnet is TCP 23, SNMP is UDP port 161 and so on.

A portscanner scans a host or a range of hosts to determine what ports are open and what service is running on them. This tells the attacker which systems can be attacked.
For example, if I scan a webserver and find that port 80 is running an old webserver -- IIS/4.0, I can target this system with my collection of exploits for IIS 4. Usually the port scanning will be conducted at the start of the attack, to determine which hosts are interesting.

This is when the attacker is still footprinting the network -- feeling his way around to get an idea of what type of services are offered and what operating systems are in use. One of the best portscanners around is Nmap (https://www.insecure.org/nmap). Nmap runs on just about every operating system, is very versatile in how it lets you scan a system, and has many features including OS fingerprinting, service version scanning and stealth scanning. Another popular scanner is SuperScan (https://www.foundstone.com), which is only for the Windows platform.
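The mechanics of the simplest scan type, a TCP connect() scan, fit in a few lines of Python. This is a minimal sketch of the idea: a port is 'open' if the three-way handshake completes. Real scanners like Nmap add raw SYN packets, parallelism and timing tricks. Scan only hosts you are authorised to probe:

```python
import socket

def connect_scan(host, ports, timeout=0.5):
    """Report which TCP ports complete a full three-way handshake.
    connect_ex() returns 0 on success instead of raising an exception."""
    open_ports = []
    for port in ports:
        with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
            s.settimeout(timeout)
            if s.connect_ex((host, port)) == 0:
                open_ports.append(port)
    return open_ports

# Example: check web and mail ports on a host you control
# connect_scan("192.0.2.10", [25, 80, 443])
```

Because every connection fully completes, this scan is noisy and shows up in service logs, which is exactly why stealthier half-open (SYN) scans exist.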

Network Sniffers

A network sniffer puts the computer's NIC (network interface card or LAN card) into 'promiscuous mode'. In this mode, the NIC picks up all the traffic on its subnet regardless of whether it was meant for it or not. Attackers set up sniffers so that they can capture all the network traffic and pull out logins and passwords. The most popular network sniffer is TCPdump, as it can be run from the command line -- which is usually the level of access a remote attacker will get. Other popular sniffers are Iris and Ethereal.

When the target network is a switched environment (a network which uses layer 2 switches), a conventional network sniffer will not be of any use. For such cases, the switched-network sniffers Ettercap (http://ettercap.sourceforge.net) and Wireshark (https://www.wireshark.org) are very popular. Such programs are usually run alongside other hacking-capable applications that allow the attacker to collect passwords, hijack sessions, modify ongoing connections and kill connections. Such programs can even sniff secured communications like SSL (used for secure webpages) and SSH1 (Secure Shell -- a remote access service like telnet, but encrypted).

Vulnerability Scanners

A vulnerability scanner is like a portscanner on steroids: once it has identified which services are running, it checks the system against a large database of known vulnerabilities and then prepares a report on what security holes are found. The software can be updated to scan for the latest security holes. Unfortunately, these tools are very simple to use, so many script kiddies simply point them at a target machine to find out what they can attack. The most popular ones are Retina (http://www.eeye.com), Nessus (http://www.nessus.org) and GFI LanScan (http://www.gfi.com). These are very useful tools for admins as well, as they can scan their whole network and get a detailed summary of what holes exist.
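At its core, this kind of scanner is banner grabbing plus a signature lookup. The sketch below uses a tiny hypothetical two-entry "database" with made-up note text; real products like Nessus ship thousands of versioned checks. The banner strings and notes here are illustrative assumptions, not real advisory data:

```python
import socket

# Hypothetical signature database: banner substring -> note.
KNOWN_VULNERABLE = {
    "IIS/4.0": "outdated webserver, multiple public exploits",
    "wu-ftpd 2.6.0": "known remote exploit, upgrade required",
}

def grab_banner(host, port, timeout=2.0):
    """Connect and read whatever the service announces about itself."""
    with socket.create_connection((host, port), timeout=timeout) as s:
        return s.recv(1024).decode(errors="replace").strip()

def check_banner(banner):
    """Return the notes for every known-vulnerable signature in the banner."""
    return [note for sig, note in KNOWN_VULNERABLE.items() if sig in banner]
```

This also shows why banner-based scanners produce false positives: a patched service that still reports an old version string will match the signature anyway.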

Password Crackers

Once an attacker has gained some level of access, he/she usually goes after the password file on the relevant machine. In UNIX-like systems this is the /etc/passwd or /etc/shadow file, and in Windows it is the SAM database. Once he gets hold of this file, it's usually game over: he runs it through a password cracker that will usually guarantee him further access. Running a password cracker against your own password files can be a scary and enlightening experience. L0phtcrack cracked my old password fR7x!5kK after being left on for just one night!

There are essentially two methods of password cracking:

Dictionary Mode - In this mode, the attacker feeds the cracker a word list of common passwords such as 'abc123' or 'password'. The cracker will try each of these passwords and note where it gets a match. This mode is useful when the attacker knows something about the target. Say I know that the passwords for the servers in your business are the names of Greek Gods (yes Chris, that's a shout-out to you ;)) I can find a dictionary list of Greek God names and run it through the password cracker.

Most attackers have a large collection of wordlists. For example when I do penetration testing work, I usually use common password lists, Indian name lists and a couple of customized lists based on what I know about the company (usually data I pick up from their company website). Many people think that adding on a couple of numbers at the start or end of a password (for example 'superman99') makes the password very difficult to crack. This is a myth as most password crackers have the option of adding numbers to the end of words from the wordlist. While it may take the attacker 30 minutes more to crack your password, it does not make it much more secure.
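Dictionary mode, including the digit-suffix trick just described, is easy to illustrate against a stolen password hash. This sketch uses SHA-256 for simplicity; real crackers target the specific hash formats found in /etc/shadow or the SAM database. Crack only hashes you are authorised to audit:

```python
import hashlib

def dictionary_crack(target_hash, wordlist, max_suffix=99):
    """Try each word bare, then with numeric suffixes 0..max_suffix,
    so 'superman99' falls to a wordlist that merely contains 'superman'."""
    suffixes = [""] + [str(n) for n in range(max_suffix + 1)]
    for word in wordlist:
        for suffix in suffixes:
            candidate = word + suffix
            if hashlib.sha256(candidate.encode()).hexdigest() == target_hash:
                return candidate
    return None

# Appending digits barely slows the cracker down -- the inner loop is why.
stolen = hashlib.sha256(b"superman99").hexdigest()
print(dictionary_crack(stolen, ["password", "batman", "superman"]))  # superman99
```

The suffix loop multiplies the work by only about a hundred, which is why 'superman99' is not meaningfully stronger than 'superman'.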

Brute Force Mode - In this mode, the password cracker will try every possible combination for the password. In other words it will try aaaaa, aaaab, aaaac, aaaad etc. This method will crack every possible password -- it's just a matter of how long it takes. It can turn up surprising results because of the power of modern computers. A 5-6 character alphanumeric password is crackable within a few hours or a few days, depending on the speed of the software and machine. Powerful crackers include L0phtcrack for Windows passwords and John the Ripper for UNIX-style passwords.

For each category, I have listed one or two tools as an example. At the end of this article I will present a more detailed list of tools with descriptions and possible uses.


What is Penetration-Testing?

Penetration testing is basically when you hire (or perform yourself) security consultants to attack your network the way an attacker would do it, and report the results to you enumerating what holes were found, and how to fix them. It's basically breaking into your own network to see how others would do it.

While many admins like to run quick probes and port scans on their systems, this is not a penetration test -- a penetration tester will use a variety of specialised methods and tools from the underground to attempt to gain access to the network. Depending on what level of testing you have asked for, the tester may even go so far as to call up employees and try to social engineer their passwords out of them (social engineering involves fooling a mark into revealing information they should not reveal).

An example of social engineering could be an attacker pretending to be someone from the IT department and asking a user to reset his password. Penetration testing is probably the only honest way to figure out what security problems your network faces. It can be done by an administrator who is security aware, but it is usually better to pay an outside consultant who will do a more thorough job.

I find there's a lack of worthwhile information online about penetration testing -- nobody really goes about describing a good pen test, and what you should and shouldn't do. So I've hand picked a couple of good papers on the subject and then given you a list of my favourite tools, and the way I like to do things in a pen-test.

This is by no means the only way to do things, it's like subnetting -- everyone has their own method -- this is just a systematic approach that works very well as a set of guidelines. Depending on how much information you are given about the targets as well as what level of testing you're allowed to do, this method can be adapted.

Papers Covering Penetration Testing

I consider the following works essential reading for anyone who is interested in performing pen-tests, whether for yourself or if you're planning a career in security:

'Penetration Testing Methodology - For Fun And Profit' - Efrain Tores and LoNoise, you can google for this paper and find it.

'An Approach To Systematic Network Auditing' - Mixter (http://mixter.void.ru)

'Penetration Testing - The Third Party Hacker' - Jessica Lowery. Boy is this ever a good paper ! (https://www.sans.org/rr/papers/index.php?id=264)

'Penetration Testing - Technical Overview' - Timothy P. Layton Sr. also from the www.sans.org (https://www.sans.org) reading room

Pen-test Setup

I don't like working from laptops unless it's absolutely imperative, like when you have to do a test from the inside. For the external tests I use a Windows XP machine with Cygwin (www.cygwin.com) and VMware (www.vmware.com): most Linux exploits compile fine under Cygwin, and if they don't, I shove them into VMware where I have virtual machines of Red Hat, Mandrake and Win2k boxes. In case that doesn't work, the system also dual boots Red Hat 9, and often I'll just work everything out from there.

I feel the advantage of using a Microsoft platform often comes from the fact that 90% of your targets may be Microsoft systems. However the flexibility under Linux is incomparable; it is truly the OS of choice for any serious hacker... and as a result, for any serious security professional. There is no best OS for penetration testing -- it depends on what you need to test at a point in time. That's one of the main reasons for having so many different operating systems set up, because you're very likely to be switching between them for different tasks.

If I don't have the option of using my own machine, I like to choose any linux variant.
I keep my pen-tests strictly to the network level, there is no social engineering involved or any real physical access testing other than basic server room security and workstation lockdown (I don't go diving in dumpsters for passwords or scamming employees).

I try as far as possible to determine the Rules Of Engagement with an admin or some other technically adept person with the right authorisation, not a corporate type. This is very important because if you do something that ends up causing trouble on the network, it's going to make you look very unprofessional. It's always better to have it clearly in writing: this is what you are allowed to do.

I would recommend this even if you're an admin conducting an in-house test. You can get fired just for scanning your own network if it's against your corporate policy. If you're an outside tester, offer to allow one of their people to be present during your testing if they want. This is recommended as they will ultimately be fixing most of these problems, and being in-house people they will be able to put the results of the test in perspective for the managers.

Tools

I start by visiting the target website, running a whois, DNS zone transfer (if possible) and other regular techniques which are used to gather as much network and generic information about the target. I also like to pick up names and email addresses of important people in the company -- the CEO, technical contacts etc. You can even run a search in the newsgroups for @victim.com to see all the public news postings they have made. This is useful as a lot of admins frequent bulletin boards for help. All this information goes into a textfile. Keeping notes is critically important, it's very easy to forget some minor detail that you should include in your end report.

Now for a part of the arsenal -- not in any order and far from the complete list.

Nmap - Mine (and everyone else's) workhorse port scanner with version scanning, multiple scan types, OS fingerprinting and firewall evasion tricks. When used smartly, Nmap can find any Internet facing host on a network.

Nessus - My favourite free vulnerability scanner; it usually finds something on every host. It's not too stealthy though and will show up in logs (this is something I don't have to worry about too much).

Retina - A very good commercial vulnerability scanner. I stopped using this after I started with Nessus, but it's very quick and good. Plus its vulnerability database is very up-to-date.

Nikto - This is a webserver vulnerability scanner. I use my own hacked-up version of this Perl program, which uses the libwhisker module. It has quite a few IDS evasion modes and is pretty fast. It is not that subtle though, which is why I modified it to be a bit more stealthy.

Cisco Scanner - This is a small Windows utility I found that scans IP ranges for routers with the default password of 'cisco'. It has turned up some surprising results in the past and just goes to show how even small tools can be very useful. I am planning to write a little script that will scan IP ranges looking for different types of equipment with default passwords.

Sophie Script - A little perl script coupled with user2sid and sid2user (two windows programs) which can find all the usernames on a windows machine.

Legion - This is a Windows file share scanner by the erstwhile Rhino9 security group. It is fast as hell and allows you to map the drive right from within the software.

Pwdump2 - Dumps the contents of the Windows SAM password file for loading into a password cracker.

L0phtcrack 3.0 - Cracks the passwords I get from the above or from its own internal SAM dump. It can also sniff the network for password hashes or obtain them via remote registry. I have not tried the latest version of the software, but it is very highly rated.

Netcat - This is a TCP/UDP connection backend tool -- oh boy, I am lost without this! Half my scripts rely on it. There is also an encrypted version called cryptcat, which might be useful if you are walking around an IDS. Netcat can do anything with a TCP or UDP connection, and it serves as my replacement for telnet as well.

Hping2 - A custom packet creation utility, great for testing firewall rules among other things.

SuperScan - This is a windows based port scanner with a lot of nice options. Its fast, and has a lot of other neat little tools like NetBIOS enumeration and common tools such as whois, zone transfers etc.

Ettercap - When sniffing a switched network, a conventional network sniffer will not work. Ettercap poisons the ARP cache of the hosts you want to sniff so that they send packets to you and you can sniff them. It also allows you to inject data into connections and kill connections among other things.

Brutus - This is a fairly generic protocol brute forcing tool. It can bruteforce HTTP, FTP, Telnet and many other login authentication systems. This is a Windows tool; on Linux I prefer Hydra.

Bunch of Common Exploits Efficiently Sorted

This is my collection of exploits in source and binary form. I sort them in subdirectories by operating system, then depending on how they attack - Remote / Local and then according to what they attack - BIND / SMTP / HTTP / FTP / SSH etc etc. The binary filenames are arbitrary but the source filenames instantly tell me the name of the exploit and the version of the software vulnerable.

This is essential when you're short on time and you need to 'pick one'. I don't include DoS or DDoS exploits; there is nobody I know who would authorise you to take down a production system. Don't do it -- and tell them you aren't doing it -- and only if they plead with you should you do it.

Presenting Reports

This is the critical part -- it's about presenting what you found to people who probably don't understand a word of what your job is about other than you're costing them money. You have to show them that there are some security problems in your network, and this is how serious they might be.

A lot of people end the pen-test after the scanning stage. Unless someone specifically tells me to stop there, I believe it is important to exploit the system to at least level 1. This matters because there is a very big difference between saying something is vulnerable and actually seeing that the vulnerability is exploitable. Not to mention that when dealing with a corporate type, 'I gained access to the server' usually gets more attention than 'the server is vulnerable to blah blah'.

After you're done, make a VERY detailed chronological report of everything you did, including which tools you used, what versions they were, and anything else you did without using tools (e.g. SQL injection). Give the gory technical details in annexes -- make sure the main document has an executive summary and lots of pie charts that they can understand. Try to include figures and statistics for whatever you can.

To cater to the admins, provide a report for each host you tested and make sure that for every security hole you point out, you provide a link to a site with a patch or fix. Also try to provide a link to a site with detailed information about the hole, preferably Bugtraq or some other well-known source -- many admins are very interested in these things and appreciate it.


A Brief Walk-through of an Attack

This is an account of how an attacker in the real world might go about trying to exploit your system. There is no fixed way to attack a system, but a large number of attackers will follow a similar methodology, or at least a similar chain of events.

This section assumes that the attacker is moderately skilled and moderately motivated to break into your network. He/She has targeted you due to a specific motive -- perhaps you sacked them, or didn't provide adequate customer support (D-link India are you listening? ;)). Hopefully this will help you figure out where your network might be attacked, and what an attacker might do once they are on the inside.

Remember that attackers will usually choose the simplest way to get into the network. The path of least resistance principle always applies.

Reconnaissance & Footprinting

Here the attacker will try to gather as much information about your company and network as they can without making a noise. They will first use legitimate channels, such as Google and your company webpage, to find out as much about you as they can. They will look for the following information:


Technical information is a goldmine, things like a webpage to help your employees log in from home will be priceless information to them. So also will newsgroup postings by your IT department asking how to set up particular software, as they now know that you use this software and perhaps they know of a vulnerability in it.

Personal information about the company and its corporate structure. They will want information on the heads of IT departments, the CEO and other people who have a lot of power. They can use this information to forge email, or social engineer information out of subordinates.

Information about your partners. This might be useful information for them if they know you have some sort of network connection to a supplier or partner. They can then include the supplier's systems in their attack, and find a way in to your network from there.

General news. This can be useful information to an attacker as well. If your website says that it is going down for maintenance for some days because you are changing your web server, it might be a clue that the new setup will be in its teething stages and the admins may not have secured it fully yet.

They will also query the whois databases to find out what block of IP addresses you own. This will give them a general idea of where to start their network level scans.
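
The whois protocol behind those queries is trivial (RFC 3912): open TCP port 43, send the query followed by CRLF, and read until the server closes the connection. A minimal client in Python -- the ARIN server mentioned in the comment is just one example, since different registries serve different address space:

```python
import socket

def whois(query, server, port=43):
    """Send a WHOIS query (RFC 3912) and return the full text response."""
    with socket.create_connection((server, port), timeout=10) as s:
        s.sendall(query.encode() + b"\r\n")
        chunks = []
        while True:
            data = s.recv(4096)
            if not data:  # server closes the connection when done
                break
            chunks.append(data)
    return b"".join(chunks).decode(errors="replace")

# e.g. print(whois("192.0.2.0", "whois.arin.net")) to see who owns a block
```
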
After this they will start a series of network probes. The most basic of which will be to determine if you have a firewall, and what it protects. They will try and identify any systems you have that are accessible from the Internet.

The most important targets will be the ones that provide public services. These will be :

Webservers - usually the front door into the network. All webserver software has some bugs in it, and if you're running home-made CGI scripts such as login pages, they might be vulnerable to techniques such as SQL injection.

Mail servers - Sendmail is very popular and most versions have at least one serious vulnerability in them. Many IT heads don't like to take down the mail server for maintenance as doing without it is very frustrating for the rest of the company (especially when the CEO doesn't get his mail).

DNS servers - Many implementations of BIND are vulnerable to serious attacks. The DNS server can be used as a base for other attacks, such as redirecting users to other websites etc.

Network infrastructure - Routers and switches may not have been properly secured and may have default passwords or a web administration interface running. Once controlled they can be used for anything from a simple Denial of Service attack by messing up their configurations, to channeling all your data through the attackers machine to a sniffer.

Database servers - Many database servers have the default sa account password blank and other common misconfigurations. These are very high profile targets as the criminal might be looking to steal anything from your customer list to credit card numbers. As a rule, a database server should never be Internet facing.

The more naive of the lot (or the ones who know that security logs are never looked at) may run an automated vulnerability scanner such as Nessus or Retina over the network. This will ease their work.

Exploitation Phase

After determining which are valid targets and figuring out what OS and version of software they are using (example which version of Apache or IIS is the web server running), the attacker can look for an exploit targeting that particular version. For example if they find you are running an out of date version of Sendmail, they will look for an exploit targeting that version or below.
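
Deciding whether a discovered version is "that version or below" is simple tuple comparison once the version number is pulled out of a service banner. A rough sketch -- the Sendmail banner below is an invented example, and real version strings can be messier than this regex assumes:

```python
import re

def parse_version(banner):
    """Pull the first dotted version number out of a service banner."""
    m = re.search(r"(\d+)\.(\d+)(?:\.(\d+))?", banner)
    # groups("0") substitutes "0" for a missing patch-level component
    return tuple(int(x) for x in m.groups("0")) if m else None

def worth_trying(found, exploit_max):
    """An exploit for version X 'or below' is worth trying when the
    advertised version compares less than or equal to X."""
    return found is not None and found <= exploit_max

banner = "220 mail.example.com ESMTP Sendmail 8.11.6/8.11.6"
print(parse_version(banner))                            # → (8, 11, 6)
print(worth_trying(parse_version(banner), (8, 12, 9)))  # → True
```

This is also why banner obfuscation, while no substitute for patching, raises the attacker's cost: without a version to match against, they must fall back on trial and error.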

They will first look in their collection of exploits because they have tested these. If they cannot find one, they will look to public repositories such as https://www.packetstormsecurity.nl. They will probably try to choose common exploits as these are more likely to work and they can probably test them in their own lab.

From here they have already won half the game as they are behind the firewall and can probably see a lot more of the internal network than you ever intended for them to. Many networks tend to be very hard to penetrate from the outside, but are woefully unprotected internally. This hard exterior with a mushy interior is a recipe for trouble -- an attacker who penetrates the first line of defense will have the full run of your network.

After getting in, they will also probably install backdoors on this first compromised system to give them many ways in, in case their original hole gets shut down. This is why, when you identify a machine that was broken into, it should be rebuilt from scratch, as there is no way of knowing what kind of backdoors might be installed. It could be tricky to find a program that runs itself from 2:00AM to 4:00AM every night and tries to connect to the attacker's machine. Once they have successfully guaranteed their access, the harder part of the intrusion is usually over.

Privilege Escalation Phase

Now the attacker will attempt to increase his security clearance on the network. He/She will usually target the administrator accounts or perhaps a CEO's account. If they are focused on a specific target (say your database server) they will look for the credentials of anyone with access to that resource. They will most likely set up a network sniffer to capture all the packets as they go through the network.

They will also start manually hunting around for documents that will give them some interesting information or leverage. Thus any sensitive documents should be encrypted or stored on systems with no connection to the network. This will be the time they use to explore your internal network.

They will look for windows machines with file sharing enabled and see what they can get out of these. Chances are if they didn't come in with a particular objective in mind (for example stealing a database), they will take whatever information they deem to be useful in some way.

Clean Up Phase

Now the attacker has either found what they were looking for, or are satisfied with the level of access they have. They have made sure that they have multiple paths into the network in case you close the first hole. They will now try to cover up any trace of an intrusion. They will manually edit log files to remove entries about them and will make sure they hide any programs they have installed in hard to find places.

Remember, we are dealing with an intruder who is moderately skilled and is not just interested in defacing your website. They know that the only way to keep access will be if you never know something is amiss. In the event that there is a log they are unable to clean up, they may either take a risk leaving it there, or flood the log with bogus attacks, making it difficult for you to single out the real attack.
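
From the defender's side, one useful countermeasure is to look for what is not there: a normally chatty log that goes silent for hours may have had entries deleted. A hedged sketch of such a gap check (the log data here is invented for illustration):

```python
from datetime import datetime, timedelta

def find_gaps(timestamps, max_gap=timedelta(minutes=30)):
    """Return (start, end) pairs where consecutive log entries are further
    apart than max_gap -- possible evidence of deleted entries."""
    times = sorted(timestamps)
    return [(a, b) for a, b in zip(times, times[1:]) if b - a > max_gap]

# An invented log: the silence between 02:00 and 04:05 stands out.
log = [datetime(2003, 5, 1, h, m) for h, m in
       [(1, 50), (2, 0), (4, 5), (4, 15)]]
print(find_gaps(log))  # flags the 02:00 -> 04:05 gap
```

Shipping logs in real time to a separate, hardened log host makes both tricks -- deletion and flooding -- much harder, since the attacker would have to compromise that host as well.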


Where Can I Find More Information?

Without obviously plugging our site too much, the best place for answers to questions relating to this article is in our forums. The Security/Firewalls Forum is the best place to do this -- so you can ask anything from the most basic to the most advanced questions concerning network security there. A lot of common questions have already been answered in the forums, so you will quite likely find answers to questions like 'Which firewall should I use ?'.

As far as off-site resources are concerned, network security is a very vast field and there is seemingly limitless information on the subject. You will never find information at so-called hacker sites full of programs. The best way to learn about network security is to deal with the first word first -- you should be able to talk networking in and out, from packet header to checksum, layer 1 to layer 7.

Once you've got that down, you should start on the security aspect. Start by reading a lot of the papers on the net. Take in the basics first, and make sure you keep reading. Wherever possible, try to experiment with what you have read. If you don't have a home lab, you can build one 'virtually'. See the posts in the Cool Software forum about VMware.


Also, start reading the security mailing lists such as Bugtraq and security-basics. Initially you may find yourself unable to understand a lot of what happens there, but the newest vulnerabilities are always announced on these lists. If you follow a vulnerability from the time it's discovered to when someone posts an exploit for it, you'll get a very good idea of how the security community works... and you'll also learn a hell of a lot in the process.

If you're serious about security, it is imperative that you learn a programming language, or are at least able to understand code if not write your own. The best choices are C and assembly language. However, knowing Perl and Python is also a valuable skill, as you can write programs in these languages very quickly.

For now, here are a few links that you can follow for more information:

www.securityfocus.com - A very good site with all the latest news, a very good library and tools collection as well as sections dedicated to basics, intrusion detection, penetration testing etc. Also home of the Bugtraq mailing list.

www.sans.org - A site with excellent resources in its reading room. People who submit papers there are trying for a certification, so it's mostly original material and of a very high calibre.

www.security-portal.com - A good general security site.

www.cert.org - The CERT coordination center provides updates on the latest threats and how to deal with them. Also has very good best practice tips for admins.

www.securityfocus.com/archive/1 - This is the link to Bugtraq, the best full disclosure security mailing list on the net. Here all the latest vulnerabilities get discussed way before you see them being exploited or in the press.

www.insecure.org - The mailing lists section has copies of bugtraq, full disclosure, security-basics, security-news etc etc. Also the home of nMap, the wonderful port scanner.

seclists.org - This is a direct link to the security lists section of insecure.org.

www.grc.com - For windows home users and newbies just interested in a non technical site. The site is home to Shields Up, which can test your home connection for file sharing vulnerabilities, do a port scan etc, all online. It can be a slightly melodramatic site at times though.

www.eeye.com - Home of the Retina Security Scanner. Considered the industry leader. The E-Eye team also works on a lot of the latest vulnerabilities for the windows platform.

www.nessus.org - Open source vulnerability scanner, and IMNSHO the best one going. If you're a tiger team penetration tester and you don't point Nessus at a target, you're either really bad at your job or have a very large ego. If there's a vulnerability in a system, Nessus will find it.

www.zonelabs.com - ZoneAlarm personal firewall for windows, considered the best, and also the market leader.

www.sygate.com - Sygate Personal Firewall, provides more configuration options than ZoneAlarm, but is consequently harder to use.

www.secinf.net - Huge selection of articles that are basically windows security related.

www.searchsecurity.com - A techtarget site which you should sign up for, very good info. Chris writes for its sister site, searchnetworking.com. I don't think the references could be much better.

www.antioffline.com - A very good library section on buffer overflows etc.

www.packetstormsecurity.nl - The largest selection of tools and exploits possible.


Conclusion

This 5-page article should serve as a simple introduction to network security. The field itself is too massive to cover in any sort of article, and the amount of cutting edge research that goes on really defies comprehension.

Some of the most intelligent minds work in the security field because it can be a very challenging and stimulating environment. If you like to think out-of-the-box and are the sort of person willing to devote large amounts of your time to reading and questioning why things happen in a particular way, security might be a decent career option for you.

Even if you're not interested in it as a career option, every admin should be aware of the threats and the solutions. Remember, you have to think like them to stop them !

If you're interested in network security, we highly recommend you read through the networking and firewall sections of this website. Going through the whole site will be some of the most enlightening time you'll ever spend online.

If you're looking for a quick fix, here are a few of the more important areas that you might want to cover:

Introduction to Networking

Introduction to Firewalls

Introduction to Network Address Translation (NAT)

Denial Of Service (DoS) Attacks

Locking down Windows networks

Introduction to Network Protocols

Also check out our downloads section where you will find lots of very good security and general networking tools.

We plan on putting up a lot of other security articles in the near future. Some will be basic and introductory like this one, while some may deal with very technical research or techniques.

As always feel free to give us feedback and constructive criticism. All flames however will be directed to /dev/null ;)


Are Cloud-Based Services Overhyped?

In these hard economic times, cloud computing is becoming a more attractive option for many organizations. Industry analyst firm The 451 Group predicts that the marketplace for cloud computing will grow from $8.7bn in revenue in 2010 to $16.7bn by 2013. Accompanying this is an increasing amount of hype about cloud computing.

Cloud computing has gone through different stages, yet because the Internet only began to offer significant bandwidth in the 1990s, it became something for the masses over the last decade. Initial applications were known as Hosted Services. Then the term Application Service Provider emerged, with some hosted offerings known as Managed Services. More recently, in addition to these terms, Software as a Service (SaaS) became a catchphrase.  And as momentum for hosted offerings grew, SaaS is now complemented by Infrastructure as a Service, Platform as a Service, and even Hardware as a Service.

Is this a sign of some radical technology shift, or simply a bit more of what we have seen in the past? 

The answer is both. We are witnessing a great increase in global investment towards hosted offerings. These providers are expected to enjoy accelerated growth as Internet bandwidth becomes ubiquitous, faster, and less expensive; as network devices grow smaller; and as critical mass builds. Also, organizations are moving towards cloud services of all kinds through the use of different types of network devices – take, for example, the rise of smart phones, the iPad tablet, and the coming convergence of television and the Internet.

Yet, although cloud solutions may emerge as dominant winners in some emerging economies, on-premise solutions will remain in use. Start-ups and small businesses might find the cloud the cheaper and safer option for their business, enjoying the latest technology without needing to spend money on an IT infrastructure, staff, and the other expenses that come with on-premise solutions. Larger businesses, however, usually stick to on-premise solutions for both philosophical and practical reasons, such as wishing to retain control and the ability to configure products for their own specific needs.

Gartner's chief security analyst, John Pescatore, for example, believes that cloud security is not enough when it comes to the upper end of the enterprise, financial institutions, and the government. On the other hand, he states that smaller businesses may actually get better security from the cloud. The reason behind this is that while the former has to protect confidential data and cannot pass it on to third parties, the latter is given better security (multiple backup locations, 24/7 monitoring, physical security protecting sites, and more).

Although the cloud might appear to be finding its fertile ground only now, especially in these times of belt-tightening, hosted services have been around for a while. For this reason, when choosing a cloud provider, always make sure you choose a company that has proven itself in the marketplace.

 


What if it Rains in the Cloud?

Cloud computing has become a cost-effective model for small and medium-sized enterprises (SMEs) that wish to use the latest technology on demand, with no commitments or need to purchase and manage software products. These features have made hosted services an attractive choice, such that industry analyst firm The 451 Group has predicted the marketplace for cloud computing will grow from $8.7 billion in revenue in 2010 to $16.7 billion by 2013.

Yet, many organizations think twice when it comes to entrusting their data to third parties. Let's face it, almost every web user has an account on sites such as Gmail or Facebook, where personal information is stored on third-party servers; but when businesses allow corporate data to go through third parties, the danger and implications are greater, as an error affects a whole system, not just a single individual.

So The Question Arises: What If It Rains In The Cloud?

Some SMEs are apprehensive about using hosted services because their confidential data is being handled by third parties and because they believe the solution provider might fail. Funnily enough, it's usually the other way around. Subject to selecting a reputable provider, smaller businesses can attain better security via cloud computing, as the solution provider usually invests more in security (multiple backup locations, 24/7 monitoring, physical security protecting sites, and more) than any individual small business could. Also, the second the service provider patches a security vulnerability, all customers are instantly protected, as opposed to downloadable patches that the IT team within a company must apply.

And, to prevent data leaks, cloud services providers make it their aim to invest in the best technology infrastructures to protect their clients' information, knowing that even the slightest mistake can ruin their reputation – not to mention potential legal claims – and, with that, their business.

A drawback with some hosted services is that if you decide you want to delete a cloud resource, this might not result in true wiping of the data. In some cases, adequate or timely deletion might be impossible, for example because the disk that needs to be destroyed also stores data from other clients. Also, certain organizations find it difficult to entrust their confidential data to third parties.

Use Your Umbrella

Cloud computing can be the better solution for many SMEs, particularly in the case of start-ups and small businesses which cannot afford to invest in a proper IT infrastructure. The secret is to know what to look for when choosing a provider: Engage the services of a provider that will provide high availability and reliability. It would be wise to avoid cloud service providers that do not have much of a track record, and that perhaps are of limited size and profitability, subject to M&A activity, and changing development priorities.

To enjoy the full potential promised by the technology, it is important to choose a hosted service provider that has proven itself in the marketplace and that has solid ownership and management, applies stringent security measures, uses multiple data centers so as to avoid a single point of failure, provides a solid service level agreement, and is committed to cloud services for the long term.

Following these suggestions, you can have peace of mind that your data is unlikely to be subjected to 'bad weather'!


Three Reasons Why SMEs Should Also Consider Cloud-Based Solutions

Small and medium enterprises (SMEs) are always looking for the optimum way to implement technology within their organizations, be it from a technical, financial or personal perspective. Technology solutions can be delivered using one of three common models: on-premise solutions (i.e. installed on company premises), hosted services (handled by an external third party), or a mix of both. Let's take a look at cloud-based solutions in this brief post.

The Reasons for Cloud-based Backup Solutions

When talking about a hosted service, we are referring to a delivery model which enables SMEs to make the most of the latest technology through a third party. Cloud-based solutions and services are gaining popularity as an alternative strategy for businesses, especially startups and small businesses, particularly when considering the three reasons below:

•  Financial – Startups and very small SMEs often find it financially difficult to set up the infrastructure and IT system required when they are starting or still building the business. The additional cost of building an IT infrastructure and recruiting IT personnel is at times too high, and not a priority when all they need is email and office tools. In such a scenario a hosted service makes sense because the company can depend on a third party to provide additional services, such as archiving and email filtering, at a monthly cost. This reduces costs and allows the business to focus on other important areas requiring investment. As the business grows, the IT needs of that company will dictate to what extent a hosted or managed service is necessary and cost-effective.

•  Build your business – The cost-saving aspect is particularly important for those businesses that require only a basic IT infrastructure but still want to benefit from security and operational efficiency without spending much money. Hosted / managed services give companies the option to test and try technologies before deciding whether they need to move their IT in-house or leave it in the hands of third parties.

•  Pay-as-you-go or rental basis – Instead of investing heavily in IT hardware, software and personnel, a pay-per-use or subscription system makes more sense. Companies choosing this delivery model would do well, however, to read contractual agreements carefully. Many vendors/providers tie in customers for two or three years, which may be just right for a startup SME, but companies should look closely at any associated costs if they decide to stop the service and at whether migrating their data will prove a very costly affair. The key to choosing a hosted or managed service is to do one's homework and plan well. Not all companies will find a cloud-based service to be suitable even if the cost structure appears to be attractive.

Are There Any Drawbacks To This System?

Despite all the advantages mentioned above, some SMEs are still apprehensive when it comes to cloud-based solutions because they are concerned about their data's security. Although an important consideration, a quality cloud-based provider will have invested heavily in security and, more often than not, in systems that are beyond what a small business can afford to implement. A good provider will have invested in multiple backup locations, 24/7 monitoring, physical security to protect sites, and more.

On the other hand, the fact that the data would be exposed to third parties and not handled internally could be seen as a drawback by some companies, especially those handling sensitive data. As stated earlier, beware of the fine print and medium- to long-term costs before committing.

Another Option

If you're a server-hugger and need to have that all-important server close to your office, you can always combine your on-premise solution with a hosted or managed service – benefiting from the advantages while doing away with some of the inherent disadvantages.

Every company is different and whether you decide to go for a cloud-based solution or not, keep in mind that there is no right or wrong – it's all a matter of what your current business's infrastructure is like and your needs at the time. However, if you are a startup or a small business, cloud-based solutions are an attractive option worth taking into consideration.

 


61% of SMEs use Email Archiving in-house – What About the Others?

A recent survey on email archiving, based on 202 US-based SMEs, found that a growing number of organizations are considering or would consider a third-party hosted email archiving service. A total of 18% of the organizations that already use an email archiving solution have opted for a hosted service, while 38% said they are open to using such a service.

At the same time, 51% of those surveyed said they would still only use an on-premise email archiving solution.

The findings paint an interesting picture of email archiving use among SMEs. Apart from the shocking statistic that more than 63% do not archive their email, those that do, or consider doing so, are interested in the various options available.


On-premise or Hosted?

An increasing number of IT services are now offered as Software as a Service (SaaS) or hosted by a third party. Many services prove to be very cost-effective when implemented at the scale which outsource service providers can manage, but there are still many – as the survey shows – who prefer to keep everything in house: admins and security personnel who want to maintain data integrity internally, and business leaders who do not see the value of a cloud solution for their organization because their requirements dictate otherwise.

What is Email Archiving?

At its simplest, email archiving technology helps businesses maintain a copy of all emails sent or received by all users. This indispensable solution can be used for searches, to meet eDiscovery requests, compliance audits and reviews, to increase the overall long-term storage capacity of the email system, and as a disaster recovery repository to ensure data availability.

Because email is so heavily tied to the internet, email archiving can readily be outsourced to service providers and can often be combined with other outsourced services like spam and malware filtering. Hosted email archiving eases the load on your IT staff, allowing them to focus on core activities, and can be a more economical solution than paying for additional servers, storage, and tape backups. It does of course require you to entrust your data to a third party, and often this is where companies may opt for an internal solution.

An internal email archiving solution, on the other hand, requires only minimal care and feeding, and offers the advantage of maintaining all data internally.

Email archiving solutions are essential for all businesses of any size, and organizations should consider the pros and cons of both hosted and on-premises email archiving, and deploy the solution which best suits their company's budget and needs.


Email Security - Can't Live Without It!

This white paper explains why antivirus software alone is not enough to protect your organization against the current and future onslaught of computer viruses. Examining the different kinds of email threats and email attack methods, this paper describes the need for a solid server-based content-checking gateway to safeguard your business against email viruses and attacks as well as information leaks.

We apologize but this paper is no longer available. Back to the Security Articles section.


Log-Based Intrusion-Detection and Analysis in Windows Servers

Introduction - How to Perform Network-Wide Security Event Log Management

Microsoft Windows machines have basic audit facilities, but they fall short of fulfilling real-life business needs (i.e., monitoring Windows computers in real time, periodically analyzing security activity, and maintaining a long-term audit trail). Therefore, the need exists for a log-based intrusion detection and analysis tool such as EventsManager.

This paper explains how EventsManager's innovative architecture can fill the gaps in Windows' security log functionality – without hurting performance and while remaining cost-effective. It discusses the use of EventsManager to implement best practice and fulfill due diligence requirements imposed by auditors and regulatory agencies, and provides strategies for making maximum use of GFI EventsManager's capabilities.

This white paper is no longer available by the vendor. To read similar interesting security articles, please visit our Security Articles section.


Web Monitoring for Employee Productivity Enhancement

All too often when web monitoring and Internet use restrictions are put into place, they hurt company morale and do little to enhance employee productivity. Not wanting to create friction in the workplace, many employers shy away from using what could be a significant employee productivity enhancement tool. Wasting time through Internet activities is simple, and it's a huge hidden cost to business. Just answering a few personal e-mails, checking the sports scores, reading the news headlines and checking to see how your bid is holding up can easily waste an hour of time each day. If the company has an 8-person CAD department and each of them spends an hour a day on the above activities, that's a whole employee wasted!
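
The back-of-the-envelope arithmetic behind that claim is worth making explicit, assuming an eight-hour working day:

```python
staff = 8          # people in the CAD department
wasted = 1         # hours lost per person per day
workday = 8        # hours in a working day

fte_lost = staff * wasted / workday
print(fte_lost)    # → 1.0, i.e. one full-time employee's output per day
```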

Employees both want and don’t want to have their Internet use restricted. The key to success in gaining productivity and employee acceptance is the perception of fairness, clear goals and self enforcement.

Why Employees Don’t Want Internet Blocking

  1. They don’t know what is blocked and what is allowed. This uncertainty creates fear that they may do “something” that could hurt their advancement opportunities or, worse, jeopardize their job.
  2. Someone ruined it for everyone and that person still works here. When everyone is punished, no one is happy. Resentment builds against the employee known to have visited inappropriate websites.
  3. There’s no procedure in place for allowing an employee access to a blocked website. When an employee finds that a website they tried to access is blocked, what do they do? Certainly this indiscretion is going to show up on a report somewhere. What if they really need that site? Is there a procedure in place for allowing this person to access it?

Uncertainty is fodder for loss of morale. In today’s economic climate employees are especially sensitive to any action that can be perceived as clamping down on them. Therefore a web monitoring program must be developed that can be viewed in a positive light by all employees.

Why Employers are Afraid of Internet Blocking

  • The potential of adding to IT costs and human resources headaches takes away the value of web monitoring. The Internet is a big place and employees are smart. Employers don’t want to get into a situation where they are simply chasing their tail, trading one productivity loss for incurred costs and frustration elsewhere.
  • Employers want to allow employee freedom. There is general recognition by employers that a happy employee is a loyal, productive employee. Allowing certain freedoms creates a more satisfying work environment. The impact of taking that away may cause good employees to leave, and an increase in turnover can be costly.

The fear of trading one cost for another, or one headache for another, has prevented many employers from implementing Internet monitoring and blocking. A mistrust of IT services may also come into play. Technology got us into this situation – where up to 20% of employee time is spent on the Internet – and many employers don’t trust that technology can also help them gain that productivity back. A monitoring program needs to be simple to implement and maintain.

Why Employees Want Internet Controls

  • Employees are very aware of what their co-workers are doing or not doing. If an employee in the office spends an hour every day monitoring their auctions on eBay, reading personal e-mail or chatting on IM, every other employee in the office knows it and resents it. If they are working hard, everyone else should be too.
  • Unfortunately pornographic and other offensive material finds its way into the office when the Internet is unrestricted. Exposure to this material puts the employee in a difficult situation. Do they tell the boss? Do they try to ignore it? Do they talk to the employee themselves? The employee would rather not be put into this situation.
  • Employees want to work for successful, growing companies. Solid corporate policies that are seen as a necessary means to continue to propel the company forward add to employee satisfaction. Web monitoring can be one of those policies.

How Employers can Gain Employee Support for Web Monitoring

  • Provide a clear, fair policy statement and expose the reasoning and goals. Keep it simple. Employees won’t read a long policy position paper. Stick to the facts and use positive language.
  • Policies that make sense to staff are easy to enforce
  • Policies with goals are easy to measure
  • When the goal has been reached celebrate with your employees in a big way. Everyone likes to feel like part of the team.
  • Empower your employees. White list, don’t black list. Let each employee actively participate in deciding which sites are allowed and which aren’t for them. Let the employee tell you what they need to be most productive and then provide it, no questions asked.
  • Most job positions can be boiled down to between 5 and 20 websites. Employees know what they need. Ask them to provide a list.
  • Show employees the web monitoring reports. Let them see the before and after and let them see the on-going reports. This will encourage self monitoring. This is an enforcement tool in disguise. Employees know that management can view these reports too and will take care that they make them look good.
  • Send employees a weekly report on their Internet usage. They will look at it and act upon it to make sure they are portrayed to management in the best light, and may even compare themselves against others.
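The "white list, don’t black list" approach above can be sketched in a few lines of code. This is a minimal illustration only: the hostnames stand in for the handful of sites each employee says they need, and a real deployment would enforce the check at the proxy or gateway rather than in application code.

```python
from urllib.parse import urlparse

# Hypothetical per-employee whitelist: each person lists the few
# sites they need for their job, and everything else is blocked.
ALLOWED_HOSTS = {"intranet.example.com", "suppliers.example.com", "docs.example.com"}

def is_allowed(url: str) -> bool:
    """Permit a request only when its host appears on the whitelist."""
    host = urlparse(url).hostname or ""
    return host in ALLOWED_HOSTS

print(is_allowed("https://intranet.example.com/timesheets"))  # True
print(is_allowed("https://auctions.example.net/my-bids"))     # False
```

Because the default answer is "no", new sites only become reachable when an employee asks for them – which is exactly the self-enforcing, no-questions-asked procedure described above.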

Summary

Web monitoring is good for business. The Internet as a productivity tool has wide acceptance but recent changes have brought new distractions costing business some of those productivity gains. The Internet can be controlled but needs to be done in a way that allows for employee buy-in, self monitoring and self enforcement to be successful.

  • Hits: 14374

Security Threats: A Guide for Small & Medium Businesses

A successful business works on the basis of revenue growth and loss prevention. Small and medium-sized businesses are particularly hit hard when either one or both of these business requirements suffer. Data leakage, down-time and reputation loss can easily turn away new and existing customers if such situations are not handled appropriately and quickly. This may, in turn, impact on the company’s bottom line and ultimately profit margins. A computer virus outbreak or a network breach can cost a business thousands of dollars. In some cases, it may even lead to legal liability and lawsuits.

The truth is that many organizations would like to have a secure IT environment, but very often this need comes into conflict with other priorities. Firms often find the task of keeping business functions aligned with the security process highly challenging. When economic circumstances look dire, it is easy to turn security into a checklist item that keeps being pushed back. However, the reality is that, in such situations, security should be a primary issue. The likelihood of threats affecting your business will probably increase and the impact can be more detrimental if it tarnishes your reputation. This paper aims to help small and medium-sized businesses focus on threats that are likely to have an impact on the organization.

These threats specifically target small and medium-sized business rather than enterprise companies or home users.

Security Threats That Affect SMBs - Malicious Internet Content

Most modern small or medium-sized businesses need an Internet connection to operate. If you remove this means of communication, many areas of the organization will not be able to function properly, or else they may be forced to revert to old, inefficient systems. Just think how important email has become and that for many organizations this is the primary means of communication. Even phone communications are changing shape, with Voice over IP becoming a standard in many organizations. At some point, most organizations have been the victim of a computer virus attack.

While many may have antivirus protection, it is not unusual for an organization of more than 10 employees to use email or the Internet without any form of protection. Even large organizations are not spared. Recently, three hospitals in London had to shut down their entire network due to an infection by a version of a worm called Mytob. Most of the time we do not hear of small or medium-sized businesses becoming victims of such infections because it is not in their interest to publicize these incidents. Many small or medium-sized business networks cannot afford to employ prevention mechanisms such as network segregation.

These factors simply make it easier for a worm to spread throughout an organization. Malware is a term that includes computer viruses, worms, Trojans and any other kinds of malicious software. Employees and end users within an organization may unknowingly introduce malware on the network when they run malicious executable code (EXE files). Sometimes they might receive an email with an attached worm or download spyware when visiting a malicious website. Alternatively, to get work done, employees may decide to install pirated software for which they do not have a license. This software tends to have more code than advertised and is a common method used by malware writers to infect end users’ computers. An organization that operates efficiently usually has established ways to share files and content across the organization. These methods can also be abused by worms to further infect computer systems on the network. Computer malware does not have to be introduced manually or consciously.

Basic software packages installed on desktop computers such as Internet Explorer, Firefox, Adobe Acrobat Reader or Flash have their fair share of security vulnerabilities. These security weaknesses are actively exploited by malware writers to automatically infect victims’ computers. Such attacks are known as drive-by downloads because the user has no knowledge of malicious files being downloaded onto his or her computer. In 2007, Google issued an alert describing 450,000 web pages that can install malware without the user’s consent.

Then You Get Social Engineering Attacks

This term refers to a set of techniques whereby attackers make the most of weaknesses in human nature rather than flaws within the technology. A phishing attack is a type of social engineering attack that is normally opportunistic and targets a subset of society. A phishing email message will typically look very familiar to the end users – it will make use of genuine logos and other visuals (from a well-known bank, for example) and will, for all intents and purposes, appear to be the genuine thing. When the end user follows the instructions in the email, he or she is directed to reveal sensitive or private information such as passwords, PIN codes and credit card numbers.

Employees and desktop computers are not the only targets in an organization. Most small or medium-sized companies need to make use of servers for email, customer relationship management and file sharing. These servers tend to hold critical information that can easily become the target of an attack. Additionally, the move towards web applications has introduced a large number of new security vulnerabilities that are actively exploited by attackers to gain access to these web applications. If these services are compromised, there is a high risk that sensitive information can be leaked and used by cyber-criminals to commit fraud.

Attacks on Physical Systems

Internet-borne attacks are not the only security issue that organizations face. Laptops and mobiles are entrusted with the most sensitive information about the organization. These devices, whether they are company property or personally owned, often contain company documents and are used to log on to the company network. More often than not, these mobile devices are also used during conferences and travel, thus running the risk of physical theft.

The number of laptops and mobile devices stolen per year is ever on the increase. Attrition.org had over 400 articles in 2008 related to high-profile data loss, many of which involved stolen laptops and missing disks. If it happens to major hospitals and governments that have established rules on handling such situations, why should it not happen to smaller businesses?

Another Threat Affecting Physical Security is that of Unprotected Endpoints

USB ports and DVD drives can both be used to leak data and introduce malware on the network. A USB stick that is mainly used for work, and may contain sensitive documents, becomes a security risk if it is taken home and left lying around where other members of the family can use it on their home PC. While the employee may understand the sensitive nature of the information stored on the USB stick, the rest of the family will probably not.

They may copy files back and forth without considering the implications. This is typically a case of negligence, but it can also be the work of a targeted attack, where internal employees take large amounts of information out of the company. Small and medium-sized businesses may overlook the importance of securing the physical network and server room to prevent unauthorized persons from gaining access. Open network points and unprotected server rooms can allow disgruntled employees and visitors to connect to the network and launch attacks such as ARP spoofing to capture unencrypted network traffic and steal passwords and content.

Authentication and Privilege Attacks

Passwords remain the number one vulnerability in many systems. It is not an easy task to have a secure system whereby people are required to choose a unique password that others cannot guess but that is still easy for them to remember. Nowadays most people have at least five other passwords to remember, and the password used for company business should not be the same one used for webmail accounts, site memberships and so on. High-profile intrusions such as the one on Twitter (the password was happiness) clearly show that passwords are often the most common and universal security weakness, and attacks exploiting this weakness do not require a lot of technical knowledge.

Password policies can go a long way to mitigate the risk, but if the password policy is too strict, people will find ways and means to get around it. They will write the password on sticky notes, share it with their colleagues or simply find a keyboard pattern (1q2w3e4r5t) that is easy to remember but also easy to guess.
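A screening step for exactly this kind of keyboard-walk password can be sketched as follows. The row and pattern lists here are illustrative, not exhaustive – a production check would use a much larger dictionary of known-weak passwords.

```python
# Illustrative tables only: a real checker would load a full
# breached-password dictionary rather than these few entries.
KEYBOARD_ROWS = ["qwertyuiop", "asdfghjkl", "zxcvbnm", "1234567890"]
KNOWN_PATTERNS = ["1q2w3e4r5t", "qazwsx", "password"]

def looks_like_pattern(password: str, min_run: int = 4) -> bool:
    """Flag passwords that are, or contain, an obvious keyboard run."""
    pw = password.lower()
    if pw in KNOWN_PATTERNS:
        return True
    # Reject any password containing a run of min_run adjacent keys
    # taken from a single keyboard row.
    for row in KEYBOARD_ROWS:
        for i in range(len(row) - min_run + 1):
            if row[i:i + min_run] in pw:
                return True
    return False

print(looks_like_pattern("1q2w3e4r5t"))   # True
print(looks_like_pattern("asdfgh2024"))   # True
print(looks_like_pattern("Tr4vel!Mug9"))  # False
```

Rejecting these patterns at the point where the password is set is far cheaper than discovering them later in an audit.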

Most complex password policies can be easily rendered useless by non-technological means. In small and medium-sized businesses, systems administrators are often found to be doing the work of the network operators and project managers as well as security analysts. Therefore a disgruntled systems administrator is a major security problem due to the amount of responsibility (and access rights) that he or she holds. With full access privileges, a systems administrator may plant a logic bomb, create backdoor accounts or leak sensitive company information that may greatly affect the stability and reputation of the organization. Additionally, in many cases the systems administrator is the person who sets the passwords for important services or servers. When he or she leaves the organization, these passwords may not be changed (especially if not documented), thus leaving a backdoor for the ex-employee.

A startup company called JournalSpace was caught with no backups when their former system administrator decided to wipe out the main database. This proved to be disastrous for the company, which ended up asking users to retrieve their content from Google’s cache. The company’s management team may also have administrative privileges on their personal computers or laptops. The reasons vary, but they may want to be able to install new software or simply to have more control over their machines. The problem with this scenario is that one compromised machine is all that an attacker needs to target an organization.

The firm itself does not need to be specifically picked out but may simply become a victim of an attack aimed at a particular vulnerable software package. Even when user accounts on the network are supposed to have reduced privileges, there may be times when privilege creep occurs. For example, a manager who hands over an old project to another manager may retain the old privileges for years after the handover!

When his or her account is compromised, the intruder also gains access to the old project. Employees with mobile devices and laptop computers can pose a significant risk when they make use of unsecured wireless networks whilst attending a conference or during their stay at a hotel. In many cases, inadequate or no encryption is used and anyone ‘in between’ can view and modify the network traffic. This can be the start of an intrusion leading to compromised company accounts and networks.

Denial Of Service

In an attempt to minimize costs, or simply through negligence, most small and some medium-sized businesses have various single points of failure. Denial of service is an attack that prevents legitimate users from making use of a service, and it can be very hard to prevent. The means to carry out a DoS attack and the motives may vary, but it typically leads to downtime and legitimate customers losing confidence in the organization – and it is not necessarily due to an Internet-borne incident.

In 2008, many organizations in the Mediterranean Sea basin and in the Middle East suffered Internet downtime due to damage to underwater Internet cables. Some of these organizations relied on a single Internet connection, and their business was driven by Internet communications.

Having such a single point of failure proved to be very damaging for these organizations in terms of lost productivity and lost business. Reliability is a major concern for most businesses, and the inability to address even a single point of failure can be costly. If an organization is not prepared for a security incident, it will probably not handle the situation appropriately.

One question that needs to be asked is: if a virus outbreak does occur, who should handle the various steps that need to be taken to get the systems back in shape? If an organization is simply relying on the systems administrator to handle such incidents, then that organization is not acknowledging that such a situation is not simply technical in nature. It is important to be able to identify the entry point, to approach the persons concerned and to have policies in place to prevent future occurrences – apart from simply removing the virus from the network! If all these tasks are left to a systems administrator, who might have to do everything ad hoc, then that is a formula for lengthy downtime.

Addressing Security Threats - An Anti-virus is not an Option

The volume of malware that can hit organizations today is enormous and the attack vectors are multiple. Viruses may spread through email, websites, USB sticks and instant messenger programs, to name but a few. If an organization does not have an anti-virus installed, the safety of the desktop computers will be at the mercy of the end user – and relying on the end user is not advisable or worth the risk.

Protecting desktop workstations is only one recommended practice. Once virus code is present on a desktop computer, it becomes a race between the virus and the anti-virus. Most malware has functionality to disable your anti-virus software, firewalls and so on. Therefore you do not want the virus to get to your desktop computer in the first place! The solution is to deploy content filtering at the gateway.

Anti-virus can be part of the content filtering strategy, which can be installed at the email and web gateway. Email accounts are frequently spammed with malicious email attachments. These files often appear to come from legitimate contacts, thus fooling the end user into running the malware code. Leaving the decision to the user whether or not to trust an attachment received by email is never a good idea.

By blocking malware at the email gateway, you greatly reduce the risk that end users may make a mistake and open an infected file. Similarly, scanning all incoming web (HTTP) traffic for malicious code addresses a major infection vector and is a requirement when running a secure network environment.

Security Awareness

A large percentage of successful attacks do not necessarily exploit technical vulnerabilities. Instead they rely on social engineering and people’s willingness to trust others. There are two extremes: either employees in an organization totally mistrust each other, to such an extent that the sharing of data or information is nil; or, at the other end of the scale, you have total trust between all employees.

In organizations, neither approach is desirable. There has to be an element of trust throughout an organization, but checks and balances are just as important. Employees need to be given the opportunity to work and share data, but they must also be aware of the security issues that arise as a result of their actions. This is why a security awareness program is so important. For example, malware often relies on victims to run an executable file to spread and infect a computer or network.

Telling your employees not to open emails from unknown senders is not enough. They need to be told that in so doing they risk losing all their work, their passwords and other confidential details to third parties. They need to understand what behavior is acceptable when dealing with email and web content. Anything suspicious should be reported to someone who can handle security incidents. Having open communication across different departments makes for better information security, since many social engineering attacks abuse the communication breakdowns across departments.

Additionally, it is important to keep in mind that a positive working environment where people are happy in their job is less susceptible to insider attacks than an oppressive workplace.

Endpoint Security

A lot of information in an organization is not centralized. Even when there is a central system, information is often shared between different users and different devices, and copied numerous times. In contrast with perimeter security, endpoint security is the concept that each device in an organization needs to be secured. It is recommended that sensitive information be encrypted on portable devices such as laptops.

Additionally, removable storage such as DVD drives, floppy drives and USB ports may be blocked if they are considered to be a major threat vector for malware infections or data leakage. Securing endpoints on a network may require extensive planning and auditing. For example, policies can be applied that state that only certain computers (e.g. laptops) can connect to specific networks. It may also make sense to restrict usage of wireless (WiFi) access points.

Policies

Policies are the basis of every information security program. It is useless taking security precautions or trying to manage a secure environment if there are no objectives or clearly defined rules. Policies clarify what is or is not allowed in an organization, as well as define the procedures that apply in different situations. They should be clear and have the full backing of senior management. Finally, they need to be communicated to the organization’s staff and enforced accordingly.

There are various policies, some of which can be enforced through technology and others which have to be enforced through human resources. For example, password complexity policies can be enforced through Windows domain policies. On the other hand, a policy which ensures that company USB sticks are not taken home may need to be enforced through awareness and labeling.

As with most security precautions, it is important that policies that affect security are driven by business objectives rather than gut feelings. If security policies are too strict, they will be bypassed, thus creating a false sense of security and possibly creating new attack vectors.

Role Separation

Separation of duties, auditing and the principle of least privilege can go a long way in protecting an organization from single points of failure and privilege creep. By employing separation of duties, the impact of a particular employee turning against the organization is greatly reduced. For example, a system administrator who is not allowed to make alterations to the database server directly, but has to ask the database administrator and document his actions, is a good use of separation of duties.

A security analyst who receives a report when a network operator makes changes to the firewall access control lists is a good application of auditing. If a manager has no business need to install software on a regular basis, then his or her account should not be granted such privileges (power user on Windows). These concepts are very important, and it all boils down to who is watching the watchers.

Backup and Redundant Systems

Although less glamorous than other topics in information security, backups remain one of the most reliable solutions. Making use of backups can have a direct business benefit when things go wrong. Disasters do occur, and an organization will come across situations when hardware fails or a user (intentionally or otherwise) deletes important data.

A well-managed and tested backup system will get the business back up and running in very little time compared to other disaster recovery solutions. It is therefore important that backups are not only automated to avoid human error but also periodically tested. It is useless having a backup system if restoration does not function as advertised. Redundant systems allow a business to continue working even if a disaster occurs.

Backup servers and alternative network connections can help to reduce downtime, or at least provide a business with limited resources until all systems and data are restored.
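The "periodically tested" advice above can be automated. The following sketch backs up a file and then verifies the copy by comparing checksums – a stand-in for a real restore drill; paths and file names are illustrative only.

```python
import hashlib
import os
import shutil
import tempfile

def sha256_of(path: str) -> str:
    """Checksum used to compare the original against the restored copy."""
    with open(path, "rb") as f:
        return hashlib.sha256(f.read()).hexdigest()

def backup_and_verify(src: str, backup_dir: str) -> bool:
    """Copy src into backup_dir, then confirm the copy matches the original."""
    dest = os.path.join(backup_dir, os.path.basename(src))
    shutil.copy2(src, dest)  # the "backup" step
    return sha256_of(src) == sha256_of(dest)  # the "test the restore" step

# Demonstration with throwaway files.
with tempfile.TemporaryDirectory() as tmp:
    src = os.path.join(tmp, "records.db")
    with open(src, "w") as f:
        f.write("critical business data")
    backups = os.path.join(tmp, "backups")
    os.makedirs(backups)
    print(backup_and_verify(src, backups))  # True
```

Running such a check on a schedule, and alerting when it returns False, turns "we have backups" into "we have backups that restore".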

Keeping your Systems Patched

New advisories addressing security vulnerabilities in software are published on a daily basis. It is not an easy task to stay up to date with all the vulnerabilities that apply to software installed on the network, therefore many organizations make use of a patch management system to handle the task. It is important to note that patches and security updates are not only issued for Microsoft products but also for third-party software. For example, although the web browser is running the latest updates, a desktop can still be compromised when visiting a website simply because it is running a vulnerable version of Adobe Flash.

Additionally, it may be important to assess the impact of a vulnerability before applying a patch, rather than applying patches religiously. It is also important to test security updates before applying them to a live system. The reason is that, from time to time, vendors issue patches that may conflict with other systems or that were not tested for your particular configuration.

Additionally, security updates may sometimes result in temporary downtime or exposure.

Simple systems are easier to manage, and therefore any security issues that apply to such systems can be addressed with relative ease. However, complex systems and networks make it harder for a security analyst to assess their security status. For example, if an organization does not need to expose a large number of services on the Internet, the firewall configuration would be quite straightforward. However, the greater the company’s need to be visible – an online retailer, for example – the more complex the firewall configuration will be, leaving room for possible security holes that could be exploited by attackers to access internal network services.

When servers and desktop computers have fewer software packages installed, they are easier to keep up to date and manage. This concept can work hand in hand with the principle of least privilege. By making use of fewer components, fewer software packages and fewer privileges, you reduce the attack surface while allowing security efforts to focus on real issues.

Conclusion

Security in small and medium-sized businesses is about more than just preventing viruses and blocking spam. In 2009, cybercrime is expected to increase as criminals attempt to exploit weaknesses in systems and in people. This document aims to give managers, analysts, administrators and operators in small and medium-sized businesses a snapshot of the IT security threats facing their organization. Every organization is different, but in many instances the threats are common to all. Security is a cost of doing business, but those that prepare themselves well against possible threats will benefit the most in the long term.



  • Hits: 33977

Web Security Software Dealing With Malware

It is widely acknowledged that any responsible modern-day organization will strive to protect its network against malware attacks. Each day brings on a spawning of increasingly sophisticated viruses, worms, spyware, Trojans, and all other kinds of malicious software which can ultimately lead to an organization's network being compromised or brought down. Private information can be inadvertently leaked, a company's network can crash; whatever the outcome, poor security strategies could equal disaster. Having a network that is connected to the Internet leaves you vulnerable to attack, but Internet access is an absolute necessity for most organizations, so the wise thing to do would be to have a decent web security package installed on your machines, preferably at the gateway.

There are several antivirus engines on the market and each product has its own heuristics, and subsequently its own particular strengths and weaknesses. It's impossible to claim any one as the best overall at any given time. It can never be predicted which antivirus lab will be the quickest to release an update providing protection against the next virus outbreak; it is often one company on one occasion and another one the next.

Web security can never be one hundred percent guaranteed at all times, but there are ways to significantly minimize the risks. It is good and usual practice to use an antivirus engine to help protect your network, but it would naturally be much better to use several of them at once. Why is this? If, hypothetically speaking, your organization uses product A, and a new virus breaks out, it might be Lab A or Lab B, or any other antivirus lab, which releases an update the fastest. So the logical conclusion would be that the more AV engines you make use of, the greater the likelihood of you nipping that attack in the bud.

This is one of the ways in which web security software can give you better peace of mind. Files which are downloaded on any of your company's computers can each be scanned using several engines, rather than just one, which could significantly reduce the time it takes to obtain the latest virus signatures, thereby diminishing the risk posed to your site by each new attack.
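The multi-engine idea described above boils down to a simple rule: flag a download if any engine objects. The sketch below illustrates that aggregation logic; the two "engines" are toy signature matchers standing in for real scanner SDKs or command-line scanners a gateway product would invoke.

```python
# Toy stand-ins for real AV engines, each with its own signatures.
def engine_a(data: bytes) -> bool:
    return b"EICAR" in data

def engine_b(data: bytes) -> bool:
    return b"X5O!P%" in data

def scan_with_engines(data: bytes, engines) -> bool:
    """Return True when at least one engine flags the payload as malicious."""
    return any(engine(data) for engine in engines)

clean = b"quarterly sales report"
suspect = b"X5O!P%@AP[4\\PZX54(P^)7CC)7}$EICAR-TEST"

print(scan_with_engines(clean, [engine_a, engine_b]))    # False
print(scan_with_engines(suspect, [engine_a, engine_b]))  # True
```

The "any engine wins" rule is what shrinks the signature-update window: the file is blocked as soon as the fastest lab ships a matching signature, regardless of which lab that turns out to be.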

Another plus side of web security software is that multiple download control policies can be set according to the individual organization's security policies; these can be user-, group- or IP-based, controlling the downloading of different file types such as JavaScript, MP3, MPEG and EXE by specific users, groups or IP addresses. Hazardous files like Trojan downloader programs very often appear disguised as harmless files in order to gain access to a system. A good web security solution will analyze and detect the real file types of HTTP/FTP file downloads, making sure that downloaded files contain no viruses or malware.
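Detecting a download's "real" file type typically means inspecting its leading magic bytes instead of trusting the extension. A minimal sketch, using a few well-known signatures (a real product carries a far larger table):

```python
# Well-known magic-byte signatures; illustrative subset only.
MAGIC_SIGNATURES = {
    b"MZ": "Windows executable",
    b"%PDF": "PDF document",
    b"\x89PNG": "PNG image",
    b"PK\x03\x04": "ZIP archive",
}

def real_file_type(data: bytes) -> str:
    """Identify a file by its leading bytes, ignoring its claimed extension."""
    for magic, name in MAGIC_SIGNATURES.items():
        if data.startswith(magic):
            return name
    return "unknown"

# A download named song.mp3 that starts with an EXE header is suspect.
print(real_file_type(b"MZ\x90\x00\x03"))  # Windows executable
print(real_file_type(b"%PDF-1.7\n"))      # PDF document
```

A gateway can then compare this detected type against the policy for the requesting user or group and block mismatches, such as an executable masquerading as a media file.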

The long and short of it is this: you want the best security possible for your network, but it's not within anyone's power to predict where the next patch will come from. Rather than playing Russian roulette by sticking to one AV engine, adopt a web security package that will enable you to use several of them.

  • Hits: 14009

The Web Security Strategy for Your Organization

In today's business world, internet usage has become a necessity for doing business. Unfortunately, a company's use of the internet comes with considerable risk to its network and business information.

Web security threats include phishing attacks, malware, scareware, rootkits, keyloggers, viruses and spam. While many attacks occur when information is downloaded from a website, others are now possible through drive-by attacks where simply visiting a website can infect a computer. These attacks usually result in data and information leakage, loss in productivity, loss of network bandwidth and, depending on the circumstances, even liability issues for the company. In addition to all this, cleanup from malware and other types of attacks on a company's network is usually costly in terms of both dollars and the time spent recovering from these web security threats.

Fortunately, there are steps a company can take to protect itself from these web security threats. Some are more effective than others, but the following suggestions should help narrow down the choices.

Employee Internet Usage Policy

The first and probably the least expensive solution would be to develop and implement an employee internet usage policy. This policy should clearly define what an employee can and cannot do when using the internet. It should also address personal usage of the internet on the business computer. The policy should identify the type of websites that can be accessed by the employee for business purposes and what, if any, type of material can be downloaded from the internet. Always make sure the information contained in the policy fits your unique business needs and environment.

Employee Education

Train your employees to recognize web security threats and how to lower the risk of infection. In today's business environment, laptops, smartphones, iPads, and other similar devices are not only used for business purposes, but also for personal and home use. When devices are used at home, the risk of an infection on that device is high and malware could easily be transferred to the business network. This is why employee education is so important.

Patch Management

Good patch management practices should also be in place and implemented using a clearly defined patch management policy. Operating systems and applications, including browsers, should be updated regularly with the latest available security patches. The browser, whether a mobile version used on a smartphone or a full version used on a computer, is a primary vector for malware attacks and merits particular attention. Using the latest version of a browser is a must, as known vulnerabilities will already have been addressed.

Internet Monitoring Software

Lastly, I would mention the use of internet monitoring software. Internet monitoring software should be able to protect the network against malware, scareware, viruses, phishing attacks and other malicious software. A robust internet monitoring software solution will help to enforce your company's internet usage policy by blocking connections to unacceptable websites, by monitoring downloads, and by monitoring encrypted web traffic going into and out of the network.

There is no single method that can guarantee 100% web security protection, however a well thought-out strategy is one huge step towards minimizing risk that the network could be targeted by the bad guys.

 



  • Hits: 18702

Introduction To Network Security - Part 1

As more and more people and businesses have begun to use computer networks and the Internet, the need for a secure computing environment has never been greater. Right now, information security professionals are in great demand and the importance of the field is growing every day. All the industry leaders have been placing their bets on security in the last few years.

All IT vendors agree today that secure computing is no longer an optional component; it is something that should be integrated into every system rather than thrown in as an afterthought. Programmers used to concentrate on getting a program working and then (if there was time) try to weed out possible security holes.

Now, applications must be coded from the ground up with security in mind, as these applications will be used by people who expect the security and privacy of their data to be maintained.

This article intends to serve as a very brief introduction to information security with an emphasis on networking.

The reasons for this are twofold:

Firstly, in case you did not notice, this is a networking website;

Secondly, the time a system is most vulnerable is when it is connected to the Internet.

For an understanding of what lies in the following pages, you should have decent knowledge of how the Internet works. You don't need to know the ins and outs of every protocol under the sun, but a basic understanding of network (and obviously computer) fundamentals is essential.

If you're a complete newbie, however, do not despair. We would recommend you look under the Networking menu at the top of the site, where you will find our accolade-winning material on pretty much everything in networking.

Hacker or Cracker?

There is a very well-worn argument against the incorrect use of the word 'hacker' to denote a computer criminal -- the correct term is 'cracker' or, when referring to people who have automated tools and very little real knowledge, 'script kiddie'. Hackers are actually just very adept programmers (the term came from 'hacking the code', where a programmer would quickly program fixes to problems he faced).

While many feel that this distinction has been lost due to the media portraying hackers as computer criminals, we will stick to the original definitions throughout these articles, more than anything to avoid the inevitable flame mail we will get if we don't!

On to the Cool Stuff!

This introduction is broadly broken down into the following parts :

• The Threat to Home Users
• The Threat to the Enterprise
• Common Security Measures Explained
• Intrusion Detection Systems
• Tools an Attacker Uses
• What is Penetration-Testing?
• A Brief Walk-through of an Attack
• Where Can I Find More Information?
• Conclusion

The Threat to Home Users

Many people underestimate the threat they face when they use the Internet. The prevalent mindset is "who would bother to attack me or my computer?" It is true that an attacker is unlikely to target you individually; to him, you are just one more system on the Internet.

Many script kiddies simply unleash an automated tool that scans large ranges of IP addresses looking for vulnerable systems. When it finds one, the tool automatically exploits the vulnerability and takes control of the machine.

The script kiddie can later use this vast collection of 'owned' systems to launch denial of service (DoS) attacks, or simply cover his tracks by hopping from one system to another in order to hide his real IP address.

This technique of proxying attacks through many systems is quite common, as it makes it very difficult for law enforcement to back trace the route of the attack, especially if the attacker relays it through systems in different geographic locations.

It is very feasible -- in fact quite likely -- that your machine will be in the target range of such a scan, and if you haven't taken adequate precautions, it will be owned.

The other threat comes from computer worms that have recently been the subject of a lot of media attention. Essentially a worm is just an exploit with a propagation mechanism. It works in a manner similar to how the script kiddie's automated tool works -- it scans ranges of IP addresses, infects vulnerable machines, and then uses those to scan further.

Thus the rate of infection increases geometrically as each infected system starts looking for new victims. In theory, a worm could be written with such a refined scanning algorithm that it could infect 100% of all vulnerable machines within ten minutes. This leaves hardly any time for response.
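The geometric growth described above is easy to see with a back-of-the-envelope model: if every infected host finds a fixed number of new victims per time step, the infected population multiplies each step until the vulnerable pool is exhausted. The numbers below are purely illustrative.

```python
# Toy model of worm propagation: each infected host infects k new vulnerable
# hosts per time step, so infections grow geometrically until the vulnerable
# population is saturated. Parameters are illustrative, not measured data.
def worm_spread(vulnerable: int, k: int, steps: int) -> list[int]:
    infected = 1
    history = [infected]
    for _ in range(steps):
        infected = min(vulnerable, infected * (1 + k))
        history.append(infected)
    return history

# 1 million vulnerable hosts, each infected machine finding 9 victims per step:
print(worm_spread(1_000_000, 9, 6))
# [1, 10, 100, 1000, 10000, 100000, 1000000] -- saturation in just 6 steps
```

This is why response windows shrink so dramatically: each step multiplies the scanning capacity of the worm itself.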

Another threat comes in the form of viruses, most often these may be propagated by email and use some crude form of social engineering (such as using the subject line "I love you" or "Re: The documents you asked for") to trick people into opening them. No form of network level protection can guard against these attacks.

The effects of a virus range from the mundane (simply spreading to people in your address book) to the devastating (deleting critical system files). A couple of years ago there was an email virus that emailed confidential documents from the popular Windows "My Documents" folder to everyone in the victim's address book.

So while you per se may not be high profile enough to warrant a systematic attack, you are what I like to call a bystander victim: someone who got attacked simply because you could be attacked, and you were there to be attacked.

As broadband and always-on Internet connections become commonplace, hackers are targeting the IP ranges where they know they will find cable modem customers. They do this because they know they will find unprotected, always-on systems there that can be used as a base for launching other attacks.

The Threat to the Enterprise

Most businesses have conceded that having an Internet presence is critical to keep up with the competition, and most of them have realised the need to secure that online presence.

Gone are the days when firewalls were an option and employees were given unrestricted Internet access. These days most medium sized corporations implement firewalls, content monitoring and intrusion detection systems as part of the basic network infrastructure.

For the enterprise, security is very important -- the threats include:

• Corporate espionage by competitors,
• Attacks from disgruntled ex-employees
• Attacks from outsiders who are looking to obtain private data and steal the company's crown jewels (be it a database of credit cards, information on a new product, financial data, source code to programs, etc.)
• Attacks from outsiders who just want to use your company's resources to store pornography, illegal pirated software, movies and music, so that others can download and your company ends up paying the bandwidth bill and in some countries can be held liable for the copyright violations on movies and music.

As far as securing the enterprise goes, it is not enough to merely install a firewall or intrusion detection system and assume that you are covered against all threats. The company must have a complete security policy, and basic training must be imparted to all employees telling them what they should and should not do, as well as who to contact in the event of an incident. Larger companies may even have an incident response or security team to deal specifically with these issues.

One has to understand that security in the enterprise is a 24/7 problem. There is a famous saying, "A chain is only as strong as its weakest link", and the same rule applies to security.

After the security measures are put in place, someone has to take the trouble to read the logs, occasionally test the security, follow mailing-lists of the latest vulnerabilities to make sure software and hardware is up-to-date etc. In other words, if your organisation is serious about security, there should be someone who handles security issues.

This person is often a network administrator, but invariably, in the chaotic throes of day-to-day administration (yes, we all dread user support calls!), the security of the organisation gets compromised -- for example, an admin who needs to deliver 10 machines to a new department may not password-protect the administrator account, just because it saves him some time and lets him meet a deadline. In short, an organisation is either serious about security issues or does not bother with them at all.

While the notion of 24/7 security may seem paranoid to some people, one has to understand that in a lot of cases a company is not specifically targeted by an attacker. The company's network just happens to be one the attacker knows how to break into, and thus it gets targeted. This is often the case in attacks where company FTP or web servers have been used to host illegal material.

The attackers don't care what the company does - they just know that this is a system accessible from the Internet where they can store large amounts of warez (pirated software), music, movies or pornography. This is actually a much larger problem than most people are aware of because, in many cases, the attackers are very good at hiding the illegal data. It's only when the bandwidth bill has to be paid that someone realises something is amiss.

Firewalls

By far the most common security measure these days is a firewall. A lot of confusion surrounds the concept of a firewall, but it can basically be defined as any perimeter device that permits or denies traffic based on a set of rules configured by the administrator. Thus a firewall may be as simple as a router with access lists, or as complex as a set of modules distributed through the network and controlled from one central location.

The firewall protects everything 'behind' it from everything in front of it. Usually the 'front' of the firewall is its Internet facing side, and the 'behind' is the internal network. The way firewalls are designed to suit different types of networks is called the firewall topology.

Here is the link to a detailed explanation of different firewall topologies: Firewall.cx Firewall Topologies

You also get what are known as 'personal firewalls', such as ZoneAlarm, Sygate Personal Firewall, Tiny Personal Firewall, Symantec Endpoint Security, etc.

These are packages meant for individual desktops, and they are fairly easy to use. The first thing they do is make the machine invisible to pings and other network probes. Most of them also let you choose which programs are allowed to access the Internet, so you can allow your browser and mail client, but if you see some suspicious program trying to access the network, you can disallow it. This is a form of 'egress filtering', or outbound traffic filtering, and provides very good protection against trojan horse programs and worms.

However, firewalls are no cure-all for network security woes. A firewall is only as good as its rule set, and there are many ways an attacker can find common misconfigurations and errors in the rules. For example, say the firewall allows through all traffic originating from port 53 (DNS) so that everyone can resolve names; the attacker could then use this rule to his advantage. By changing the source port of his attack or scan to port 53, he gets all of his traffic through the firewall, because it assumes the traffic is DNS.
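The source-port-53 hole can be made concrete with a toy rule evaluator. The rule format below is hypothetical and far simpler than any real firewall; it only exists to show why a stateless "allow anything from source port 53" rule lets forged traffic through.

```python
# Sketch of the misconfiguration described above: a stateless rule that
# permits anything with source port 53 (intended for DNS replies) also
# passes an attacker's probe once he forges his source port to 53.
def firewall_allows(packet: dict) -> bool:
    """Naive ruleset: any src port 53 is assumed to be DNS and allowed."""
    if packet["src_port"] == 53:
        return True
    return packet["dst_port"] in (80, 443)  # plus normal web traffic

# Legitimate DNS reply -- allowed, as intended:
print(firewall_allows({"src_port": 53, "dst_port": 33210}))   # True
# Attacker's probe to a blocked service, source port forged to 53 -- also allowed:
print(firewall_allows({"src_port": 53, "dst_port": 139}))     # True
# The same probe from a random source port is blocked:
print(firewall_allows({"src_port": 40123, "dst_port": 139}))  # False
```

Stateful firewalls close this hole by tracking which DNS queries actually went out, rather than trusting the source port alone.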

Bypassing firewalls is a whole study in itself, and a very interesting one, especially for those with a passion for networking, as it normally involves misusing the way TCP and IP are supposed to work. That said, firewalls today are becoming very sophisticated, and a well-installed firewall can severely thwart a would-be attacker's plans.

It is important to remember that the firewall does not look into the data section of the packet. Thus, if you have a webserver that is vulnerable to a CGI exploit and the firewall is set to allow traffic to it, the firewall cannot stop an attacker from attacking the webserver, because it does not inspect the data inside the packet. That would be the job of an intrusion detection system (covered further on).

Anti-Virus Systems

Everyone is familiar with desktop antivirus packages like Norton Antivirus and McAfee. The way these operate is fairly simple -- when researchers find a new virus, they identify some unique characteristic it has (maybe a registry key it creates or a file it replaces) and from this they write the virus 'signature'.

The whole set of signatures that your antivirus scans for is known as the virus 'definitions'. This is the reason why keeping your virus definitions up to date is very important. Many antivirus packages have an auto-update feature for you to download the latest definitions. The scanning ability of your software is only as good as the date of your definitions. In the enterprise, it is very common for admins to install antivirus software on all machines but have no policy for regularly updating the definitions. This is meaningless protection and serves only to provide a false sense of security.
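The dependence on up-to-date definitions can be boiled down to a few lines: a sample whose signature was added in yesterday's update is caught today but was invisible before it. The signatures below are arbitrary byte strings invented for illustration, not real virus definitions.

```python
# Toy illustration of signature-based detection: the scanner is only as
# good as its definitions. A sample carrying a newly discovered marker is
# missed by stale definitions and caught only after the update.
OLD_DEFINITIONS = {b"EICAR-TEST"}
UPDATED_DEFINITIONS = OLD_DEFINITIONS | {b"NEW-WORM-MARKER"}

def scan(sample: bytes, definitions: set) -> bool:
    """Return True if any known signature appears anywhere in the sample."""
    return any(sig in sample for sig in definitions)

payload = b"...padding...NEW-WORM-MARKER...padding..."
print(scan(payload, OLD_DEFINITIONS))      # False: definitions out of date
print(scan(payload, UPDATED_DEFINITIONS))  # True: caught after the update
```

Real engines add heuristics and behavioural analysis on top, but the core lookup-against-definitions step works exactly like this, which is why a stale definition file is barely better than no scanner at all.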

With the recent spread of email viruses, antivirus software at the MTA (Mail Transfer Agent, also known as the 'mail server') is becoming increasingly popular. The mail server will automatically scan any email it receives for viruses and quarantine the infections. The idea is that since all mail passes through the MTA, this is the logical point to scan for viruses. Given that most mail servers have a permanent connection to the Internet, they can regularly download the latest definitions. On the downside, these can be evaded quite simply: if you zip up the infected file or trojan, or encrypt it, the antivirus system may not be able to scan it.

End users must be taught how to respond to anti virus alerts. This is especially true in the enterprise -- an attacker doesn't need to try and bypass your fortress like firewall if all he has to do is email trojans to a lot of people in the company. It just takes one uninformed user to open the infected package and he will have a backdoor to the internal network.

It is advisable that the IT department gives a brief seminar on how to handle email from untrusted sources and how to deal with attachments. These are very common attack vectors simply because you may harden a computer system as much as you like, but the weak point still remains the user who operates it. As crackers say 'The human is the path of least resistance into the network'.

Intrusion Detection Systems

IDSs have become the 'next big thing', the way firewalls were some time ago. There are basically two types of Intrusion Detection Systems:

• Host based IDS
• Network based IDS

Host based IDS - These are installed on a particularly important machine (usually a server or some other important target) and are tasked with making sure that the system state matches a particular baseline. For example, the popular file-integrity checker Tripwire is run on the target machine just after it has been installed. It creates a database of file signatures for the system and regularly checks the current system files against their known 'safe' signatures. If a file has been changed, the administrator is alerted. This works very well, as most attackers will replace a common system file with a trojaned version to give them backdoor access.
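A Tripwire-style integrity check can be reduced to its essence: hash every watched file at install time, then periodically re-hash and report anything that changed. The sketch below is a simplified illustration of that idea, not Tripwire's actual implementation; the paths and contents are stand-ins.

```python
# Minimal file-integrity baseline in the spirit of a host-based IDS:
# record SHA-256 hashes at install time, then diff against them later.
import hashlib

def fingerprint(contents: dict) -> dict:
    """Baseline: map each file path to the SHA-256 hex digest of its contents."""
    return {path: hashlib.sha256(data).hexdigest() for path, data in contents.items()}

def changed_files(baseline: dict, current: dict) -> list:
    """Return paths whose current hash no longer matches the baseline."""
    return [p for p, data in current.items()
            if baseline.get(p) != hashlib.sha256(data).hexdigest()]

install_time = {"/bin/login": b"original binary", "/bin/ls": b"ls binary"}
baseline = fingerprint(install_time)

# An attacker swaps /bin/login for a trojaned copy with a backdoor:
now = {"/bin/login": b"trojaned binary", "/bin/ls": b"ls binary"}
print(changed_files(baseline, now))  # ['/bin/login']
```

The hard part in practice is not the hashing but protecting the baseline database itself, which is why Tripwire stores it signed and ideally on read-only media.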

Network based IDS - These are more popular and quite easy to install. Basically they consist of a normal network sniffer running in promiscuous mode (in this mode the network card picks up all traffic even if its not meant for it). The sniffer is attached to a database of known attack signatures and the IDS analyses each packet that it picks up to check for known attacks. For example a common web attack might contain the string '/system32/cmd.exe?' in the URL. The IDS will have a match for this in the database and will alert the administrator.

Newer IDSs support active prevention of attacks - instead of just alerting an administrator, the IDS can dynamically update the firewall rules to disallow traffic from the attacking IP address for some amount of time, or it can use 'session sniping' to fool both sides of the connection into closing down so that the attack cannot be completed.

Unfortunately, IDS systems generate a lot of false positives (a false positive is basically a false alarm, where the IDS sees legitimate traffic and for some reason matches it against an attack pattern). This tempts a lot of administrators into turning them off or, even worse, not bothering to read the logs, which may result in an actual attack being missed.

IDS evasion is also not all that difficult for an experienced attacker. The signature is based on some unique feature of the attack, so the attacker can modify the attack so that the signature is not matched. For example, the above attack string '/system32/cmd.exe?' could be percent-encoded in hexadecimal to look like the following:

'%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f'

which might be missed entirely by the IDS. Furthermore, an attacker could split the attack across many packets by fragmenting them. Each packet would then contain only a small part of the attack, and the signature would not match. Even if the IDS is able to reassemble fragmented packets, this creates a time overhead, and since IDSs have to run at near real-time, they tend to drop packets while they are processing. IDS evasion is a topic for a paper of its own.
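The percent-encoding trick can be demonstrated in a few lines: a naive substring match on the raw bytes misses the encoded form, while an IDS that normalizes (URL-decodes) the traffic before matching still catches it. This is a sketch of the principle, not any particular IDS's matching engine.

```python
# Encoding evasion vs. normalization: the same attack string, once
# percent-encoded, no longer matches a naive signature check. Decoding
# the traffic before matching restores detection.
from urllib.parse import unquote

SIGNATURE = "/system32/cmd.exe?"
raw_attack = "/system32/cmd.exe?"
encoded_attack = "%2f%73%79%73%74%65%6d%33%32%2f%63%6d%64%2e%65%78%65%3f"

print(SIGNATURE in raw_attack)               # True: plain form is detected
print(SIGNATURE in encoded_attack)           # False: naive match is evaded
print(SIGNATURE in unquote(encoded_attack))  # True: caught after decoding
```

This is why modern network IDSs apply protocol-aware normalization (URL decoding, case folding, fragment reassembly) before signature matching, at the cost of the processing overhead described above.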

The advantage of a network based IDS is that it is very difficult for an attacker to detect. The IDS itself does not need to generate any traffic, and in fact many of them have a broken TCP/IP stack so they don't have an IP address. Thus the attacker does not know whether the network segment is being monitored or not.

Patching and Updating

It is embarrassing and sad that this has to be listed as a security measure. Despite being one of the most effective ways to stop an attack, there is a tremendously laid-back attitude to regularly patching systems. There is no excuse for not doing this, and yet the level of patching remains woefully inadequate. Take, for example, the MSBlaster worm that recently spread havoc: the exploit was known almost a month in advance and a patch had been released, yet millions of users and businesses were still infected. Admins know that having to patch 500 machines is a laborious task, but the way I look at it, I would rather update my systems on a regular basis than wait for disaster to strike and then run around trying to patch and clean up those 500 systems.

For the home user, it's a simple matter of running the automatic update software that every worthwhile OS comes with. In the enterprise there is no 'easy' way to patch large numbers of machines, but there are patch deployment mechanisms that take much of the burden away. Frankly, it is part of an admin's job to do this, and when a network is horribly fouled up by the latest worm, it just means someone, somewhere didn't do his job well enough.

Click here to read 'Introduction to Network Security - Part 2'

  • Hits: 79388

The VIRL Book – A Guide to Cisco’s Virtual Internet Routing Lab (Cisco Lab)

Cisco's Virtual Internet Routing Lab (VIRL) is a network simulation tool developed by Cisco that allows engineers, certification candidates and network architects to create their own Cisco lab using the latest Cisco IOS devices such as routers, Catalyst or Nexus switches, ASA Firewall appliances and more.

Read Jack Wang's Introduction to Cisco VIRL article to find out more information about the product

Being a fairly new but extremely promising product it’s quickly becoming the standard tool for Cisco Lab simulations. Managing and operating Cisco VIRL might have its challenges, especially for those new to the virtualization world, but one of the biggest problems has been the lack of dedicated online resources for VIRL management leaving a lot of unanswered questions on how to use VIRL for different types of simulations, how to build topologies, how to fine tune them etc.

The recent publication of "The VIRL Book" by Jack Wang has changed the game for VIRL users. The tasks outlined above, plus a lot more, are now becoming easier to handle, helping users manage their VIRL server in an effective and easy-to-understand way.

The introduction to VIRL has been well crafted, as Jack addresses every aspect of VIRL: why one should opt for VIRL, what VIRL can offer and how it differs from other simulation tools.

This unique title addresses all possible aspects of VIRL and has been written to satisfy even the most demanding users seeking to create complex network simulations. Key topics covered include:

  • Planning the VIRL Installation
  • Installing VIRL
  • Creating your first simulation
  • Basic operation & best practices
  • Understanding the anatomy of VIRL
  • External Connectivity to the world
  • Advanced features
  • Use VIRL for certifications
  • Running 3rd party virtual machines
  • Sample Network Topologies

The Planning the VIRL Installation section walks through the various VIRL installation options (a virtual machine, a bare-metal installation or the cloud) and what kind of hardware suits each. This makes life easier for VIRL users, ensuring they plan well and select the right hardware for their VIRL installation.

Understanding the Cisco VIRL work-flow

Figure 1. Understanding the Cisco VIRL work-flow

The Installing VIRL section is quite engaging, as Jack walks through the installation of VIRL on various platforms: VMware vSphere ESXi, VMware Fusion, VMware Workstation, bare metal and the cloud. All these installations are described in simple steps and with great illustrations. The troubleshooting part happens to be the cream of this section, as it dives into small details such as BIOS settings and more, proving how attentive the author is to simplifying troubleshooting.

The Creating your first simulation section is very helpful, as it goes through in depth how to create a simulation, the comparison of Design mode and Simulation mode, generating initial configurations, etc. This section really helped us understand VIRL in depth, especially how to create a simulation with auto configurations.

The External connectivity to the world section helps the user open up to a new world of virtualization and lab simulations. Jack really mastered this section, simplifying the concepts of the FLAT and SNAT networks while at the same time dealing with issues like how to add 3rd party virtual machines into VIRL. The Palo Alto Firewall integration happens to be our favorite.

To summarize, this title is a must-have guide for all Cisco VIRL users, as it deals with every aspect of VIRL. We believe it not only simplifies the use of the product but also helps users understand how far they can go with it. Jack's hard work and insights are visible in every section of the book, and we believe it is no easy task to come out with such a great title. We certainly congratulate Jack. This is a title that should not be missing from any Cisco VIRL user's library.

  • Hits: 17769

Cisco Press Review for “Cisco Firepower and Advanced Malware Protection Live Lessons” Video Series

Title:              Cisco Firepower & Advanced Malware Protection Live Lessons
Authors:        Omar Santos
ISBN-10:       0-13-446874-0
Publisher:     Cisco Press
Published:    June 22, 2016
Edition:         1st Edition
Language:    English

The "Cisco Firepower and Advanced Malware Protection Live Lessons" video series by Omar Santos is the icing on the cake for someone who wants to start their journey into Cisco Next-Generation Network Security. This video series contains eight lessons on the following topics:

Lesson 1: Fundamentals of Cisco Next-Generation Network Security

Lesson 2: Introduction and Design of Cisco ASA with FirePOWER Services

Lesson 3: Configuring Cisco ASA with FirePOWER Services

Lesson 4: Cisco AMP for Networks

Lesson 5: Cisco AMP for Endpoints

Lesson 6: Cisco AMP for Content Security

Lesson 7: Configuring and Troubleshooting the Cisco Next-Generation IPS Appliances

Lesson 8: Firepower Management Center

Lesson 1 deals with the fundamentals of Cisco Next-Generation Network Security products: security threats, Cisco ASA Next-Generation Firewalls, FirePOWER modules, Next-Generation Intrusion Prevention Systems, Advanced Malware Protection (AMP), Email Security, Web Security, Cisco ISE, Cisco Meraki Cloud Solutions and much more. Omar Santos has done an exceptional job creating short videos of a maximum of 12 minutes each. He builds up the series with a very informative introduction covering the security threats the industry is currently facing, the emergence of the Internet of Things (IoT) and its impact, and the challenges of detecting threats.

Lesson 2 deals with the design aspects of the ASA FirePOWER Services module: how it can be deployed in production networks, how High Availability (HA) works, how ASA FirePOWER services can be deployed at the Internet edge, and the VPN scenarios it supports. The modules in this lesson are very brief and provide an overview; anyone looking for in-depth information should refer to the Cisco documentation.

Lesson 3 is the most important lesson of the series, as it deals with the initial setup of the Cisco ASA FirePOWER module in Cisco ASA 5585-X and Cisco ASA 5500-X appliances. Omar also demonstrates how the Cisco ASA redirects traffic to the FirePOWER module, and he concludes the lesson with basic troubleshooting steps.

Lessons 4, 5 and 6 are dedicated to Cisco AMP for networks, endpoints and content security. Omar walks through an introduction to AMP, and each lesson deals with its various options; it is a good overview of AMP, and he has done a commendable job keeping it flowing smoothly. Cisco AMP for Endpoints is quite interesting, as Omar articulates the information in an accessible way and the demonstrations are good to watch.

The best part of this video series is the Lesson that deals with the configuration of Cisco ASA with FirePOWER services, in a very brief way Omar shows the necessary steps for the successful deployment in the Cisco ASA 5585-X and Cisco ASA 5500-X platform.

The great thing about Cisco Press is that it ensures one doesn’t need to hunt for reference or study materials, it always has very informative products in the form of videos and books. You can download these videos and watch them at your own pace.

To conclude, the video series is really good to watch, as it covers the various topics of Cisco Next-Generation Security products in segments of less than 13 minutes each, and the language used is quite simple and easy to understand. However, the series could do with more live demonstrations, especially one showing how to reimage the ASA appliances to install the Cisco FirePOWER module.

This is a highly recommended product especially for engineers interested in better understanding how Cisco’s Next-Generation security products operate and more specifically the Cisco FirePOWER services, Cisco AMP and advanced threat detection & protection.

  • Hits: 11166

Cisco CCNP Routing & Switching v2.0 – Official Cert Guide Library Review (Route 300-101, Switch 300-115 & Tshoot 300-135)

Title:          Cisco CCNP Routing & Switching v2.0 – Official Cert Guide Library
Authors:    Kevin Wallace, David Hucaby, Raymond Lacoste    
ISBN-13:    978-1-58720-663-4
Publisher:  Cisco Press
Published:  December 23rd, 2014
Edition:      1st Edition
Language:  English

Reviewer: Chris Partsenidis

star-5  

The Cisco CCNP Routing and Switching (CCNP R&S) certification is the most popular Cisco Professional series certification at the moment, requiring candidates to sit and pass three professional-level exams: Route 300-101, Switch 300-115 and Tshoot 300-135.

The Cisco Press CCNP R&S v2.0 Official Cert Guide Library has been updated to reflect the latest CCNP R&S curriculum updates (2014) and is perhaps the only comprehensive study guide out there that can help you pass all three exams on your first try, saving money, time and unwanted disappointment. And no, this is not a sales pitch, as I personally used the library for my recently acquired CCNP R&S certification! I'll be writing about my CCNP R&S certification path experience very soon on Firewall.cx.

The CCNP R&S v2 Library has been written by three well-known CCIE veteran engineers (Kevin Wallace, David Hucaby, Raymond Lacoste) and with the help and care of Cisco Press, they’ve managed to produce the best CCNP R&S study guide out there.   While the CCNP R&S Library is aimed for CCNP certification candidates – it can also serve as a great reference guide for those seeking to increase their knowledge on advanced networking topics, technologies and improve their troubleshooting skills.

The Cisco Press CCNP R&S v2 Library is not just a simple update to the previous study guide. Key topics for each of the three exams are now clearer than ever, with plentiful examples, great diagrams, finer presentation and analysis.

The CCNP Route exam (300-101) emphasizes a number of technologies and features that are also reflected in the ROUTE study guide. IPv6 (dual-stack), EIGRP for IPv6 & OSPF for IPv6, RIPng (RIP for IPv6), NAT (IPv4 & IPv6) and VPN concepts (DMVPN and Easy VPN) are amongst the list of ‘hot’ topics covered in the ROUTE book. Similarly, the CCNP Switch exam (300-115) emphasizes, amongst other topics, Cisco StackWise, Virtual Switching System (VSS) and advanced Spanning Tree Protocol implementations – all of which are covered extensively in the SWITCH book.

Each of the three books is accompanied by a CD, containing over 200 practice questions (per CD) that are designed to help prepare the candidate for the real exam. Additional material on each CD includes memory table exercises and answer keys, a generous amount of videos, plus a study planner tool – that’s pretty much everything you’ll need for a successful preparation and achieving the ultimate goal: passing each exam.

Using the CCNP R&S v2 Library to help me prepare for each CCNP exam was the best thing I did after making the decision to pursue the CCNP certification. Now it’s proudly sitting amongst my other study guides, used occasionally when I need a refresher on complex networking topics.

  • Hits: 16782

GFI’s LANGUARD Update – The Most Trusted Patch Management Tool & Vulnerability Scanner Just Got Better!

GFI’s LanGuard is one of the world’s most popular and trusted patch management & vulnerability scanning products, designed to effectively monitor and manage networks of any size. IT Administrators, Network Engineers and IT Managers who have worked with LanGuard would surely agree that the above statement is no exaggeration.

Readers who haven’t heard of or worked with GFI’s LanGuard should definitely visit our LanGuard 2014 product review to read about the features this unique network security product offers and download their free copy.

GFI recently released an update to LanGuard, taking the product to a whole new level by providing new key-features that have caught us by surprise.

Following is a short list of them:

  • Mobile device scanning:  customers can audit mobile devices that connect to Office 365, Google Apps and Apple Profile Manager.
  • Expanded vulnerability assessment for network devices: GFI LanGuard 2014 R2 offers vulnerability assessment of routers, printers and switches from the following vendors: Cisco, 3Com, Dell, SonicWALL, Juniper Networks, NETGEAR, Nortel, Alcatel, IBM and Linksys. 
  • CIPA compliance reports: additional reporting to ensure US schools and libraries adhere to the Children’s Internet Protection Act (CIPA). GFI LanGuard now has dedicated compliance reports for 11 security regulations and standards, including PCI DSS, HIPAA, SOX and PSN CoCo.
  • Support for Fedora: Fedora is the 7th Linux distribution supported by LanGuard for automatic patch management.
  • Chinese Localization: GFI LanGuard 2014 R2 is now also available in Chinese Traditional and Simplified versions.

One of the features we loved was the incredible support of Cisco products. With its latest release, GFI LanGuard supports over 1500 different Cisco products ranging from routers (including the newer ISR Gen 2), Catalyst switches (Layer2 & Layer3 switches), Cisco Nexus switches, Cisco Firewalls (PIX & ASA Series), VPN Gateways, Wireless Access points, IPS & IDS Sensors, Voice Gateways and much more!

  • Hits: 11543

CCIE Collaboration Quick Reference Review

Title:              CCIE Collaboration Quick Reference
Authors:        Akhil Behl
ASIN:             B00KDIM9FI
Publisher:      Cisco Press
Published:     May 16, 2014
Edition:         1st Edition
Language:     English

Reviewer: Arani Mukherjee

star-5  

This ebook has been designed for a specific target audience, as the title suggests, so it cannot be faulted for not suiting all levels of Cisco expertise. Furthermore, since it is a quick reference, there is no scope for poetic licence. As a quick reference, it achieves two key aims:

1) Provide precise information
2) Do it in a structured format

It also eliminates complexity and ambiguity in the subject matter by adhering to these two key aims.

Readers of this review should bear in mind that it is not about the content or subject matter and its technical accuracy; that has already been addressed by the technical reviewer, as mentioned in the front matter of the ebook. This review is about how effectively the ebook manages to deliver key information to its users.

So, to follow up on that dictum, it would be wise to scan through how the material has been laid out.

It revolves around the Cisco Unified Communications (UC) workspace service infrastructure, explaining what it stands for and how it delivers what it promises. The first few chapters are all about the deployment of this service. Quality of Service (QoS) follows deployment; this chapter is dedicated entirely to ensuring the network infrastructure provides classification policies and scheduling for multiple network traffic classes.

The next chapter is Telephony Standards and Protocols. It covers the various voice-based protocols and their respective criteria, including analog, digital and fax communication protocols.

From this point onwards the reference material concentrates purely on the Cisco Unified Communication platform. It discusses the relevant subsections of CUCM in the following line-up:

  • Cisco Unified Communications Manager
  • Cisco Unified Communications Security
  • Cisco Unity Connection
  • Cisco Unified Instant Messaging and Presence
  • Cisco Unified Contact Centre Express
  • Cisco IOS Unified Communications Applications &
  • Cisco Collaboration Network Management

In conclusion, what we need to prove or disprove are the key aims of a quick reference:

Does it provide precise information? - The answer is Yes. It does so by virtue of being a reference guide: the information has to be precise, as it will be used in situations where its credibility and validity cannot be in question.

Does it do the above in a structured manner? - The answer is Yes. The layout of the chapters in its current form helps to achieve that. The trajectory of the discussion through the material ensures it as well.

Does it eliminate any complexity and ambiguity? - The answer again is Yes. This is technical reference material, not a philosophical debate penned for the benefit of its readers. The author's approach is straightforward, following the natural order of events: understanding the concept, deploying the technology, ensuring quality of service, and managing the technology to provide a robust, efficient workspace environment.

In addition to the above, it needs to be mentioned that, since it is an eBook, users will find it easy to use from various mobile platforms such as tablets or smartphones. It wouldn’t be easy to carry around a 315-page reference guide, even if it were printed on both sides of the paper!

For its target audience, this eBook will live up to its readers’ expectations and is highly recommended for anyone pursuing the CCIE Collaboration or CCNP Voice certification.

  • Hits: 15555

CCIE Collaboration Quick Reference Exam Guide

Title:             CCIE Collaboration Quick Reference
Authors:        Akhil Behl
ISBN-10(13): 0-13-384596-6
Publisher:      Cisco Press
Published:      May  2014
Edition:          1st Edition
Language:      English

star-5

This title addresses the current CCIE Collaboration exam from both written and lab exam perspectives. It helps CCIE aspirants achieve the CCIE Collaboration certification and excel in their professional careers. The ebook is now available for pre-order and is scheduled for release on 16 May 2014.
 
Here’s the excerpt from Cisco Press website:

CCIE Collaboration Quick Reference provides you with detailed information, highlighting the key topics on the latest CCIE Collaboration v1.0 exam. This fact-filled Quick Reference allows you to get all-important information at a glance, helping you to focus your study on areas of weakness and to enhance memory retention of important concepts. With this book as your guide, you will review and reinforce your knowledge of and experience with collaboration solutions integration and operation, configuration, and troubleshooting in complex networks. You will also review the challenges of video, mobility, and presence as the foundation for workplace collaboration solutions. Topics covered include Cisco collaboration infrastructure, telephony standards and protocols, Cisco Unified Communications Manager (CUCM), Cisco IOS UC applications and features, Quality of Service and Security in Cisco collaboration solutions, Cisco Unity Connection, Cisco Unified Contact Center Express, and Cisco Unified IM and Presence.

This book provides a comprehensive final review for candidates taking the CCIE Collaboration v1.0 exam. It steps through exam objectives one-by-one, providing concise and accurate review for all topics. Using this book, exam candidates will be able to easily and effectively review test objectives without having to wade through numerous books and documents for relevant content for final review.

Table of Contents

Chapter 1 Cisco Collaboration Infrastructure
Chapter 2 Understanding Quality of Service
Chapter 3 Telephony Standards and Protocols
Chapter 4 Cisco Unified Communications Manager
Chapter 5 Cisco Unified Communications Security
Chapter 6 Cisco Unity Connection
Chapter 7 Cisco Unified IM Presence
Chapter 8 Cisco Unified Contact Center Express
Chapter 9 Cisco IOS UC Applications
Chapter 10 Cisco Collaboration Network Management

If you are considering sitting for your CCIE Collaboration exam, then this is perhaps one of the most valuable resources you'll need to get your hands on!
  • Hits: 13134

Network Security Product Review: GFI LanGuard 2014 - The Ultimate Tool for Admins and IT Managers

Review by Arani Mukherjee

For a company’s IT department, it is essential to manage and monitor all assets with a high level of effectiveness, efficiency and transparency for users. Centralised management software becomes a crucial tool for the IT department to ensure that all assets are performing at their utmost efficiency, and that they are safeguarded from any anomalies, be it a virus attack or security holes created by unpatched software or even the OS.

GFI LanGuard is one such software package that promises to provide a consolidated platform from which software, network and security management can be performed remotely on all assets under its umbrella. A review of LanGuard version 2011 was previously published on Firewall.cx by our esteemed colleagues Alan Drury and John Watters. Here are our observations on the latest version, LanGuard 2014 – something we would call a perspective from a fresh pair of eyes.

Installation

The installation phase has been made seamless by GFI, with no major changes from the previous version. Worth noting is that near the end of the installation you will be asked to point to an existing instance of SQL Server, or install one. This might prolong the entire process but, overall, it is a very tidy installation package. Our personal advice is to ensure the server has a decent amount of memory and CPU power to handle LanGuard’s sheer number-crunching needs.

First Look: The Dashboard

Once the installation is complete, LanGuard is ready to roll without the need for any OS restarts or a hardware reboot. For the purpose of this review two computers, one running Windows 7 and the other running Linux Ubuntu, were used. The Dashboard is the first main screen the user will encounter:

review-languard-2014-1Main Screen (Click to enlarge)

LanGuard is able to pick up the machines it needs to monitor from the workgroup it belongs to, and it shows a lot of information at a glance. The Common Tasks section (lower left corner) is very useful for performing repetitive actions such as triggering scans or adding computers. Adding computers can be done by looking into the existing domain, by computer name, or even by IP address. Once LanGuard identifies a computer, and knows more about it from scan results, it allocates it to the correct workgroup under the Entire Network section.

Below is what the Dashboard looked like for a single device or machine:

review-languard-2014-2(Click to enlarge)

The Dashboard has several sub categories, but we’ll talk about them once we finish discussing the Scan option.

Scan Option

The purpose of this option is to perform the management scan of the assets monitored via LanGuard. Once an asset is selected, LanGuard performs various types of scans, called audit operations. Each audit operation produces information under several sections for that device, ranging from hardware type and installed software to ports in use, patch information and more.

The following screenshot displays a scan in progress on such a device:

review-languard-2014-3LanGuard Scan Option (Click to enlarge)

The progress of the Scan is shown at the top. The bottom section, with multiple tabs, lets the user know the various types of audit operations that are being handled. If any errors occur they appear in the Errors tab. This is very useful in terms of finding out if there are any latent issues with any device that might hamper LanGuard’s functions.

The Dashboard – Computers Tab

Once the Scan is complete, the Dashboard becomes more useful in terms of finding information about the devices. The Computers Tab is a list view of all such devices. The following screenshot shows how the various sections can be used to group and order the devices on the list:

review-languard-2014-4LanGuard Computer Tab (Click to enlarge)

Notice that, just above the header named ‘Computer Information’, it asks the user to drag any column header to group the computers by that column. This is a unique feature, and it shows that LanGuard gives control of visibility to the user instead of providing stock views. Additionally, every column header can be used to set filters, meaning the user has multiple viewing options that can be adjusted depending on the need of the hour.

The Dashboard – History Tab

This tab is a historical list view of all actions that have been taken on a given device. Every device’s functional history is shown, based on which computer has been selected in the ‘Entire Network’ section on the left. This is like an audit trail that can be used to track the functional progression of the computer. The following screenshot displays the historical data generated on the Windows 7 desktop used for our testing.

review-languard-2014-5LanGuard History Tab (Click to enlarge)

Information is sectioned in terms of date, and then further sectioned in terms of time stamps. We found the level of reporting to be very useful and easy to read.

The Dashboard – Vulnerabilities

This is perhaps one of the most important tabs under the Dashboard. At one glance you can find out the main weaknesses of the scanned machine. All vulnerabilities are subdivided into types based on their level of criticality. If the user selects a type, the actual list of issues comes up in the right-hand panel.

Now, if the user selects a single vulnerability, a clearer description appears at the bottom. LanGuard not only tells you about the weakness, it also provides valid recommendations on how to deal with it. Here’s a view of our test PC’s weaknesses. Thanks to LanGuard, all of them were resolved!

review-languard-2014-6LanGuard Vulnerabilities Tab (Click to enlarge)

The Dashboard – Patches

Like the Vulnerabilities tab, the Patches tab shows the user the software updates and patches that are lacking on the target machine. Below is a screenshot demonstrating this:

review-languard-2014-7LanGuard Patches Tab (Click to enlarge)

Worth noting is the list of action buttons on the panel at the bottom right corner. The user has the option of acknowledging the patch issue or setting it to ‘ignore’. The ‘Remediate’ option is discussed later in this review.

The Dashboard – Ports Tab

The function of the Ports tab is to display which ports are open on the target machine. They are smartly divided into TCP and UDP ports. When the user selects either of the two divisions, the ports are listed in the right panel. Selecting a port displays the process using that port, along with the process path. From a network management point of view, with network security in mind, this is an excellent feature to have.

review-languard-2014-8LanGuard Ports Tab (Click to enlarge)
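To illustrate the kind of check that underpins a port audit like this, here is a minimal sketch of our own (not LanGuard's mechanism): a plain TCP connect() probe that reports whether a given port accepts connections.

```python
import socket

def tcp_port_open(host: str, port: int, timeout: float = 1.0) -> bool:
    """Return True if a TCP connection to host:port succeeds within timeout."""
    try:
        # create_connection() resolves the host and attempts a full TCP handshake
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        # Connection refused, timed out or unreachable
        return False

# Example: probe a few common service ports on the local machine
for port in (22, 80, 443):
    state = "open" if tcp_port_open("127.0.0.1", port) else "closed/filtered"
    print(f"TCP {port}: {state}")
```

A remote connect() test like this can only tell you whether a port answers; mapping each open port to its owning process and path, as LanGuard does, requires an agent or OS-level query on the target machine itself.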

The Dashboard – Software Tab

This tab is a good representation of how well LanGuard scans the target machine and brings out information about it. Any software installed, along with its version and authorisation status, is listed. An IT manager can use this information to reveal any unauthorised software that might be in use on company machines. This makes absolute sense when it comes to safeguarding company assets from the hazards of pirated software:

review-languard-2014-9LanGuard Software Tab (Click to enlarge)

The Dashboard – Hardware Tab

The main purpose of the Hardware tab is, as its name suggests, to display the hardware components of the machine. The information provided is very detailed and can be very useful in maintaining a framework of similar hardware across the IT infrastructure. LanGuard is very good at obtaining detailed information about a machine and presenting it in a very orderly fashion. Here’s what LanGuard presented in terms of hardware information:

review-languard-2014-10LanGuard Hardware Tab (Click to enlarge)

The Dashboard – System Information

LanGuard also provides user-specific information along with the services and shares on the machine. This shows all the processes and services running on the machine, as well as the various user profiles and the users currently logged on. It can be used to see whether a user is present on a machine and to identify the listed shares as authorised or not; the same can be done for the users that reside on that machine. As always, selecting an item in the System Information list on the right-hand panel displays more details in the bottom panel.

review-languard-2014-11LanGuard System Information Tab (Click to enlarge)

Remediate Option

One of LanGuard’s key options, Remediate, is there to ensure all important patches and upgrades necessary for your machines are delivered as and when required. As mentioned earlier in the Dashboard – Patches section, any upgrade or patch that is missing is listed with a Remediate option. Remediate not only lets the user deploy patches; it also helps deliver bespoke software and malware protection. This is a vital core function, as it ensures both the security and the integrity of the IT infrastructure. A quick look at the main Remediate screen clearly shows its utility:

review-languard-2014-12LanGuard Remediate Main Screen (Click to enlarge)

The level of detail provided and the ease of operation was clearly evident.

Here’s a snapshot of the Software Updates screen. The layout speaks for itself:

review-languard-2014-13LanGuard Deploy Software Updates Screen (Click to enlarge)

Obviously, the user is allowed to pick and choose which updates to deploy and which ones to shelve for the time being.

Activity Monitor Option

This is more of an audit trail of all the actions, whether manually triggered or scheduled, that have been taken by LanGuard. This helps the user to find out if any scan or search has encountered any issues. This gives a bird’s eye view of how well LanGuard is working in the background to ensure the assets are being monitored properly.

The top left panel helps the user to select which audit trail needs to be seen and, based on that, the view dynamically changes to accommodate the relevant information. Here’s what it would look like if one wanted to see the trail of Security Scans:

review-languard-2014-14LanGuard Activity Monitor Option (Click to enlarge)

Reports Option

All the aforementioned information is worth gathering only if it can be presented in a way that supports commercial and technical decisions. This is where LanGuard presents us with a plethora of reporting options. The sheer volume of options was a bit overwhelming, but every report has its own merits. The screenshot below does not even show the bottom of the reports menu; there is plenty more to scroll through:

review-languard-2014-15LanGuard Reports Option (Click to enlarge)

Running the Network Security Report produced a level of presentation that covered every detail without confusing the reader with too much information. Here’s what it looked like:

review-languard-2014-16LanGuard Network Security Report (Click to enlarge)

The graphical report was certainly eye catching.

Configuration Option

Clearly, LanGuard has not shied away from letting users have the power to tweak the software to their best advantage. Users can scan the network for devices and remotely deploy the agents that perform the repeated scheduled scans.

review-languard-2014-17LanGuard Configuration Option (Click to enlarge)

LanGuard was unable to scan the Ubuntu box properly and refused to deploy the agent, in spite of being given the right credentials.

A check on GFI’s website for the minimum supported Linux version showed that our Ubuntu installation was two versions above the requirement. The scan could only recognise it as ‘Probably Unix’, and that’s the most LanGuard managed. We suspect the problem is related to the system's firewall and security settings.

The following message appeared in the Agent dialog box when trying to deploy it on the Linux machine: “Not Supported for this Operating System”.

review-languard-2014-18Minor issues identifying our Linux workstation (Click to enlarge)

Moving on to LanGuard’s latest offering: the ability to manage mobile devices. This is a new addition to LanGuard’s arsenal. It can manage and monitor mobile devices that use a Microsoft Exchange Server for email access, so company smartphones and tablets can be managed using this new tool. Here’s the interface:

review-languard-2014-19LanGuard Managing Mobile Devices (Click to enlarge)

Utilities Option

We call it the Swiss Army Knife of network management. One of our favourite sections, it includes quick and easy ways of checking the network features of any device or IP address. This goes to prove that LanGuard is a very well-thought-out piece of software: not only does it include mission-critical functions, it also provides a day-to-day point of mission control for the IT Manager.

We could not stop ourselves from performing a quick check on the output from the Whois option here:

review-languard-2014-21LanGuard Whois using Utilities (Click to enlarge)

The other options were pretty self-explanatory and of course very handy for a network manager.

Final Verdict

LanGuard provides an impressive set of tools. The process of adding machines, gathering information and then displaying that information is very efficient. The reporting is extremely resourceful and caters to every possible need of an IT Manager. We hope the lack of support for our Linux machine was an isolated incident. LanGuard has grabbed the attention of this reviewer to the point that he is willing to engage his own IT Manager and ask what software his IT department uses.

If it’s not LanGuard, there’s enough evidence here to make a case for this brilliant piece of software. LanGuard is a very good tool and should be part of an IT Manager’s or Administrator’s arsenal when it comes to managing a small to large enterprise IT infrastructure.

  • Hits: 27948

Interview: Kevin Wallace CCIEx2 #7945 (Routing/Switching and Voice) & CCSI (Instructor) #20061

Kevin Wallace is a well-known name in the Cisco industry. Most Cisco engineers and Cisco certification candidates know Kevin from his Cisco Press titles and the popular Video Mentor training series. Today, Firewall.cx has the pleasure of interviewing Kevin to reveal how he became one of the world's most popular CCIEs, which certification roadmap Cisco candidates should choose, which training method is best for your certification, and much more.

Kevin Wallace, CCIEx2 (R/S and Voice) #7945, is a Certified Cisco Systems Instructor (CCSI #20061), and he holds multiple Cisco certifications, including CCNP Voice, CCSP, CCNP, and CCDP, in addition to multiple security and voice specializations. With Cisco experience dating back to 1989 (beginning with a Cisco AGS+ running Cisco IOS 7.x), Kevin has been a network design specialist for the Walt Disney World Resort, a senior technical instructor for SkillSoft/Thomson NETg/KnowledgeNet, and a network manager for Eastern Kentucky University. Kevin holds a Bachelor of Science degree in Electrical Engineering from the University of Kentucky. He lives in central Kentucky with his wife (Vivian) and two daughters (Stacie and Sabrina).

Firewall.cx Interview Questions

Q1. Hello Kevin and thanks for accepting Firewall.cx’s invitation. Can you tell us a bit about yourself, your career and daily routine as a CCIE (Voice) and Certified Cisco Systems Instructor (CCSI)?

Sure. As I was growing up, my father was the central office supervisor at the local GTE (General Telephone) office. So, I grew up in and around a telephone office. In college, I got a degree in Electrical Engineering, focusing on digital communications systems. Right out of college, I went to work for GTE Laboratories where I did testing of all kinds of telephony gear, everything from POTS (Plain Old Telephone Service) phones to payphones, key systems, PBX systems, and central office transmission equipment.

Then I went to work for a local university, thinking that I was going to be their PBX administrator but, to my surprise, they wanted me to build a data network from scratch, designed around a Cisco router. This was about 1989 and the router was a Cisco AGS+ router running Cisco IOS 7.x. And I just fell in love with it. I started doing more and more with Cisco routers and, later, Cisco Catalyst switches.

Also, if you know anything about my family and me you know we’re huge Disney fans and we actually moved from Kentucky to Florida where I was one of five Network Design Specialists for Walt Disney World. They had over 500 Cisco routers (if you count RSMs in Cat 5500s) and thousands of Cisco Catalyst switches. Working in the Magic Kingdom was an amazing experience.

However, due to a family health issue we had to move back to KY where I started teaching classes online for KnowledgeNet (a Cisco Learning Partner). This was in late 2000 and, even though we’ve been through a couple of acquisitions (first Thomson NETg and then Skillsoft), we’re still delivering Cisco authorized training live and online.

Being a Cisco trainer has been a dream job for me because it lets me stay immersed in Cisco technologies all the time. Of course I need, and want, to keep learning. I’m always in pursuit of some new certification. Just last year I earned my second CCIE, in Voice. My first CCIE, in Route/Switch, came way back in 2001.

In addition to teaching live online Cisco courses (mainly focused on voice technologies), I also write books and make videos for Cisco Press and have been for about the last ten years.

So, to answer your question about my daily routine: it’s a juggling act of course delivery and course development projects for Skillsoft and whatever book or video title I’m working on for Cisco Press.

Q2. We would like to hear your personal opinion on Firewall.cx’s technical articles covering Cisco technologies, VPN Security and CallManager Technologies. Would you recommend Firewall.cx to Cisco engineers and certification candidates around the world?

Firewall.cx has an amazing collection of free content. Much of the reference material is among the best I’ve ever seen. As just one example, the Protocol Map Cheat Sheet in the Downloads area is jaw-dropping. So, I would unhesitatingly recommend Firewall.cx to other Cisco professionals.

Q3. As a Cisco CCIE (Voice) and Certified Cisco Systems Instructor (CCSI) with more than 14 years experience, what preparation techniques do you usually recommend to students/engineers who are studying for Cisco certifications?

For me, it all starts with goal setting. What are you trying to achieve and why? If you don’t have a burning desire to achieve a particular certification, it’s too easy to run out of gas along your way.

You should also have a clear plan for how you intend to achieve your goal. “Mind mapping” is a tool that I find really useful for creating a plan. It might, for example, start with a goal to earn your CCNA. That main goal could then be broken down into subgoals such as purchasing a CCNA book from Cisco Press, building a home lab, joining an online study group, etc. Each of those subgoals could then be broken down even further.

Also, since I work for a Cisco Learning Partner (CLP), I’m convinced that attending a live training event is incredibly valuable in certification preparation. However, if a candidate’s budget doesn’t permit that, I recommend self-study using Cisco Press books and resources on Cisco’s website. You’ve also got to “get your hands dirty” working on the gear, so I’m a big fan of constructing a home lab.

When I was preparing for each of my CCIE certifications, I dipped into the family emergency fund in order to purchase the gear I needed to practice on. I was then able to sell the equipment, nearly at the original purchase price, when I finished my CCIE study.

But rather than me rattling on about you should do this and that, let me recommend a super inexpensive book to your readers. It’s a book I wrote on being a success in your Cisco career. It’s called, “Your Route to Cisco Career Success,” and it’s available as a Kindle download (for $2.99) from Amazon.com.

If anyone reading this doesn’t have a Kindle reader or app, the book is also available as a free .PDF from the Products page of my website, 1ExamAMonth.com/products.

Q4. In today’s fast paced technological era, which Cisco certifications do you believe can provide a candidate with the best job opportunities?

I often recommend that certification candidates do a search on a job website, such as dice.com or monster.com, for various Cisco certs to see what certifications are in demand in their geographical area.

However, since Cisco offers certifications in so many different areas, certification candidates can pick an area of focus that’s interesting to them. So, I wouldn’t want someone to pursue a certification path just because they thought there might be more job opportunities in that track if they didn’t have an interest and curiosity about that field.

Before picking a specific specialization, I do recommend that everyone demonstrate that they know routing and switching. So, my advice is to first get your CCNA in Routing and Switching and then get your CCNP. At that point, decide if you want to specialize in a specific technology area such as security or voice, or if you want to go even deeper in the Routing and Switching arena and get your CCIE R/S.

Q5. There is a steady rise on Cisco Voice certifications and especially the CCVP certification. What resources would you recommend to readers who are pursuing their CCVP certification that will help them prepare for their exams?

Interestingly, Cisco has changed the name of the CCVP certification to the CCNP Voice certification, and it’s made up of five exams: CVOICE, CIPT1, CIPT2, TVOICE and CAPPS. Since I teach all of these classes live and online, I think that’s the best preparation strategy. However, it is possible to self-study for those exams. Cisco Press offers comprehensive study guides for the CVOICE, CIPT1 and CIPT2 exams. However, you’ll need to rely on the exam blueprints for the TVOICE and CAPPS exams, where you take each blueprint topic and find a resource (maybe a book, maybe a video, or maybe a document on Cisco’s website) to help you learn that topic.

For hands-on experience, having a home lab is great. However, you could rent rack time from one of the CCIE Voice training providers or purchase a product like my CCNP Voice Video Lab Bundle, which includes over 70 videos of lab walkthroughs for $117.

Q6. What is your opinion on Video based certification training as opposed to text books – Self Study Guides?

Personally, I use, and create, both types of study materials. Books are great for getting deep into the theory and for serving as a real-world reference. However, for me, there's nothing like seeing something actually configured from start to finish and observing the results. When I was preparing for my CCIE Voice lab, I would read about a configuration, but many times I didn't fully understand it until I saw it performed in a training video.

So, to answer your question: instead of recommending one or the other, I recommend both.

We thank Kevin Wallace for his time and interview with Firewall.cx.

 

 


Interview: Vivek Tiwari CCIEx2 #18616 (CCIE Routing and Switching and Service Provider)

Vivek Tiwari holds a Bachelor's degree in Physics, an MBA, and many certifications from multiple vendors, including Cisco's CCIE. With a double CCIE in the R&S and SP tracks under his belt, he mentors and coaches other engineers.

Vivek has been working in the internetworking industry for more than fifteen years, consulting for many Fortune 100 organizations. These include service providers, as well as multinational conglomerate corporations and the public sector. His five-plus years of service with Cisco's Advanced Services have gained him the respect and admiration of colleagues and customers alike.

His experience includes, but is not limited to, network architecture, training, operations, management and customer relations, which has made him a sought-after coach and mentor, as well as a recognized leader.

He is also the author of the following titles:

“Your CCIE Lab Success Strategy the non-Technical guidebook”

“Stratégie pour réussir votre Laboratoire de CCIE”

“Your CCNA Success Strategy Learning by Immersing – Sink or Swim”

“Your CCNA Success Strategy the non-technical guidebook for Routing and Switching”

Q1.  Hello Vivek and thanks for accepting Firewall.cx’s invitation for this interview.   Can you let us know a bit more about your double CCIE area of expertise and how difficult the journey to achieve it was?

I have my CCIE in Routing and Switching and Service Provider technologies. The first CCIE journey was absolutely difficult. I was extremely disappointed when I failed my lab the first time. This is the only exam in my life that I had not passed the first time. However, that failure made me realize that CCIE is difficult but within my reach. I realized the mistakes I was making, persevered and eventually passed Routing and Switching CCIE in about a year’s time.

After the first CCIE I promised myself never to go through this again but my co-author Dean Bahizad convinced me to try a second CCIE and surprisingly it was much easier this time and I passed my Service Provider lab in less than a year’s time.

We have chronicled our story and documented the huge number of lessons learned in our book, Your CCIE Lab Success Strategy the non-technical guidebook. This book has been reviewed by your website and, I am proud to state, has been helping engineers all over the globe.

Q2. As a globally recognised and respected Cisco professional, what do you believe is the true value of Firewall.cx toward its readers?

Firewall.cx is a gem for its readers globally. Every article that I have read to date on Firewall.cx is well thought out and has great, detailed information. The accompanying diagrams are fantastic. The articles hold your attention and are well written; I have always read the full article and have never left it halfway.

The book reviews are also very objective and give you a real feel for each title. Overall this is a great service to the network engineer community.

Thanks for making this happen.

Q3. Could you describe your daily routine as a Cisco double CCIE?

My daily routine as a CCIE depends on the consulting role that I am playing at that time. I will describe a few of them:

Operations: Being in operations, you will always be on the lookout for outages that happened in the last 24 hours or the last week. You find the detailed root cause and suggest improvements. These could range from a change in the design of the network to putting in new processes or more training at the appropriate levels.

Architecture: As an architect you are always looking into the future, trying to interpret the current and future requirements of your customer. Then you have to extrapolate these to make the network future-proof for at least 5 to 7 years. Once that is done, you have to weigh the expected network performance against the budget and see which parts of the network need enhancement and which need to be cut.

This involves lots of meetings and whiteboard sessions.

Mix of the Above: After the network is designed you have to be involved at a pilot site, where you make your design work with selected operations engineers to implement the new network. This ensures knowledge transfer and also proves that the design that looked good on the board works as promised.

All of the above needs documentation, so working with Visio and writing white papers, implementation procedures and training documents are also part of the job. Many engineers don't like this, but it is essential.

Q4. There are thousands of engineers out there working on their CCNA, CCNP and CCVP certifications.  Which certification do you believe presents the biggest challenge to its candidates?

All certifications have their own challenges, and the challenge varies from one individual to another. However, in my mind the CCNA is extremely challenging if it is done the proper way. I say this because most CCNA candidates are new to networking: they not only have to learn new concepts of IP addressing and routing but also have to learn the language of typing all those commands and making them work on a Cisco device.

This multitude of new material makes it very challenging. Candidates are often stuck in a maze, running from one website to another or studying one book and then another without any real results. That is the reason we have provided a GPS for the CCNA: our book “Your CCNA exam Success Strategy the non-technical guidebook”.

I also want to point out that whenever we interview CCNA engineers, many have the certificate but it seems they have not spent the time to learn and understand the technologies.

What they don't understand is that if I am going to depend on them to run my network, which has cost my company millions of dollars, I would want a person with knowledge, not just a certificate.

Q5. What resources do you recommend for CCNA, CCNP, CCVP and CCIE candidates, apart from the well-known self-study books?

Apart from all the books, the other resources to have for sure are:

  1. A good lab. It could be made of real network gear or a simulator, but you should be able to run scenarios on it.
  2. Hands on practice in labs.
  3. Be curious while doing labs and try different options (only on the lab network please)
  4. A positive attitude to learning and continuous improvement.
    a) Write down every week what you have done to improve your skills
    b) Don’t be afraid to ask questions.
  5. Lastly, and most important, have a mentor. Follow the guidelines in our book about choosing a mentor and how to take full advantage of one. Remember, a mentor is not there to spoon-feed you: a mentor is there to make sure you are moving in the right direction and, in case you are stuck, to show you a way out (not to push you out of it). A mentor is a guide, not a chauffeur.

Q6. When looking at the work of other Cisco engineers, e.g network designs, configurations-setup etc, what do you usually search for when trying to identify a knowledgeable and experienced Cisco engineer?

I usually do not look at a design and try to find a flaw in it; I simply make a note of design discrepancies that come to mind. I say that from experience, because what you see as a flaw might be a design requirement. For example, I have seen some companies send all the traffic coming in from the firewall across the data center to a dedicated server farm, where it is analysed and then sent on to the different parts of the company. It is very inefficient and adds delay, but it is by design.

I have seen many differences in QoS policies, even between different groups within the same organization.

If a network design satisfies the legal, statutory and organizational requirements, then it is the best design.

Q7. What advice would you give to our readers who are eager to become No.1 in their professional community? Is studying and obtaining certifications enough or is there more to it?

Studying is important, but more important is to understand and experience the material. Obtaining certifications has become necessary now because it is one of the first ways a candidate can prove to a prospective employer that they have learnt the technologies. If an employer is going to let you work on a network where downtime costs thousands of dollars per minute (think eBay, Amazon, PayPal, a car assembly line) or could even cost lives (think of a hospital network, an emergency call network like 911 in the US, or the OnStar network), then they had better be careful in hiring. I am sure you agree. Certification only gets you in the door for an interview; it is:

  • Your knowledge and understanding
  • Your experience
  • Your attitude towards your work
  • How well you work in teams
  • Which work-related areas interest you (Security, Voice, Wireless, etc.)

that gets you the job and makes you move ahead in your career.

The best way to move ahead and be No. 1 in your career is to do what you are passionate about. If you are pursuing your passion then it is not work anymore and you enjoy doing it and will excel beyond limits.

Another thing I want to tell the readers: don't chase money. Chase excellence in whatever you are doing, and money will be the positive side effect of your excellence.

 


The New GFI EventsManager 2013 - Active Network and Server Monitoring

On the 21st of January 2013, GFI announced a new version of its popular GFI EventsManager, now named GFI EventsManager 2013.

For those who are unaware of the product, GFI EventsManager is one of the most popular software solutions that allows a network administrator, engineer or IT manager to actively monitor a whole IT infrastructure from a single intuitive interface.

Even though GFI EventsManager has been in continuous development, this time GFI has surprised us once again by introducing highly anticipated features that make this product a one-of-a-kind winner.

[Screenshot: GFI EventsManager 2013 features]

Below is a list of some of the new features included in GFI EventsManager 2013 that make this product a must for any company:

  • Active network and server monitoring based on monitoring checks is now available and can function in conjunction with the log-based monitoring system to provide a complete and thorough view of the status of your environment.
  • The unique combination of active network and server monitoring with log-based monitoring provides you not only with incident identification but also with a complete set of logs from the assets that failed, making problem investigation and solving much easier.
  • An enhanced console security system helps with compliance with best-practice recommendations that call for access to data on a “need-to-know” basis. Starting with this version, each GFI EventsManager user can be assigned a subset of computers that he/she manages, and the console will only allow use of the data coming from those computers while the user is logged in.
  • A new schema for parsing XML files, available by default, enables monitoring of XML-based logs and configuration files.
  • A new schema for parsing DHCP text logs enables monitoring of DHCP IP assignment.
  • More flexibility for storing events: the new database system has been updated to include physical deletion of events for easier maintenance and collection to remote databases.
  • Hashing of log data protects against attempts at tampering with logs from outside the product, enabling enhanced log consolidation and security.
  • New reports for J-SOX and NERC CIP compliance.

Interview: Akhil Behl CCIEx2 #19564 (Voice & Security)

It's not every day you get the chance to interview a CCIE, and especially a double CCIE! Today, Firewall.cx interviews Akhil Behl, a double CCIE (Voice & Security) #19564 and author of the popular Cisco Press title ‘Securing Cisco IP Telephony Networks'.

Akhil Behl's Biography

Akhil Behl is a Senior Network Consultant with Cisco Advanced Services, focusing on Cisco Collaboration and Security architectures. He leads Collaboration and Security projects worldwide for Cisco Services and the Collaborative Professional Services (CPS) portfolio for the commercial segment. Prior to his current role, he spent 10 years working in various roles at Linksys, Cisco TAC, and Cisco AS. He holds CCIE (Voice and Security), PMP, ITIL, VMware VCP, and MCP certifications.

He has several research papers published to his credit in international journals including IEEE Xplore.

He is a prolific speaker and has contributed at prominent industry forums such as Interop, Enterprise Connect, Cloud Connect, Cloud Summit, Cisco SecCon, IT Expo, and Cisco Networkers.

Be sure not to miss our review of Akhil's popular Securing Cisco IP Telephony Networks and his outstanding article on Secure CallManager Express Communications - Encrypted VoIP Sessions with SRTP and TLS.

Readers can find outstanding Voice Related Technical Articles in our Cisco VoIP/CCME & CallManager Section.

Interview Questions

Q1. What are the benefits of a pure VoIP against a hybrid system?

Pure VoIP solutions are a recent addition to the overall VoIP portfolio. SIP trunks from service providers are helping make the PSTN world reachable over IP instead of TDM. A pure VoIP system has a number of advantages over a hybrid VoIP system, for example:

  • All media and signaling is purely IP based, and no digital or TDM circuits come into the picture. This in turn implies better interoperability of the various components within and outside the ecosystem.
  • Configuration, troubleshooting, and monitoring of a pure VoIP solution are much more straightforward than with a hybrid system.
  • The security construct of a pure VoIP system is something the provider and consumer can mutually agree upon and deploy. In other words, enterprise security policies can now go beyond the usual frontiers, up to the provider's soft-switch/SBC.

Q2. What are the key benefits/advantages and disadvantages of using Cisco VoIP Telephony System, coupled with its security features?

Cisco's IP Telephony / Unified Communications systems present a world-class VoIP solution to consumers from SMBs to medium and large enterprises, as well as various business verticals such as education, finance, banking, the energy sector, and government agencies. When the discussion is around the security aspect of the Cisco IP Telephony / UC solution, the advantages outweigh the disadvantages because of a multitude of factors:

  • Cisco IP Telephony endpoints and the underlying network gear are capable of providing robust security by means of built-in security features
  • The Cisco IP Telephony portfolio leverages industry-standard cryptography and is compatible with any product based on RFC standards
  • Cisco engineering leaves no stone unturned to ensure that IP Telephony products and applications deliver a feature-rich consumer experience while maintaining a formidable security posture
  • Cisco Advanced Services helps consumers design, deploy, operate, and maintain a secure, stable, and robust Cisco IP Telephony network
  • Cisco IP Telephony and network applications / devices / servers can be configured on demand to enable security to restrain a range of threats

Q3. As an author, please comment on the statement that your book can be used both as a reference and as a guide for security of Cisco IP Telephony implementation.

Over the past 10 years, I have seen people struggling with the lack of a complete text that can act as a reference, a guide, and a companion to help resolve UC security queries pertinent to the design, deployment, operation, and maintenance of a Cisco UC network. I felt there was no complete literature to help one through the various stages of Cisco UC solution development and build, i.e. Plan, Prepare, Design, Implement, Operate, and Optimize (PPDIOO), and thought of putting together all my experience and knowledge in the form of a book where the two realms, Unified Communications and Security, converge. More often than not, people from one realm are not acquainted with the intricacies of the other. This book serves to fill the otherwise prominent void between the UC and Security realms and acts as a guide and a reference text for professionals, engineers, managers, stakeholders, and executives.

Q4. What are today’s biggest security threats when dealing with Cisco Unified Communication installations?

While there is a host of threats out there lurking around your Cisco UC solution, the most prominent ones are as follows:

  • Toll-Fraud
  • Eavesdropping
  • Session/Call hijacking
  • Impersonation or identity-theft
  • DoS and DDoS attacks
  • Poor or absent security guidelines or policy
  • Lack of training or education at user level on their responsibility towards corporate assets such as UC services

As you can see, not every threat is a technical threat; there are threats pertinent to human as well as organizational factors. More often than not, the focus is only on technical threats, while organizations and decision makers should pay attention to the other (non-technical) factors as well, without which a well-rounded security construct is difficult to achieve.

Q5. When implementing SIP Trunks on CUCM/CUBE or CUCME, what steps should be taken to ensure Toll-Fraud is prevented?

An interesting question, since toll fraud is a chronic issue. With the advent of SIP trunks for PSTN access, the threat surface has evolved and a host of new threats come into the picture. While most of these threats can be mitigated at the call-control and Session Border Controller (CUBE) level, improper configuration of call restrictions and privileges, as well as a poorly implemented security construct, can eventually lead to toll fraud. To prevent toll fraud on SIP trunks, the following suggestions can be helpful:

  • Ensure that users are assigned the right calling search space (CSS) and partitions (in the case of CUCM) or Class of Restriction (COR, in the case of CUCME) at line/device level to have granular control of who can dial what
  • Implement after-hours restrictions on CUCM and CUCME
  • Disable PSTN or out-dial from Cisco Unity, Unity Connection, and CUE, or at least restrict it to desirable local/national destination(s) as per the organization's policies
  • Implement strong PIN/password policies to ensure user accounts cannot be compromised by brute-force or dictionary-based attacks
  • For softphones such as Cisco IP Communicator, try to use extension mobility, which gives an additional layer of security by enabling the user to dial international numbers only when logged in to the right profile with the right credentials
  • Disable PSTN-to-PSTN tromboning of calls if not required, or as per organizational policies
  • Where possible, enable secure SIP trunks and SIP authorization for trunk registration with the provider
  • Implement COR where possible at SRST gateways to discourage toll fraud during an SRST event
  • Monitor usage of the enterprise UC solution with call billing and reporting software (e.g. CAR) on an ongoing basis to detect specific patterns or abnormal usage
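
To make the after-hours and COR suggestions above more concrete, here is a minimal CUCME configuration sketch. The dial patterns, schedule, extension and port numbers are illustrative assumptions, not a recommended policy; adapt them to your own dial plan:

```
! Block premium-rate / international patterns outside business hours
telephony-service
 after-hours block pattern 1 91900
 after-hours block pattern 2 9011
 after-hours day Mon 19:00 07:00
 after-hours day Tue 19:00 07:00
!
! Class of Restriction: only tagged lines may reach the international dial-peer
dial-peer cor custom
 name international
!
dial-peer cor list intl-call
 member international
!
dial-peer voice 100 pots
 corlist outgoing intl-call
 destination-pattern 9011T
 port 0/1/0:23
!
ephone-dn 10
 number 2001
 corlist incoming intl-call
```

On CUCM the equivalent control is achieved with partitions and calling search spaces configured through the administration interface rather than the IOS CLI.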

Q6. A common implementation of Cisco IP Telephony is to install the VoIP telephony network on a separate VLAN – the voice VLAN, which has restricted access through access lists applied on a central layer-3 switch. Is this common practice adequate to provide a basic level of security?

Well, I wouldn't just filter the traffic at Layer 3 with access lists or just do VLAN segregation at Layer 2, but would also enable security features such as:

  • Port security
  • DHCP snooping
  • Dynamic ARP Inspection (DAI)
  • 802.1x
  • Trusted Relay Point (TRP)
  • Firewall zoning

and so on, throughout the network, to ensure that only legitimate endpoints in the voice VLAN (whether hard phones or softphones) can get access to the enterprise network and its resources. While most of the aforementioned features can be enabled without any additional cost, it's important to understand the impact of enabling them in a production network, and to ensure that they are in line with the corporate/IP Telephony security policy of the enterprise.
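
As a rough Catalyst IOS sketch of the first three features in the list above (the interface numbers and VLAN IDs are illustrative assumptions; always verify the impact on a production network before enabling these features):

```
! Port security: limit each access port to a few MAC addresses
! (phone plus the PC daisy-chained behind it)
interface FastEthernet0/1
 switchport mode access
 switchport access vlan 10
 switchport voice vlan 110
 switchport port-security
 switchport port-security maximum 3
 switchport port-security violation restrict
!
! DHCP snooping: only the uplink is trusted to carry DHCP server replies
ip dhcp snooping
ip dhcp snooping vlan 10,110
interface GigabitEthernet0/1
 ip dhcp snooping trust
!
! Dynamic ARP Inspection builds on the DHCP snooping binding table
ip arp inspection vlan 10,110
interface GigabitEthernet0/1
 ip arp inspection trust
```

802.1x, Trusted Relay Point and firewall zoning involve additional components (a RADIUS server, call-control configuration, and firewall policy respectively) and are omitted from this sketch.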

Q7. If you were asked to examine a customer's VoIP network for security issues, what would be the order in which you would perform your security checks? Assume Cisco Unified Communications Manager Express with IP telephones (wired & wireless), running on Cisco Catalyst switches with multiple VLANs (data, voice, guest network etc.) and Cisco Aironet access points with a WLC controller. Firewalls and routers exist, along with remote VPN teleworkers.

My first step towards assessing the security of the customer's voice network would be to ask them about any recent or noted security incidents, as this will help me understand where and how an incident could have happened and what key security breaches or threats I should be looking at, apart from the overall assessment.

I would then start with the customer's security policy, which can be a corporate security policy or an IP Telephony-specific security policy, to understand how they position the security of enterprise/SMB communications in line with their business processes. This is extremely important because, without proper information on what their business processes are and how security aligns with them, I cannot advise them to implement the right security controls at the right places in the network. This also ensures that the customer's business as usual is not interrupted when security is applied to the call control, endpoints, switching infrastructure, wireless infrastructure, routing infrastructure, firewalls, and telecommuter connections.

Once I have enough information about the customer's network and security policy, I will start by inspecting the configuration of the access switches, moving on to distribution, core, and data center access. I will look at the WLC and WAP configurations next, followed by the IOS router and firewall configuration.

Once done at the network level, I will continue the data collection and analysis at the CUCME end. This will be followed by an analysis of the endpoints (wired and wireless) as well as the softphones used by telecommuters.

At this point, I should have enough information to conduct a security assessment and provide a report/feedback to the customer and engage with the customer in a discussion about the opportunities for improvement in their security posture and construct to defend against the threats and security risks pertinent to their line of business.

Q8. At Firewall.cx, we are eagerly looking forward to our liaison with you, as a CCIE and as an expert on Cisco IP Telephony. To all our readers and members, what would be your message for all those who want to trace your footsteps towards a career in Cisco IP Telephony?

I started in the IT industry almost a decade ago with Linksys support (a division of Cisco Systems). Then I worked with Cisco TAC for a couple of years in the security and AVVID teams, which gave me a real view and feel of things from both the security and telephony domains. After Cisco TAC I joined the Cisco Advanced Services (AS) team, where I was responsible for Cisco's UC and security portfolio for customer-facing projects, and from there on I managed a team of consultants. Along the way I did the CCNA, CCVP, CCSP, CCDP, and many other Cisco specialist certifications to enhance my knowledge, and worked towards my first CCIE, in Voice, and my second, in Security. I am a co-lead of the Cisco AS UC Security Tiger Team and have been working on a ton of UC security projects, consulting assignments, workshops, knowledge transfer sessions, and so on.

It was almost two years ago that I decided to write a book on the very subject of my interest, that is, UC/IP Telephony security. As I mentioned earlier in this interview, I felt there was a dire need for a title which could bridge the otherwise prominent gap between the UC and Security domains.

My advice to anyone who wishes to make a career in the Cisco IP Telephony domain is: ensure your basics are strong, as products may change and morph forms, but the basics will always remain the same. Always be honest with yourself and do what it takes to complete your work and assignments, keeping in mind the balance between your professional and personal life. Lastly, do self-training or get training from Cisco/partners on new products and services to ensure you keep up with the trends and changes in Cisco's collaboration portfolio.


Software Review: Colasoft Capsa 7 Enterprise Network Analyzer

Reviewer: Arani Mukherjee

Colasoft Capsa 7.2.1 Network Analyser was reviewed by Firewall.cx a bit more than a year ago. In a year, Colasoft has managed to bring out the latest version of the Analyser software, version 7.6.1.

As a packet analyser, Colasoft Capsa Enterprise has already collected many accolades from users and businesses, so I will refrain from turning this latest review into a comparison between the two versions. Since Colasoft has made the effort to give us a new version of a well-established piece of software, it's only fair that I review the latest software in its own light. This only goes to prove that the new release is not just an upgraded version of the old one, but a heavyweight analyser in its own right.

[Screenshot: Capsa Enterprise v7.1 review]

As an effective packet analyser, Capsa performs various functions: detecting network issues, intrusion and misuse; isolating network problems; monitoring bandwidth usage and data in motion; supporting endpoint security; and serving as a day-to-day primary data source for network monitoring and management. Capsa is one of the best-known packet analysers available today, and the reasons it occupies such an enviable position in the networking world are its simplicity of deployment, usage, and data representation. Let's now put Capsa under the magnifying glass to better understand why it's one of the best you can get.

[Screenshot: Colasoft Capsa Enterprise traffic chart]

Installing Colasoft Capsa Enterprise

I mentioned before that I will not use this as an opportunity for comparison between the two versions. However, I must admit, Capsa has retained all the merits displayed in the older version. This is a welcome change, as I have often seen newer versions of software suddenly abandon certain features just after all the users have got used to them. In light of that, the first notable thing is the ease of installation of the software. It was painless, from downloading the full version or the demo copy until putting in the license key information and activating it online. There are other ways of activating it, but as a network manager, why would someone install a packet analyser on a machine which does not have any network connection?

It takes 5-7 minutes to get the software up and running to a point where you can start collecting data about your network. It carries all the hallmarks of a seamless, easy installation and deployment; for all of us, one less thing to worry about. Bearing in mind that some of you might find an ad hoc review of this software in our earlier review of Colasoft's nChronos Server, I will try not to repeat myself.

Using Capsa Enterprise

You will be greeted with an uncluttered, well-designed front screen, as displayed below.

The default view is the first tab, called Dashboard. Once you have selected which adapter you want to monitor (and you can have several sessions, based on what you do), you hit the 'Start' button to start collecting data. The Dashboard then starts filling with data as it is gathered. The next screenshot shows what your dashboard will end up looking like:

[Screenshot: main console traffic analyzer]

Every tab in this software displays data based on what you want to see. In the Node Explorer on the left you can select either a full analysis or a particular analysis based on protocol, physical nodes or IP nodes.

The Total Traffic Graph is a live progressing chart which can update its display as fast as every second, or as slowly as every hour. If you don't fancy the progressing line graph, you can ponder the bar chart at the bottom. For your benefit, you can pause the live flow of the graph by right-clicking and selecting 'Pause Refresh', as shown below:

[Screenshot: Capsa Enterprise main interface]

The toolbar at the top deserves particular mention because of the features it provides. My favourite was obviously the Utilisation and PPS meters. I forced a download from an FTP site and captured how the needles reacted. Also note the traffic chart, which captured bytes per second. The needle position updated every second:

[Screenshot: Colasoft Capsa traffic meters]

The Summary tab provides the user with a full statistical analysis of the network traffic. The separate sections are self-explanatory and provide in-depth metadata.

The Diagnosis tab is of particular interest. It gives a full-range view of what's happening to the data in the network in terms of issues encountered:

[Screenshot: Capsa Enterprise protocol diagnosis]

The diagnosis is separated in terms of the actual layers, severity and event description. This I found very useful when assessing the health of my network.

The Protocol tab gave me a ringside view of which protocols were topping the list and what was responsible for each chunk of data flowing through the network. I found it useful when I wanted to find out who had been downloading too much using FTP, or who had set up a simultaneous ping test of a node.

The Physical and IP Endpoints tabs showed data conversations happening between the various nodes in my network. I actually used this feature to isolate two nodes which were responsible for a sizeable chunk of the network traffic within a LAN. This is a feature I'm sure network managers will find useful.

The Physical, IP, TCP, and UDP Conversations tabs are purely an expanded form of the info provided at the bottom of the previous two tabs.

My favourite tab was the Matrix, not just because of the name but because of what it displayed. Every data transfer and its corresponding links were mapped based on IP nodes and physical nodes. You also have the luxury of seeing only the top 100 in each of these categories. Here's a screenshot of my network in full bloom, the top 100 physical conversations:

[Screenshot: Colasoft Capsa matrix analysis]

The best display for me was when I selected Top 100 IPv4 Conversations and hovered the mouse over one particular conversation. Not only did Capsa tell me how many peers it was conversing with, it also showed me how many packets were received and sent:

[Screenshot: IPv4 conversation detail]

Further on, the Packet tab is quite self-explanatory. It shows every packet spliced into its various protocol- and encapsulation-based components. This is one bit that definitely makes me feel like a crime scene investigator, a feeling I also had while reviewing nChronos. I also sensed that this helps in understanding how a packet is built and transferred across a network. Here's a screenshot of one such packet:

capsa enterprise packet view

As shown above, the level of detail is exhaustive. I wish I’d had this tool when I was learning about packets and their structure. This would have made my learning experience a bit more pleasurable.
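To give a flavour of what splicing a packet into its encapsulation layers involves, here is a minimal Python sketch that peels the outermost (Ethernet) layer off a raw frame; analyzers like Capsa continue the same process down through IP, TCP and the application layer:

```python
import struct

def parse_ethernet(frame: bytes):
    """Split a raw Ethernet frame into its header fields and payload.

    The first 14 bytes are destination MAC, source MAC and EtherType;
    everything after that is the encapsulated payload (e.g. an IP packet).
    """
    dst, src, ethertype = struct.unpack("!6s6sH", frame[:14])
    def fmt(mac):
        return ":".join(f"{b:02x}" for b in mac)
    return {"dst": fmt(dst), "src": fmt(src),
            "type": hex(ethertype), "payload": frame[14:]}

# A broadcast frame carrying the start of an IPv4 packet (EtherType 0x0800)
frame = bytes.fromhex("ffffffffffff" "001122334455" "0800") + b"\x45\x00"
info = parse_ethernet(frame)
print(info["src"], info["type"])  # 00:11:22:33:44:55 0x800
```

The payload returned here would be handed to an IPv4 parser, and so on down the stack, which is exactly the layered breakdown the Packet tab displays.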

All of this is just under the Analysis section. Under the Tools section, you will find very useful applications like Ping and the MAC Scanner. For me, the MAC Scanner was very useful because I could take a snapshot of all MAC addresses and then compare them against any changes at a later date. This is useful if an address changes without your knowledge; it could be anything from a network card change to a new node being added without you knowing.
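The snapshot-and-compare workflow the MAC Scanner enables can be sketched in a few lines of Python; the snapshot format (a dict of IP to MAC) is an assumption for illustration, not Capsa's actual export format:

```python
def diff_mac_snapshots(before, after):
    """Compare two MAC-address snapshots and report what changed.

    Each snapshot is assumed to be a dict mapping IP (or hostname)
    to the MAC address last seen at that node.
    """
    added   = {ip for ip in after if ip not in before}
    removed = {ip for ip in before if ip not in after}
    changed = {ip for ip in before.keys() & after.keys()
               if before[ip] != after[ip]}
    return added, removed, changed

jan = {"10.0.0.1": "aa:bb:cc:00:00:01", "10.0.0.2": "aa:bb:cc:00:00:02"}
feb = {"10.0.0.1": "aa:bb:cc:00:00:99", "10.0.0.3": "aa:bb:cc:00:00:03"}
print(diff_mac_snapshots(jan, feb))
# ({'10.0.0.3'}, {'10.0.0.2'}, {'10.0.0.1'})
```

A changed MAC on a known IP is exactly the kind of silent hardware swap the reviewer describes wanting to catch.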

I was pleasantly surprised by the level of flexibility of this software when it came to how you wish to see the data. There is the option to have your own charts, add filters against protocols to ignore data that is not important, and create alarm conditions which will notify you if a threshold is met or exceeded. A key feature for me was being able to store packet data and play it back later using the Packet Player, another nice tool in the Tools section. This historical lookup facility is essential for any comparison that needs to be performed after a network issue has been dealt with.
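Alarm conditions of this kind boil down to comparing the latest readings against configured limits. A hedged sketch, with made-up metric names and thresholds:

```python
def check_alarms(samples, thresholds):
    """Return the metrics whose latest sample breaks its threshold.

    `samples` maps a metric name to its most recent reading and
    `thresholds` maps the same names to the maximum allowed value;
    both structures are illustrative assumptions, not Capsa's config.
    """
    return [name for name, value in samples.items()
            if name in thresholds and value >= thresholds[name]]

samples = {"bandwidth_mbps": 950, "broadcast_pps": 120}
thresholds = {"bandwidth_mbps": 900, "broadcast_pps": 500}
print(check_alarms(samples, thresholds))  # ['bandwidth_mbps']
```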

Summary

I have worked with several packet and network analysers and I have to admit Capsa Enterprise captures data and displays it in the best way I have seen. My previous experiences were marred by features that were absent, didn't work or didn't deliver the expected outcome. Colasoft has done a brilliant job of delivering Capsa, which meets all my expectations. This software is not only helpful for network managers but also for students of computer networking. I definitely would have benefitted from Capsa had I known about it back then, but I do now. This tool puts network managers more in control of their networks and gives them that much-needed edge for data interpretation. I would tag it with a 'Highly Recommended' logo.

 

  • Hits: 30411

Cloud-based Network Monitoring: The New Paradigm - GFI Free eBook

review-gfi-first-aid-kit-1

GFI has once again managed to make a difference: they recently published a free eBook named "Cloud-based network monitoring: The new paradigm" as part of their GFI Cloud offerings.

IT managers face numerous challenges when deploying and managing applications across their network infrastructure. Cloud computing and cloud-based services are the way forward.

This 28-page eBook covers a number of important key topics, including:

  • Traditional Network Management
  • Cloud-based Network Monitoring: The new Paradigm
  • Big Challenges for Small Businesses
  • A Stronger Defense
  • How to Plan Ahead
  • Overcoming SMB Pain Points
  • The Best Tools for SMBs
  • ...and much more!

This eBook is no longer offered by the vendor. Please visit our Security Article section to gain access to similar articles.

  • Hits: 14237

GFI Network Server Monitor Online Review - Road Test

Reviewer: Alan Drury

review-100-percent-badge

There's a lot of talk about 'the cloud' these days, so we were intrigued when we were asked to review GFI's new Cloud offering. Cloud-based solutions have the potential to revolutionise the way we work and make our lives easier, but can reality live up to the hype? Is the future as cloudy as the pundits say? Read on and find out.

What is GFI Cloud?

GFI Cloud is a new service from GFI that provides anti-virus (VIPRE) and workstation/server condition monitoring (Network Server Monitor Online) via the internet. Basically you sign up for GFI Cloud, buy licenses for the services you want and then deploy them to your internet-connected machines no matter where they are. Once that’s done, as long as you have a PC with a web browser you can monitor and control them from anywhere.

In this review we looked at GFI Network Server Monitor Online, but obviously to do that we had to sign up for GFI Cloud first.

Installation of GFI Network Server Monitor Online

Installation is quick and easy; so easy in fact that there’s no good reason for not giving this product a try. The whole installation, from signing up for our free 30-day trial to monitoring our first PC, took barely ten minutes.

To get started, simply follow the link from the GFI Cloud product page and fill in your details:

gfi-network-server-monitor-cloud-1

Next choose the service you’re interested in. We chose Network Server Monitor Online:

gfi-network-server-monitor-cloud-2

Then, after accepting the license agreement, you download and run the installer and that’s pretty much it:

gfi-network-server-monitor-cloud-3

Your selected GFI Cloud products are then automatically monitoring your first machine – how cool is that?

Below is a screenshot of the GFI Cloud desktop. The buttons down the left-hand side and the menu bar across the top let you view the output from either Server Monitor or VIPRE antivirus or, as shown here, you can have a status overview of your whole estate.

gfi-network-server-monitor-cloud-4

We’ve only got one machine set up here but we did add more, and a really useful touch is that machines with problems always float to the top so you need never be afraid of missing something. There’s a handy Filters box through which you can narrow down your view if required. You can add more machines and vary the services running on them, but we’ll come to that later. First let’s have a closer look at Network Server Monitor Online.

How Does It Work?

Network Server Monitor Online uses the GFI Cloud agent installed on each machine to run a series of health checks and report the results. The checks are automatically selected based on the type of machine and its OS. Here’s just a sample of those it applied to our tired XP laptop:

As well as the basics like free space on each of the volumes there’s a set of comprehensive checks to make sure the essential Windows services are running, checks for nasties being reported in the event logs and even a watch on the SMART status of the hard disk.
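The free-space check, for instance, is simple to picture. This Python sketch flags a volume running low on space; the 10% threshold is an illustrative value, not the product's actual rule:

```python
import shutil

def disk_space_check(path="/", min_free_ratio=0.10):
    """Flag a volume whose free space falls below a threshold.

    Returns a status string and the percentage of free space.
    """
    usage = shutil.disk_usage(path)
    free_ratio = usage.free / usage.total
    status = "OK" if free_ratio >= min_free_ratio else "ALERT"
    return status, round(free_ratio * 100, 1)

status, pct_free = disk_space_check("/")
print(f"{status}: {pct_free}% free")
```

An agent-based monitor runs dozens of checks like this on a schedule and reports only the results upstream, which is why the footprint on each machine stays so small.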

If these aren’t enough you can add your own similar checks and, usefully, a backup check:

gfi-network-server-monitor-cloud-6

This really is nice – the product supports lots of mainstream backup suites and will integrate with the software to check for successful completion of whatever backup regime you’ve set up. If you’re monitoring a server then that onerous daily backup check is instantly a thing of the past.

As well as reporting into the GFI Cloud desktop each check can email you or, if you add your number to your cloud profile, send you an SMS text alert. So now you can relax on your sun lounger and sip your beer safe in the knowledge that if your phone’s quiet then all is well back at the office.

Adding More Machines To GFI Network Server Monitor Online

gfi-network-server-monitor-cloud-7

Adding more machines is a two-step process. First you need to download the agent installer and run it on the machine in question. There's no need to log in; it knows who you are, so you can do a silent push installation and everything will be fine. GFI Cloud can also create a group policy installer for installation on multiple workstations and servers. On our XP machine the agent took only 11k of RAM and there was no noticeable performance impact on any of the machines we tested.

Once the agent’s running the second step is to select the cloud service(s) you want to apply:

gfi-network-server-monitor-cloud-8

When you sign up for GFI cloud you purchase a pool of licenses and applying one to a machine is as simple as ticking a box and almost as quick – our chosen product was up and running on the target machine in less than a minute.

This approach gives you amazing flexibility. You can add services to and remove them from your machines whenever you like, making sure that every one of your purchased licenses is working for you. It’s also scalable – you choose how many licenses to buy so you can start small and add more as you grow. Taking the license off a machine doesn’t remove it from GFI Cloud (it just stops the service) so you can easily put it back again, and if a machine is ever lost or scrapped you can retrieve its licenses and use them somewhere else. Quite simply, you’re in control.

Other Features

Officially this review is about Network Server Monitor Online, but by adding a machine into GFI Cloud you also get a comprehensive hardware and software audit. This is quite useful in itself but when coupled with Network Server Monitor Online it tells you almost everything you need to know:

gfi-network-server-monitor-cloud-9

On top of this you can reboot machines remotely and see at a glance which machines have been shut down or, more ominously, are supposed to be up but aren’t talking to the cloud.

The whole thing is very easy to use but should you need it the documentation is excellent and you can even download a free e-book to help you on your way.

In Conclusion

What GFI has done here is simply brilliant. For a price that even the smallest organisation can afford you get the kind of monitoring, auditing and alerting that you know you need but think you don’t have the budget for. Because it’s cloud-based it’s also a godsend for those with numerous locations or lots of home-workers and road warriors. The low up-front cost and the flexible, scalable, pay-as-you-go licensing should please even the most hard-bitten financial director. And because it’s so easy to use it can sit there working for you in the background while you get on with other things.

Could it be improved? Yes, but even as it stands this is a solid product that brings reliable and useful monitoring, auditing and alerting within the reach of those who can’t justify the expense of dedicated servers and costly software. GFI is on a winner here, and for that reason we’re giving GFI Cloud and GFI Network Server Monitor Online the coveted Firewall.cx ten-out-of-ten award.

  • Hits: 17933

Colasoft: nChronos v3 Server and Console Review

Reviewer: Arani Mukherjee

review-100-percent-badge

nChronos, a product of Colasoft, is one of the cutting-edge packet/network analysers the market has to offer today. What Colasoft has promised us through their creation is end-to-end, round-the-clock packet analysis coupled with historical network analysis. nChronos provides an enterprise network management platform which enables users to troubleshoot, diagnose and address network security and performance issues. It also allows retrospective network analysis and, as stated by Colasoft, will "provide forensic analysis and mitigate security risks". Predictably, it is a must-have for anyone involved with network management and security.

Packet analysis has been at the forefront for a while, serving purposes such as network analysis, detecting network intrusion and misuse, isolating exploited systems, monitoring network and bandwidth usage, checking endpoint security status, verifying adds, moves and changes, and various other needs. There are quite a few players in this field and, for me, it does boil down to some key unique selling points. I will lay out the assessment using criteria like ease of installation, ease of use and unique selling points and, based on all of the aforementioned, how it stacks up against the competition.

Ease of Installation - nChronos Installation

The installation instructions for both the nChronos Server and Console are straightforward: you install the server first, followed by the console. Setting up the server was easy enough. The only snag I encountered was when I tried to log onto the server for the first time. The shortcut created by default runs the web interface using the default web browser; however, it calls 'localhost' as the primary link for the server, which brings up the default web page of the physical server on which nChronos Server was installed. I was a bit confused when the home page of my web server came up instead of what I was expecting, but one look into the online help files and the reference on this topic said to try 'localhost:81' and, if that doesn't work, 'localhost:82'. The first option worked straight away, so I promptly changed the nChronos Server shortcut to point to 'localhost:81'. Voilà, all was good. The rest of the configuration was extremely smooth, and the run of events followed exactly what was said in the instruction manual. For some reason, at the end of the process the nChronos server is meant to restart. If you receive an error message along the lines of the server not being able to restart, it's possibly a glitch; the server restarted just fine, as I found out later. I went ahead and tried the various installation scenarios mentioned and all of them worked just as well.

Once the server was up and running, I proceeded to install the nChronos Console, which was also straightforward. It worked the first time, every time. With the least effort I was able to link up the console with the server and start checking out the console features. And yes, don’t forget to turn the monitoring on for the network interfaces you need to manage. You can do that either from the server or from the console itself. So all in all, the installation process passed with some high grades.

Ease Of Use

Just before starting to use the software I was getting a bit apprehensive about what I needed to include in this section. First I thought I would go through the explanation of how the software works and elaborate on the technologies used to render the functionalities provided. But then it occurred to me that it would be redundant for me to expand on all of that because this is specialist software. The users of this type of software are already aware of what happens in the background and are well versed with the technicalities of the features. I decided to concentrate on how effectively this software helps me perform the role of network management, packet tracing and attending to issues related to network security.

The layout of the nChronos Server is very simple and I totally agree with Colasoft’s approach of a no nonsense interface. You could have bells and whistles added but they would only enhance the cosmetic aspect of the software, adding little or nothing to its function.

colasoft nchronos server administration

The screenshot above gives you an idea of what the Server Administration page looks like; it is the first page to open once the user has logged in. This is the System Information page. On the left pane you will find several other pages, i.e. Basic Settings (which displays default port info and HDD info of the host machine), User Account (the name says it all), and Audit Log (which shows the audit trail of user activity).

The interesting page to look at is Network Link. This is where the actual interfaces to be monitored are added. The screenshot below shows this page:

colasoft nchronos network link

Obviously, for the purpose of this review, the only NIC registered on the server was the NIC of my own machine. This is the page from where you can start monitoring the various network interfaces all over your network. Packet data for a NIC will not be captured until you have clicked the 'Start' button for that specific NIC. So don't go blaming the car for not starting when you haven't even turned the ignition key!

All in all, it’s simple and it’s effective as it gives you less chances of making any errors.

Now that the server is all up and running we use the nChronos Console to peer into the data that it is capturing:

colasoft nchronos network console

The above screenshot shows the console interface. For the sake of simplicity I have labelled three separate zones: 1, 2 and 3. When the user logs in for the first time, he/she has to select the interface to be examined from zone 2 and click on the 'Open' button. That then shows all the details about that interface in zones 1 and 3. Notice in zone 1 there is a strip of buttons, one of which is the auto-scroll feature. I loved this feature as it helps you see the traffic as it passes through. To see a more detailed data analysis you simply click, drag and release the mouse button to select a time frame. This unleashes a spectrum of relevant information in zone 3. Each tab displays the packets captured through a category window, e.g. the Application tab shows the types of application protocols that have been used in that time frame, i.e. HTTP, POP, etc.

One of the best features I found was the ability to parse each line of data under any tab by just double clicking on it. So if I double clicked the line on the Application tab that says HTTP, it would drill down to IP Address. I could keep on drilling down and it would traverse from HTTP → IP Address → IP Conversation → TCP Conversation. I can jump to any specific drill-down state by right clicking on the application protocol itself and making a choice from the right-click menu. This is a very useful feature. For the more curious, the little spikes in traffic in zone 1 were my mail application checking for new mail every 5 seconds.
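The drill-down behaviour amounts to nested grouping of the captured flow records. A rough Python sketch, with the record format assumed for illustration:

```python
from collections import defaultdict

def drill_down(records):
    """Group flow records: protocol -> source IP -> (src, dst) conversation.

    Each record is assumed to be a (protocol, src, dst, bytes) tuple;
    the nesting mirrors a drill-down from protocol to address to
    individual conversation.
    """
    tree = defaultdict(lambda: defaultdict(lambda: defaultdict(int)))
    for proto, src, dst, nbytes in records:
        tree[proto][src][(src, dst)] += nbytes
    return tree

records = [
    ("HTTP", "10.0.0.5", "93.184.216.34", 4200),
    ("HTTP", "10.0.0.5", "93.184.216.34", 800),
    ("POP",  "10.0.0.5", "10.0.0.9", 300),
]
tree = drill_down(records)
print(tree["HTTP"]["10.0.0.5"][("10.0.0.5", "93.184.216.34")])  # 5000
```

Each double click in the console effectively descends one level of a structure like this.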

The magic happens when you right click on any line of data and select ‘Analyse Packet’. This invokes the nChronos Analyzer:

colasoft nchronos packet analyzer

The above screenshot shows what the Analyzer looks like by default. This was by far my favourite tool. The way the information about the packets was presented was just beyond belief. This is one example of Colasoft's many strengths: combining flamboyance with function. The list of tabs along the top will give you an idea of just how many ways the Analyzer can show you the data you want to see. Some of my favourites were the following:

Protocol

colasoft nchronos analysis

This is a screenshot of the Protocol tab. I was impressed by the number of column headers used to show detailed information about the packets. The tree-like expanded view of protocols under particular data units, based on the layers involved, was useful.

Another one of my favourite tabs was the Matrix:

colasoft nchronos network matrix

The utility of this tab is to show the top 100 end-to-end conversations, which can be IP conversations, physical conversations etc. If you double click any of the lines denoting a conversation, it opens up the actual data exchange between the nodes. This is very important for a network manager who needs to decipher exactly what communication was taking place between two nodes, be it physical or IP, at a given point in time. It can be helpful in terms of checking network abuse, intrusions etc.

This brings me to my most favourite tab of all, the Packet tab. It shows you the end-to-end data being exchanged between any two interfaces and exactly what that data contained. I know that the primary function of most packet analyzers is to do just that, but I like Colasoft's treatment of this functionality:

colasoft nchronos packet analysis

I took the liberty of breaking up the screen into three zones to show how easy it was to delve into any packet. In zone 1, you select exactly which interchange of data between the nodes concerned you want to splice. Once you have done that, zone 2 shows the packet structure in terms of the different network layers, i.e. data link layer, network layer, transport layer, application layer etc. Then zone 3 shows you the actual data that was encapsulated inside that specific packet. This is by far the most lucid and practical approach I have seen from any packet analyzer software when showing encapsulated data within packets. I kid you not, I have seen many packet analyzers and Colasoft trumps the lot.

Summary

Colasoft's unique selling points will always remain simplicity, careful positioning of features to facilitate easy access for users, presentation of data in a non-messy way for maximum usability and, especially for me, making me feel like a Crime Scene Investigator of networks, like you might see on CSI: Las Vegas (apologies to anyone who hasn't seen the CSI series).

Network security has become of paramount importance in our daily lives as more and more civil, military and scientific work and facilities become dependent on networks. For a network administrator it is important not only to restore normal network operations as soon as possible, but also to go back and investigate successfully why an event capable of crippling a network might have happened in the first place. This is also applicable in terms of preventing such a disruptive event.

Colasoft's nChronos Server and Console, coupled with the Analyzer, is an assorted bundle of efficient software which helps to perform all the functions required to preserve network integrity and security. It is easy to set up and maintain, requires minimum intervention when it's working and delivers vast amounts of important information in the easiest manner possible. This software bundle is a must-have for any organisation which, for all the right reasons, values its network infrastructure highly and wants to preserve its integrity and security.

  • Hits: 19495

GFI WebMonitor 2012 Internet Web Proxy Review

Review by Alan Drury and John Watters

review-badge-98

The Internet connection is vital for many small, medium and large-sized enterprises, but it can also be one of the biggest headaches. How can you know who is doing what? How can you enforce a usage policy? And how can you protect your organisation against internet-borne threats? Larger companies tend to have sophisticated firewalls and border protection devices, but how do you protect yourself when your budget won't run to such hardware? This is precisely the niche GFI has addressed with GFI WebMonitor.

How Does GFI WebMonitor 2012 Work?

Before we get into the review proper it’s worth taking a few moments to understand how it works. GFI WebMonitor installs onto one of your servers and sets itself up there as an internet proxy. You then point all your browsers to the internet via that proxy and voilà – instant monitoring and control.
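The client side of that arrangement, pointing software at a single proxy choke point, can be sketched with Python's standard library (the proxy host and port here are made up):

```python
import urllib.request

# Point a client at the monitoring proxy (hypothetical host and port).
# Once every browser and client routes through this one choke point,
# requests can be logged, categorised and filtered centrally.
proxy = urllib.request.ProxyHandler({"http": "http://proxy.example.local:8080"})
opener = urllib.request.build_opener(proxy)

# opener.open("http://www.firewall.cx/")  # this request would go via the proxy
```

Browsers are configured the same way through their proxy settings (or via group policy), which is all the client-side change the product requires.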

The server you choose doesn’t have to be internet-facing or even dual-homed (although it can be), but it does obviously need to be big enough and stable enough to become the choke point for all your internet access. Other than that, as long as it can run the product on one of the supported Microsoft Windows Server versions, you’re good to go.

We tested it in an average company with a reasonable number of PCs, laptops and mobile clients (phones), running on a basic ADSL internet connection and a dual-core Windows 2003 Server box that was doing everything, including being the domain controller and, in its spare time, the print server, and we happily confirmed no performance impact on the server.

Installing GFI WebMonitor 2012

As usual with GFI we downloaded the fully functional 30-day evaluation copy (82Mb) and received the license key minutes later by email. On running the installer we found our humble server lacked several prerequisites but happily the installer went off and collected them without any fuss.

review-gfi-webmonitor2012-1

After that it offered to check for updates to the program, another nice touch:


The next screen is where you decide how you want to implement the product. Having just a single server with a single network card we chose single proxy mode:

review-gfi-webmonitor2012-3

With those choices made the installation itself was surprisingly quick and before long we were looking at this important screen:

review-gfi-webmonitor2012-4

We reconfigured several user PCs to point to our newly-created http proxy and they were able to surf as if nothing had happened. Except, of course, for the fact that we were now in charge!

We fired off a number of web accesses (to www.Firewall.cx of course, among others) and some searches, then clicked Finish to see what the management console would give us.

WebMonitor 2012 - The All-Seeing Eye

The dashboard overview (above) displays a wealth of information. At a glance you can see the number of sites visited and blocked along with the top users, top domains and top categories (more on these later).  There’s also a useful trending graph which fills up over time, and you can change the period being covered by the various displays using the controls in the top right-hand corner. The console is also web-based so you can use it remotely.

review-gfi-webmonitor2012-5

Many of the displays are clickable, allowing you to easily drill down into the data, and if you hover the mouse you'll get handy pop-up explanations. We were able to go from the overview to the detailed activities of an individual user in just a few clicks. A user here is a single source IP, in other words a particular PC rather than the person using it. Ideally we would have liked the product to query the Active Directory domain controller and nail down the actual logged-on user, but to be honest, given the reasonable price and the product's undoubted usefulness, we're not going to quibble.

The other dashboard tabs help you focus on particular aspects. The Bandwidth tab (shown below) and the activity tab let you trend the activity either by data throughput or the number of sessions as well as giving you peaks, totals and future projections. The real-time traffic tab shows all the sessions happening right now and lets you kill them, and the quarantine tab lists the internet nasties that WebMonitor has blocked.

review-gfi-webmonitor2012-6

To the right of the dashboard, the reports section offers three pages of ad-hoc and scheduled reports that you can either view interactively or have emailed to you. You can pretty much get anything here: the bandwidth wasted by non-productive surfing during a time period; the use of social networking sites and/or webmail; the search engine activity; the detailed activity of a particular user and even the use of job search websites on company time.

review-gfi-webmonitor2012-7

Underlying all this is a huge database of site categories. This, along with the malware protection, is maintained by GFI and downloaded daily by the product as part of your licensed support so you’ll need to stay on support moving forward if you want this to remain up to date.

The Enforcer

Monitoring, however, is only half the story and it’s under the settings section that things really get interesting.  Here you can configure the proxy (it can handle https if you give it a certificate and it also offers a cache) and a variety of general settings but it’s the policies and alerts that let you control what you’ve been monitoring.

review-gfi-webmonitor2012-8

By defining policies you can restrict or allow all sorts of things, from downloading to instant messaging to categories of sites allowed or blocked and any time restrictions. Apply the relevant policies to the appropriate users and there you go.

The policies are quite detailed. For example, here’s the page allowing you to customise the default download policy. Using the scrolling list you can restrict a range of executables, audio/video files, document types and web scripts and if the default rules don’t meet your needs you can create your own. You can block them, quarantine them and generate an alert if anyone tries to do what you’ve forbidden.

review-gfi-webmonitor2012-9
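Conceptually, a download policy of this kind is a lookup from file type to action. A toy sketch, with an invented policy table rather than WebMonitor's real rule format:

```python
import os

# Hypothetical policy table: which file extensions to block or quarantine.
# Anything not listed is allowed through.
POLICY = {
    ".exe": "block",
    ".js":  "quarantine",
    ".mp3": "block",
}

def apply_download_policy(url_path):
    """Decide what to do with a download based on its file extension."""
    ext = os.path.splitext(url_path)[1].lower()
    return POLICY.get(ext, "allow")

print(apply_download_policy("/files/setup.EXE"))   # block
print(apply_download_policy("/docs/report.pdf"))   # allow
```

The real product matches on more than extensions (content types, categories, users, time windows), but the allow/block/quarantine decision follows the same shape.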

Also, hidden away under the security heading is the virus scanning policy. This is really nice - GFI WebMonitor can scan incoming files for you using several anti-virus, spyware and malware detectors and will keep these up to date. This is the part of the program that generates the list of blocked nasties we mentioned earlier.

Pull down the monitoring list and you can set up a range of administrator alerts ranging from excessive bandwidth through attempted malware attacks to various types of policy transgression. By using the policies and alerts together you can block, educate or simply monitor across the whole spectrum of internet activity as you see fit.

review-gfi-webmonitor2012-10

Final Thoughts

GFI WebMonitor is a well thought-out, thoughtfully focussed and well integrated product that provides everything a small to large-sized enterprise needs to monitor and control internet access at a reasonable price. You can try it for free and the per-seat licensing model means you can scale it as required. It comes with great documentation both for reference and to guide you as you begin to take control.

 

  • Hits: 24640

Product Review - GFI LanGuard Network Security Scanner 2011

review-gfi-languard2011-badge
Review by Alan Drury and John Watters

Introduction

With LanGuard 2011 GFI has left behind its old numbering system (this would have been Version 10), perhaps in an effort to tell us that this product has now matured into a stable and enterprise-ready contender worthy of serious consideration by small and medium-sized companies everywhere.

Well, after reviewing it we have to agree.

In terms of added features the changes here aren’t as dramatic as they were between say Versions 8 and 9, but what GFI have done is to really consolidate everything that LanGuard already did so well, and the result is a product that is rock-solid, does everything that it says on the tin and is so well designed that it’s a joy to use.

Installation

As usual for GFI we downloaded the fully-functional evaluation copy (124Mb) from its website and received our 30-day trial licence by email shortly afterwards. Permanent licences are reasonably priced and on a sliding scale that gets cheaper the more target IP addresses you want to scan. You can discover all the targets in your enterprise but you can only scan the number you’re licensed for.

Installation is easy. After selecting your language your system is checked to make sure it’s up to the job:

review-gfi-languard-2011-1

The installer will download and install anything you're missing, but it's worth noting that if you're on a secure network with no internet access then you'll have to get them yourself.

Once your licence is in place the next important detail is the user account and password LanGuard will use to access and patch your machines. We’d suggest a domain account with administrator privileges to ensure everything runs smoothly across your whole estate. And, as far as installation goes, that’s pretty much it.

Scanning

LanGuard opened automatically after installation and we were delighted to find it already scanning our host machine:

review-gfi-languard-2011-2

The home screen (above) shows just how easy LanGuard is to use. All the real-world tasks you’ll need to do are logically and simply accessible and that’s the case all the way through. Don’t be deceived, though; just because this product is well-designed doesn’t mean it isn’t also well endowed.

Here's the first treasure: as well as scanning and patching multiple versions of your Windows OSes, LanGuard 2011 interfaces with other security-significant programs. Here it is berating us for our archaic versions of Flash Player, Java, QuickTime and Skype:

review-gfi-languard-2011-3

This means you can take, from just one tool, a holistic view of the overall security of your desktop estate rather than just a narrow check of whether or not you have the latest Windows service packs. Anti-virus out of date? LanGuard will tell you. Die-hard user still on an older browser? You’ll know. And you can do something about it.

Remediation

Not only will LanGuard tell you what’s missing, if you click on Remediate down in the bottom right of the screen you can ask the product to go off and fix it. And yes, that includes the Java, antivirus, flash player and everything else:

review-gfi-languard-2011-4

Want to deploy some of the patches but not all? No problem. And would you like it to happen during the dark hours? LanGuard can do that too, automatically waking up the machines, shutting them down again and emailing you with the result. Goodness, we might even start to enjoy our job!

LanGuard can auto-download patches, holding them ready for use like a Windows SUS server, or it can go and get them on demand. We just clicked Remediate and off it went, downloaded our updated Adobe AIR and installed it without any fuss and in just a couple of minutes.

Agents and Reports

Previous versions of LanGuard were ‘agentless’, with the central machine scanning, patching and maintaining your desktop estate over the network. This was fine but it limited the throughput and hence what could be achieved in a night’s work. While you can still use it like this, LanGuard 2011 also introduces a powerful agent-based mode. Install the agent on your PCs (it supports all the current versions of Windows) and they will do the work while your central LanGuard server merely gives the orders and collects the results. The agents give you a lot of power; you can push-install them without having to visit every machine, and even if a laptop strays off the network for a while its agent will report in when it comes back. This is what you’d expect from a scalable, enterprise-credible product and LanGuard delivers it in style.

The reports on offer are comprehensive and nicely presented. Whether you just want a few pie charts to convince your boss of the value of your investment or you need documentary evidence to demonstrate PCI DSS compliance, you’ll find it here:

review-gfi-languard-2011-5

A particularly nice touch is the baseline comparison report; you define one machine as your baseline and LanGuard will then show you how your other PCs compare to it, what’s missing and/or different:

review-gfi-languard-2011-6
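The idea behind such a comparison is simple set arithmetic. As a rough illustration (not LanGuard's actual implementation, and with invented KB numbers), diffing a machine against the baseline might look like this:

```python
# Hypothetical sketch of a baseline comparison: treat each machine's
# installed-patch list as a set and diff it against the chosen baseline.
# The KB identifiers below are invented placeholders.

def compare_to_baseline(baseline, machine):
    """Return (missing, extra): patches the machine lacks relative to the
    baseline, and patches present only on the machine."""
    return baseline - machine, machine - baseline

baseline_pc = {"KB2501", "KB2502", "KB2503"}
sales_pc = {"KB2501", "KB2503", "KB2999"}

missing, extra = compare_to_baseline(baseline_pc, sales_pc)
print(sorted(missing))  # ['KB2502']
print(sorted(extra))    # ['KB2999']
```

Run across an estate, the same two set differences per machine are all that's needed to answer "what's missing and/or different".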

Other Features

What else can this thing do? Well, there’s so much it’s hard to pick out the best points without exceeding our word limit, but here are a few of our favourites:

  • A comprehensive hardware audit of all the machines in your estate, updated regularly and automatically, including details of the removable USB devices that have been used
  • An equally comprehensive and automatic software audit, broken down into useful drag-and-drop categories, so you’ll always know exactly who has what installed. And this doesn’t just cover applications but all the stuff like Java, Flash, antivirus and antispyware as well
  • The ability to define programs and applications as unauthorised, which in turn allows LanGuard to tell you where they are installed, alert you if they get installed and – oh joy – automatically remove them from the user’s machines
  • System reports including things like the Windows version, shared drives, processes, services and local users and groups including who logged on and when
  • Vulnerability reports ranging from basic details like open network ports to detected vulnerabilities with their corresponding OVAL and CVE references and hyperlinks for further information
  • A page of useful tools including SNMP walk, DNS lookup and enumeration utilities
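For a taste of what the simplest of these tools does under the hood, here is a minimal stdlib sketch of a DNS lookup (the bundled tool itself is GUI-based; this is purely our own illustration):

```python
import socket

def dns_lookup(hostname):
    """Resolve a hostname to its IPv4 addresses; empty list on failure."""
    try:
        # gethostbyname_ex returns (canonical_name, aliases, addresses)
        _, _, addresses = socket.gethostbyname_ex(hostname)
        return addresses
    except socket.gaierror:
        return []

print(dns_lookup("localhost"))
```

The SNMP walk and enumeration utilities follow the same pattern of wrapping a standard network query behind a convenient front end.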

Conclusion

We really liked this product. If you have a shop full of Windows desktops to support and you want complete visibility and control over all aspects of their security from just one tool then LanGuard 2011 is well worth a look. The real-world benefits of a tool like this are undeniable, but the beauty of LanGuard 2011 is in the way those benefits are delivered. GFI has drawn together all the elements of this complicated and important task into one seamless, intuitive and comprehensive whole and left nothing out, which is why we’ve given LanGuard 2011 the coveted Firewall.cx 10/10 award.

 


GFI Languard Network Security Scanner V9 Review

With Version 9, GFI's Network Security Scanner has finally come of age. GFI has focussed the product on its core benefit – maintaining the security of the Windows enterprise – and the result is a powerful application that offers real benefits for the time-pressed network administrator.

Keeping abreast of the latest Microsoft patches and Service Packs, regular vulnerability scanning, corrective actions, software audit and enforcement in a challenging environment can really soak up your time. Not any more though – install Network Security Scanner and you can sit back while all this and more happens automatically across your entire estate.

The user interface for Version 9 is excellent; so intuitive, in fact, that we didn't touch the documentation at all yet managed to use all of the product's features. Each screen leads you to the next so effectively that you barely need to think about what you are doing and using the product quickly becomes second nature.

Version 8 was good, but with Version 9 GFI has done it again.

Installation

Installation is straightforward. All the software needs is an account to run under, details of its back-end database and a location to reside. MS Access, MSDE or MS SQL Server databases are supported and you can even migrate your data from one to another if needs be.

The Interface

The separate toolbar scheduler from Version 8 is gone and, in its place, the opening screen gives you all the options you need: Scan this Computer, Scan the Network, Custom Scan or Scheduled Scan. Click ‘Scan this Computer’ and the scan begins – just one simple mouse click and you're off.

reviews-gfi-languard-v9-1

Performance and Results

Scanning speed is just as good as Version 8 and in less than two minutes we had a summary of the results:

reviews-gfi-languard-v9-2

Simply look below the results summary and the handy Next Steps box (with amusing typographical error) leads you through the process of dealing with them.

The prospect of Analizing the results made our eyes water so, having taken care to protect our anatomy from any such unwarranted incursion, we clicked the link:

reviews-gfi-languard-v9-3

The scan results are grouped by category in the left column with details to the right. Expand the categories and you get a wealth of information.

The vulnerabilities themselves are described in detail with reference numbers and URLs to lead you to further resources, but that's not all. You also get the patch status of the scanned system, a list of open ports, a comprehensive hardware report, an inventory of the installed software and a system summary. Think of all this in terms of your enterprise – if you have this product scanning all your machines you can answer questions such as “Which machines are still on Service Pack 2?” or “How much memory is in each of the Sales PCs?” or “What software does Simon have installed on his laptop?” without going anywhere else. It's all there for you at the click of a mouse.

There are other gems here as well, too many to list but here are some of our favourites. Under Potential Vulnerabilities the scanner lists all the USB devices that had been connected so we could monitor the historical use of memory sticks and the like. And the software audit, useful in itself, held another delight. Right click on any software entry and you can tell the scanner to uninstall it, either from just this machine or from all the machines in the network. Go further and define a list of banned applications and the product will remove them for you, automatically, when it runs its regular scan. Imagine the face of that wayward user each morning …

Patch Deployment

Choose the Remediate link and you'll head off to the part of the product that installs patches and service packs. Needless to say, these can be downloaded for you from Microsoft as they are released and held by the product, ready for use:

reviews-gfi-languard-v9-4

You can either let the scanner automatically install whatever patches and service packs it finds missing or you can vet and release patches you want to allow. This will let you block the next release of Internet Explorer, for example, while allowing other critical patches through. You can also uninstall patches and service packs from here.

As in Version 8, you can also deploy custom software to a single machine or across your estate. In a nutshell, if it is executable or can be opened then you can deploy it. As a test we pushed a picture of a pair of cute kittens to a remote machine where the resident graphics program popped open to display them. You can install software just as easily provided the install needs no user intervention:

reviews-gfi-languard-v9-5

reviews-gfi-languard-v9-6

Alerts and Reporting

This is where GFI demonstrates it is serious about positioning this product as a robust and reliable enterprise-ready solution.

Firstly the scanner can email you the results of its nocturnal activities so all you have to do each morning is make yourself a coffee and check your inbox. We'd have liked to see this area expanded, perhaps with definable events that could trigger an SMS message, SNMP trap or a defined executable. Maybe in Version 10?

To convince your manager of the wisdom of your investment there is a good range of coloured charts and if you have the GFI report Manager framework the product slots right into that so you can generate detailed custom reports from the back-end database.

reviews-gfi-languard-v9-7

And speaking of the database, GFI has now provided maintenance options so you can schedule backups and perform management tasks from within the scanner itself; a good idea for a key application.

Subscribe to what?

A vulnerability scanner is only any good, of course, if it can be automatically updated with the latest exploits as they come out. GFI has changed the business model with Version 9, so you'll be expected to shell out a modest annual fee for a Software Maintenance Agreement (SMA) unlike Version 8 where you paid in full and updates were free thereafter.

A nag screen reminds you when your subscription runs out so you needn't worry about not noticing:

reviews-gfi-languard-v9-8

Conclusion

What more can we say? If you have an estate of Windows machines to secure and maintain then this is what you have been looking for. It does everything you might need and more, it's easy to use and delivers real-world benefits.


Colasoft Capsa v7.2.1 Network Analyser Review

Using network analysing software, we are able to monitor our network and dig into the various protocols to see what's happening in real time. This can help us better understand the theoretical knowledge we've obtained throughout the years but, most importantly, it helps us identify, troubleshoot and fix network issues that we wouldn't be able to otherwise.

A quick search on the Internet will surely reveal many available network analysers, making it very confusing to select one. Some network analysers provide basic functions, such as packet sniffing, making them ideal for simple tasks, while others give you all the necessary tools and functions to ensure your job is done the best possible way.

Colasoft's network analyser is a product that falls in the second category. We had the chance to test drive the Colasoft Network Analyser v7.2.1 which is the latest available version at the time of writing.

Having used previous versions of Colasoft's network analyser, we were left impressed by this latest version, which promises a lot no matter what the environment demands.

The Software

Colasoft's Capsa network analyser is available as a demo version directly from their website www.colasoft.com. We quickly downloaded the 21.8MB file and began the installation, which was a breeze. Being small and compact meant the whole process didn't take more than 30-40 seconds.

We fired up the software, entered our registration details, activated our software and up came the first screen which shows a completely different philosophy to what we have been used to:

reviews-colasoft-1

Before you even start capturing packets and analysing your network, you're greeted with a first screen that allows you to select the network adaptor to be used for the session, while allowing you to choose from a number of preset profiles regarding your network bandwidth (1000, 100, 10 or 2 Mbps).

Next, you can select the type of analysis you need to run for this session, ranging from Full analysis, Traffic Monitoring and Security analysis to HTTP, Email, DNS and FTP analysis. The concept of pre-configuring your packet capturing session is revolutionary and very impressive. Once the analysis profile is selected, the appropriate plug-in modules are automatically loaded to provide all necessary information.

For our review, we selected the ‘100Mb Network’ profile and ‘Full Analysis’ profile, providing access to all plug-in modules, which include ARP/RARP, DNS, Email, FTP, HTTP and ICMPv4 – more than enough to get any job done!

Optionally, you can use the ‘Packet Filter Settings’ section to apply filters to the packets that will be captured:

reviews-colasoft-2

The Main Dashboard

As soon as the program loaded its main interface, we were left surprised with the wealth of information and options provided.

The interface is broken into four sections: toolbar, node explorer, dashboard and online resource. The node explorer (lower left side) and online resource (lower right side) sections can be removed, giving the dashboard the maximum possible space to view all information related to our session.

reviews-colasoft-3

The menu provided allows the configuration of the program, plus access to four additional tools: Ping, Packet Player, Packet Builder and MAC Scanner.

To uncover the full capabilities of the Colasoft Capsa Network Analyser, we decided to proceed with the review by breaking down each of the four sections.

The ToolBar

The toolbar is populated with a number of options and tools that proved extremely useful and are easily accessible. As shown below, it too is broken into smaller sections, allowing you to control the start/stop function of your capturing session, filters and network profile settings, from where you can set your bandwidth settings, profile name, alarms and much more.

reviews-colasoft-4

The Analysis section is populated with some great features we haven't found in other similar tools. Here, you can enable or disable the built-in ‘diagnosis settings’ for over 35 different protocols and TCP/UDP states.

reviews-colasoft-5

When selecting a diagnosis setting, Colasoft Capsa will automatically explain, in the right window, what the setting shows and the impact on the network. When done, click on the OK button and you're back to the main capturing screen.

The Analysis section also allows you to change the buffer size in case you want to capture packets for an extended period of time and, even better, you can enable the ‘auto packet saving’ feature which will automatically save all captured packets to your hard drive, making them available whenever you need them.

Right next to the analysis section is the 'Network Utilisation' and 'pps' (packets per second) gauges, followed by the 'Traffic History Chart'. These nifty gauges will show you in almost realtime the utilisation of your network card according to the network profile you selected before, plus any filters that might have been selected.

For example, if a 100Mbps network profile was selected, the gauges will represent the utilisation of a 100Mbps network card. If, in addition, filters were selected e.g. HTTP, then both gauges will represent a 100Mbps network utilisation only for the HTTP protocol. So if there were a large email or FTP download, it wouldn't register at the gauges as they will only show utilisation for HTTP traffic, according to the filter.
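The relationship between profile, filter and gauge reading can be sketched like this (our own illustration of the behaviour described above, not Capsa's code):

```python
def utilisation_percent(matched_bits_per_sec, profile_mbps):
    """Gauge reading: traffic matching the active filter, measured
    against the capacity of the selected network profile."""
    return 100.0 * matched_bits_per_sec / (profile_mbps * 1_000_000)

# 100Mbps profile with an HTTP filter: 20Mbps of HTTP traffic reads 20%,
# while a concurrent FTP or email transfer contributes nothing.
print(utilisation_percent(20_000_000, 100))  # 20.0
print(utilisation_percent(0, 100))           # 0.0 (non-matching traffic)
```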

To give the gauges a try, we disabled all filters and started a 1.4GB file transfer between our test bed and server, over our 100Mbps network. Utilisation hit the red areas while the pps remained at around 13,000 packets per second.

reviews-colasoft-6

The gauges are almost realtime as they are updated once every second, though we would have loved to see them swinging left-right in real time. One issue we encountered was that the 'Traffic History Chart' seemed to chop off the bandwidth value when moving our cursor toward the top of the graph. This is evident in our screenshot where the value shown is 80.8Mbps, and makes it almost impossible to use the history chart when your bandwidth is almost 100% utilised. We hope to see this fixed in the next version.

At the very end of the toolbar, the 'Packet Buffer' provides visual feedback on how full the buffer actually is, plus there are a few options to control the packet buffer for that session.

Node Explorer & DashBoard

In the lower left area we find the 'Node Explorer', which works in conjunction with the main dashboard to present the information from our captured session. The Node Explorer is actually a very smart concept as it allows you to instantly filter the captured information.

The Node Explorer starts populating the segmented areas automatically as it captures packets on the network. It provides a nice break-down of the information using a hierarchical approach that also follows the OSI model.

As we noticed, we could choose to select the Physical Explorer that contained nodes with MAC Addresses, or select the IP Explorer to view information about nodes based on their IP Address.

Each of these sections is then further broken down as shown. A nice, simple and effective way to categorise the information and help the user find what is needed without searching through all captured packets.

Once we made a selection (Protocol Explorer/Ethernet II/IP (5), as shown below), the dashboard next to it provided up to 13 tabs of information, which are analysed in the next screenshot.

reviews-colasoft-7

Selecting the IP node, the Protocol tab in the main dashboard provided a wealth of information and we were quickly able to view the quantity of packets, type of traffic, amount of traffic and other critical information for the duration of our session.

We identified our Cisco Call Manager Express music-on-hold streaming under the UDP/SCCP, which consumes almost 88Kbps of bandwidth, an SNMP session which monitors a remote router accounting for 696bps of traffic, and lastly the ICMP tracking of our website, costing us another 1.616Kbps of traffic. All together, 89.512Kbps.

reviews-colasoft-8
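The figures reconcile: working back from the reported total, the SCCP stream must have been 87.2Kbps (the "almost 88Kbps" above), as the quick arithmetic check below shows:

```python
# Quick sanity check of the per-protocol figures from our session
# (values in Kbps; the 87.2 figure is inferred from the reported total).
sccp_music_on_hold = 87.2
snmp = 0.696   # 696 bps
icmp = 1.616

total = sccp_music_on_hold + snmp + icmp
print(round(total, 3))  # 89.512
```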

This information is automatically updated every second and you can customise the refresh rate from 10 presets. One function we really loved was the fact we could double-click on any of the shown protocols and another window would pop up with all packets captured for the selected protocol.

We double-clicked on the OSPF protocol (second last line in the above screenshot) to view all packets related to that protocol and here is what we got:

reviews-colasoft-9

Clearly there is no need to use filters as we probably would in other similar software, thanks to the smart design of the Node Explorer and Dashboard. Keep in mind that if we need all packets saved, we will need an appropriately sized buffer; otherwise the buffer is recycled as expected.

Going back to the main area, any user will realise that the dashboard area is where Colasoft's Capsa truly excels and unleashes its potential. The area is smartly broken into a tabbed interface and each tab does its own magic:

reviews-colasoft-10

The user can quickly switch between any tabs and obtain the information needed without disrupting the flow of packets captured.

Let's take a quick look at what each tab offers:

Summary Tab

The Summary tab is an overview of what the network analyser 'sees' on the network.

reviews-colasoft-11

We get brief statistics on the total amount of traffic we've seen, regardless of whether it's been captured or not, the current network utilisation, bits per second and packets per second, plus a breakdown of the packet sizes seen so far. Handy information if you want to optimise your network according to its packet size distribution.

Diagnosis Tab

The Diagnosis tab is truly a goldmine. Here you'll see all the information related to problems automatically detected by Colasoft Capsa, without additional effort!

This amazing section is broken up into the Application layer, Transport layer and Network layer (not shown). Capsa will break down each layer in a readable manner and show all related issues it has detected.

reviews-colasoft-12

Once a selection has been made - in our example we chose 'Application layer / DNS Server Slow Response' - the lower area of the window brings up a summary of all the packets in which this issue was detected.

Any engineer who spends hours trying to troubleshoot network issues will truly understand the power and usefulness of this feature.

Protocol Tab

The Protocol tab provides an overview and break-down of the IP protocols on the network, along with other useful information as shown previously in conjunction with the Node Explorer.

reviews-colasoft-13

Physical Endpoint Tab

The Physical Endpoint tab shows conversations from physical nodes (MAC addresses). Each node expands to reveal its IP address, helping you track the traffic. Similar statistics regarding the traffic are also shown:

reviews-colasoft-14

As with previous tabs, when selecting a node the physical conversation window opens right below and shows the relevant conversations along with their duration and total traffic.

IP Endpoint Tab

The IP Endpoint tab offers similar information but on the IP Layer. It shows all local and Internet IP addresses captured along with statistics such as number of packets, total bytes received, packets per second and more.

reviews-colasoft-15

When selecting an IP Address, Capsa will show all IP, TCP and UDP conversations captured for this host.

IP Conversation Tab

The IP Conversation tab will be useful to many engineers. It allows the tracking of conversations between endpoints on your network, assuming all traffic passes through the workstation where the Capsa Network Analyser is installed.

The tab will show individual sessions between endpoints, duration, bytes in and out from each end plus a lot more.

reviews-colasoft-16

Network engineers can use this area to troubleshoot problematic sessions between workstations, servers and connections toward the Internet. Clicking on a specific conversation will show all TCP and UDP conversations between the hosts, allowing further analysis.

Matrix Tab

The Matrix tab is an excellent function probably only found on Colasoft's Capsa. The matrix shows a graphical representation of all conversations captured throughout the session. It allows the monitoring of endpoint conversations and will automatically resolve endpoints when possible.

reviews-colasoft-17

Placing the mouse over a line causes Capsa to automatically show all relevant information about conversations between the two hosts. Active conversations are highlighted in green, multicast sessions in red and the selected session in orange.

The menu on the left allows more options so an engineer can customise the information.

Packet Tab

The Packet tab gives access to the packets captured on the network. The user can lock the automatic scrolling, so the program continues capturing packets without scrolling the packet window, or release it, so new packets are shown as they are captured. This gives easy access to older packets without the need to scroll back for every newly captured packet.

Even though the refresh time is customisable, the fastest refresh rate available is one second. We would prefer a 'realtime' refresh rate and hope to see this implemented in the next update.

reviews-colasoft-18

Log Tab

The Log tab offers information on sessions related to specific protocols such as DNS, Email, FTP and HTTP. It's a good option to have, but we found little value in it since all other features of the program fully cover the information provided by the Log tab.

reviews-colasoft-19

 

Report Tab

The Report tab is yet another useful feature of Colasoft's Capsa. It allows the generation of a network report covering all the captured packets and can be customised to a good extent. The program allows the engineer to insert a company logo and name, plus customise a few more fields.

The report offers quite a few options, the most important being the Diagnosis and Protocol statistics.

reviews-colasoft-20

Finally, the report can be exported to PDF or HTML format for distribution.

Professionals can use this report to provide evidence of their findings to their customers, making the job look more professional and saving hours of work.

Online Resource

The 'Online Resource' section is a great resource to help the engineer get the most out of the program. It contains links and live demos that show how to detect ARP poisoning attacks, ARP Flooding, how to monitor network traffic efficiently, track down BitTorrents and much more.

Once the user becomes familiar with the software they can select to close this section, giving its space to the rest of the program.

Final Conclusion

Colasoft's Capsa Network Analyser is without doubt a goldmine. It offers numerous enhancements that make it pleasant to work with and easy for anyone to find the information they need. Its unique functions such as the Diagnosis, Matrix and Reports surely make it stand out and can be invaluable for anyone troubleshooting network errors.

While the program is outstanding, it could do with some minor enhancements, such as real-time presentation of packets, more thorough network reports and an improved traffic history chart. Future updates will also need to include a 10Gbit option amongst the available network profiles.

We would definitely advise any network administrator or engineer to give it a try and see for themselves how great a tool like Capsa can be.


GFI Languard Network Security Scanner V8

Can something really good get better? That was the question that faced us when we were assigned to review GFI's Languard Network Security Scanner, Version 8, already well loved (and glowingly reviewed) at Version 5.

All vulnerability scanners for Windows environments fulfil the same basic function, but as the old saying goes “It's not what you do; it's the way that you do it”. GFI have kept all the good points from their previous releases and built on them; and the result is a tool that does everything you would want with an excellent user interface that is both task efficient and a real pleasure to use.

Installation

Visit GFI's website and you can download a fully-functional version that you can try before you buy; for ten days if you prefer to remain anonymous or for thirty days if you swap your details for an evaluation code. The download is 32MB, expanding to 125MB on your disk when installed.

Installation is straightforward. All the software needs is an account to run under, details of its back-end database and a location to reside. MS Access, MSDE or MS SQL Server databases are supported and you can even migrate your data from one to another if needs be.

First of all, if you have a license key you can enter it during installation to save time later – just a little thing, but it shows this software has been designed in a very logical manner.

You're then asked for an account to run the Attendant service, the first of the Version 8 enhancements. This, as its name suggests, is a Windows service that sits in your system tray and allows you easy access to the program and its documentation plus a handy window that lets you see everything the scanner is doing as it works away in the background.

reviews-gfi-languard-v8-1

After this you're asked whether you'd like your scan results stored in Microsoft Access or SQL Server (2000 or higher). This is another nice feature, particularly if you're using the tool to audit, patch and secure an entire infrastructure.

One feature we really liked is the ability to run unattended scheduled scans and email the results. This is a feature you won't find in any other similar product.

GFI's LANguard scanner doesn't just find vulnerabilities, it will also download the updates that fix them and patch your machines for you.

Finally, you can tell the software where to install itself and sit back while the installation completes.

Getting Started

Each time you start the scanner it checks with GFI for more recent versions and for updated vulnerabilities and patches. You can turn this off if you don't always have internet access.

You'll also get a wizard to walk you through the most common scanning tasks. This is great for new users and again you can turn it off once you become familiar with the product.

reviews-gfi-languard-v8-2

The Interface

Everything takes place in one uncluttered main screen as shown below. As our first review task we closed the wizard and simply ‘had a go' without having read a single line of documentation. It's a testament to the good design of the interface that within a few mouse clicks we were scanning our first test system without any problems.

reviews-gfi-languard-v8-3

The left hand pane contains the tools, menus and options available to you. This is split over three tabs, an improvement over Version 5 where everything sat in one huge list. To the right of this are two panes that display the information or settings relating to the option you've chosen, and the results the product has obtained. Below them is a results pane that shows what the scanner is up to, tabbed again to let you view the three scanner threads or the overall network discovery.

Performance and Results

It's fast. While performance obviously depends on your system and network we were pleasantly surprised by the efficiency and speed of the scan.

Speed is nothing however without results, and the product doesn't disappoint. Results are logically presented as an expanding tree beneath an entry for each scanned machine. Select one of the areas in the left pane and you'll get the detail in the right pane. Right-click there and you can take appropriate action; in the example shown right-clicking will attempt a connection on that port:

reviews-gfi-languard-v8-4

Vulnerabilities are similarly presented with rich and helpful descriptions, while references for further information from Microsoft and others plus the ability to deploy the relevant patches are just a right-click away:

reviews-gfi-languard-v8-5

The scanner is also surprisingly resilient. We decided to be mean and ran a scan of a desktop PC on a large network – via a VPN tunnel within a VPN tunnel across the public internet with an 11Mb/s wireless LAN connection on the other end. The scan took about ten minutes but completed fine.

Patch Deployment

Finding vulnerabilities is only half the story; this product will also help you fix them. One click at the machine level of the scan results opens yet another helpful screen that gathers all your options in one place. You can elect to remotely patch the errant machine, shut it down or even berate the operator, and a particularly nice touch is the list of your top five most pressing problems:

reviews-gfi-languard-v8-6

Patch deployment is similarly intuitive. The product can download the required patches for you, either now or at a scheduled time, and can access files already downloaded by a WSUS server if you have one. Once you have the files available you can patch now or schedule the deployment, and either way installation is automatic.

Alongside this is another Version 8 feature which gives you access to the same mechanism to deploy and install software of your choice. We tested this by push-installing some freeware tools, but all you need is a fully scripted install for unattended installation and you can deploy anything you like out to your remote machines. This is where the Attendant Service comes in again as the tray application provides a neat log of what's scheduled and what's happened. The example shows how good the error reporting is (we deliberately supplied the wrong credentials):

reviews-gfi-languard-v8-7

This powerful feature is also remarkably configurable – you can specify where the copied files should go, check the OS before installation, change the user credentials (important for file system access and for push-installing the Patch Agent service), reboot afterwards or even seek user approval before going ahead. We've used other tools before for software deployment and we felt right at home with the facilities here.

Scripting and Tools

Another plus for the busy administrator is the facility to schedule scans to run when you'd rather be away doing something else. You can schedule a simple timed scan and have the results emailed to you, or you can set up repeating scans and have the product compare the current results with the previous and only alert you if something has changed. If you don't want your inbox battered you can sleep soundly knowing you can still consult the database next morning to review the results. And if you have mobile users your group scan (or patch) jobs can stay active until your last elusive road warrior has appeared on the network and been processed. Resistance is futile!
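The compare-and-alert idea is simple to picture. Below is a minimal sketch (the snapshot format is hypothetical, not LANguard's own) that diffs two scan results and reports only what changed between runs:

```python
# Diff two scan snapshots and report only what changed.
# Snapshot format is hypothetical: {host: set of open ports}.

def diff_scans(previous, current):
    """Return per-host port changes between two scan snapshots."""
    changes = {}
    for host in set(previous) | set(current):
        opened = current.get(host, set()) - previous.get(host, set())
        closed = previous.get(host, set()) - current.get(host, set())
        if opened or closed:
            changes[host] = {"opened": sorted(opened), "closed": sorted(closed)}
    return changes

last_night = {"10.0.0.5": {22, 80}, "10.0.0.6": {445}}
tonight = {"10.0.0.5": {22, 80, 3389}, "10.0.0.6": {445}}

# Only 10.0.0.5 changed, so only it would trigger an alert.
print(diff_scans(last_night, tonight))
# {'10.0.0.5': {'opened': [3389], 'closed': []}}
```

An unchanged network produces an empty diff, which is exactly the "sleep soundly, nothing to report" case described above.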

Under the Tools tab there are a few more goodies including an SNMP audit to find insecure community strings. This was the site of our only disappointment with the product – we would have liked the ability to write our own tools and add them in here, but it seemed we'd finally found something GFI hadn't thought of.

reviews-gfi-languard-v8-8

Having said that, all the other scripting and tweaking facilities you'd expect are there, including a comprehensive command-line interface for both scanning and patch deployment and the ability to write custom vulnerability definitions in VBScript. All this and more is adequately documented in the well-written on-line help and user manual, and if you're still stuck there's a link to GFI's knowledgebase from within the program itself.

Summary

We were really impressed by this product. GFI have done an excellent job here and produced a great tool which combines vulnerability scanning and patch management with heavyweight features and an excellent user interface that is a joy to work with.

  • Hits: 21066

Acunetix Web Vulnerability Scanner

The biggest problem with testing web applications is scalability. With the addition of even a single form or page to test, you invariably increase the number of repetitive tasks you have to perform and the number of relationships you have to analyze to figure out whether you can identify a security issue.

As such, performing a security assessment without automation is an exercise in stupidity. One can make the lofty argument of the individual skill of the tester, and this is not to be discounted – I’ll come back to it – but, essentially, you can automate at least 80% of the task of assessing website security. This is part of the reason that security testing is becoming highly commoditized: the more you have to scan, the more repetitive tasks you have to perform. It is virtually impossible for a tester to manually analyze every single variable that needs to be tested, and even if it were possible, performing this iterative assessment manually would be foolishly time-consuming.

This problem, coupled with the explosive growth of web applications for business-critical functions, has resulted in a large array of web application security testing products. How do you choose a product that is accurate (false positives are a key concern), safe (we’re testing important apps), fast (we come back to the complexity point) and, perhaps most importantly, meaningful in its analysis?

This implies that its description of the vulnerabilities discovered, and the measures to be taken to mitigate them, must be crystal clear. This is essentially what you’re paying for: it doesn’t matter how good the scanning engine is or how detailed the threat database is if the output – risk description and mitigation – is not properly handled. With these points in mind, we at Firewall.cx decided to take Acunetix’s Web Vulnerability Scanner for a spin.

I’ve had the pleasure of watching the evolution of web scanning tools, right from my own early scripting in PERL, to the days of Nikto and libwhisker, to application proxies, protocol fuzzers and the like. At the outset, let me say that Acunetix’s product has been built by people who have understood this evolution. The designers of the product have been around the block and know exactly what a professional security tester needs in a tool like this. While this puppy will do point ’n’ shoot scanning with a wizard for newbies, it has all the little things that make it a perfect assistant to the manual tester.

A simple example of ‘the small stuff’ is the extremely handy encoder tool that can handle text conversions and hashing in a jiffy. Anyone who’s had the displeasure of having to whip up a base-64 decoder or resort to md5sum to obtain a hash in the middle of a test will appreciate why this is so useful. More importantly, it shows that the folks at Acunetix know that a good tester will be analyzing the results and tweaking the inputs away from what the scanning engine would do. Essentially they give you the leeway to plug your own intellect into the tool.
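For readers who haven't hit this mid-test pain: the conversions the encoder handles are one-liners in any scripting language, but having them inside the tool saves a context switch. A quick illustration using Python's standard library (not Acunetix's own code):

```python
# The sort of conversions the built-in encoder saves you from scripting
# mid-test: base64 round-trips and hash digests, here via Python's stdlib.
import base64
import hashlib

token = "admin:secret"

encoded = base64.b64encode(token.encode()).decode()
print(encoded)                              # YWRtaW46c2VjcmV0
print(base64.b64decode(encoded).decode())   # admin:secret

# An MD5 digest of the same string, e.g. for comparing captured hashes.
print(hashlib.md5(token.encode()).hexdigest())
```

Trivial, yes, but fishing for `md5sum` or writing a decoder in the middle of an engagement is exactly the interruption the tool removes.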

Usage is extremely straightforward: hit the icon and you’ll get a quick-loading interface that looks professional and displays information smartly (I appreciate the tabbed interface; these things matter, as a badly designed UI can overwhelm you with more information than you need). Here’s a shot of the target selection wizard:

reviews-acunetix-1

What I liked here was the ‘Optimize for the following technologies’ setup. Acunetix did a quick query of my target (our website, www.Firewall.cx) and identified PHP, mod_ssl, OpenSSL and FrontPage as modules that we’re using. When you’re going up against a blind target in a penetration test or setting up scans for 50 webapps at a time, this is something that you will really appreciate.

Next we come to the profile selection – which allows you to choose the scanning profile. Say I just want to look for SQL injection, I can pick that profile. You can use the profile editor to customize and choose your own checks. Standard stuff here. The profile and threat selection GUI is well categorized and it’s easy to find the checks you want to deselect or select.

reviews-acunetix-2

You can browse the threat database in detail as shown below:

reviews-acunetix-3

At around this juncture, the tool identified that www.Firewall.cx uses non-standard (non-404) error pages. This is extremely important for the tool to do. If it cannot determine the correct ‘page not found’ page, it will start throwing false positives on every single 302 redirect. This is a major problem with scanners such as Nikto and is not to be overlooked. Acunetix walked me through the identification of a valid 404 page. Perhaps a slightly more detailed explanation as to why this is important would benefit a newbie.
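To see why this matters, here is a minimal sketch (illustrative only, not Acunetix's actual algorithm) of how a scanner can fingerprint a custom error page and discard look-alike responses instead of flagging them as findings:

```python
# Why a scanner must learn the site's custom "not found" page: if every bad
# URL returns HTTP 200 (or a redirect) with a friendly error body, the only
# way to separate a real hit from a miss is to fingerprint the error page
# and compare each response against it.
from difflib import SequenceMatcher

def looks_like_error_page(fingerprint, body, threshold=0.9):
    """True if `body` is near-identical to the known error-page fingerprint."""
    return SequenceMatcher(None, fingerprint, body).ratio() >= threshold

# Fingerprint obtained by first requesting a random, surely nonexistent path.
error_page = "<html><body>Sorry, we could not find that page.</body></html>"
real_page = "<html><body>Admin console - please log in.</body></html>"

print(looks_like_error_page(error_page, error_page))  # True -> discard as a miss
print(looks_like_error_page(error_page, real_page))   # a genuinely different page
```

Skip this calibration step and every redirect or soft-404 becomes a false positive, which is precisely the failure mode of older scanners like Nikto.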

I had updated the tool before scanning, and saw the threat database being updated with some recent threats. I don’t know the threat update frequency, but the process was straightforward and, unlike many tools, didn’t require me to restart the tool with the new DB.

reviews-acunetix-4

Since I was more interested in the ‘how can I help myself’ as opposed to the ‘how can you help me’ approach to scanning, I fiddled with the fuzzer, request generator and authentication tester. These are very robust implementations; there are fully fledged tools implementing just this functionality, and you should not be surprised to see more people discarding other tools and using Acunetix as a one-stop-shop toolbox.

One note, though: the usernames dictionary for the authentication tester is far too limited out of the box (3-4 usernames). The password list was reasonably large, but the tool should include a proper default username list (where are entries like ‘tomcat’ and ‘frontpage’?) so as not to give people a false sense of security. Given that weak password authentication is still one of the top causes of security breaches, this module could use a reworking. I would like to see something more tweakable, along the lines of Brutus or Hydra’s HTTP authentication capabilities. Perhaps the ability to plug in a third-party bruteforce tool would be nice.
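The reason the username list matters so much is that coverage is the product of the two dictionaries. A minimal sketch of a Hydra-style enumeration loop (the word lists and the check function are purely illustrative, not the tool's own):

```python
# Coverage of a dictionary attack is usernames x passwords, so a 4-name
# username list cripples the whole module no matter how long the password
# list is. Word lists here are illustrative only.
from itertools import product

usernames = ["admin", "tomcat", "frontpage", "sa", "root"]
passwords = ["password", "admin", "123456", "letmein"]

candidates = list(product(usernames, passwords))
print(len(candidates))  # 20 attempts: 5 usernames x 4 passwords

def attempt(check, creds):
    """Run check(user, pwd) over the candidates, stopping at the first hit."""
    for user, pwd in creds:
        if check(user, pwd):
            return user, pwd
    return None

# Stand-in for a real HTTP authentication request:
found = attempt(lambda u, p: (u, p) == ("tomcat", "admin"), candidates)
print(found)  # ('tomcat', 'admin')
```

In a real tester the `check` callback would issue the HTTP authentication request; the enumeration logic stays the same.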

Here I am playing with the HTTP editor:

reviews-acunetix-5

Here’s the neat little encoder utility that I was talking about earlier. You will not miss this one in the middle of a detailed test:

reviews-acunetix-6

After being satisfied that this product could get me through the manual phase of my audits, I fell back on my tester’s laziness and hit the scan button while sipping a Red Bull.

The results arrive in real time and are browseable, which is far better than seeing a progress bar creep forward arbitrarily. While this may seem cosmetic, when you’re being pushed to deliver a report, you want to be able to keep testing manually in parallel. I was watching the results come in and using the HTTP editor to replicate the responses and judge what required my manual intervention.

Essentially, Acunetix chews through the application looking for potential flaws and lets you take over to verify them in parallel. This is absolutely the right approach, and far more expensive tools that I’ve used do not realise this. Nobody with half smarts will rely purely on the output of a tool; a thorough audit will have the tester investigating areas of concern on his own. If I have to wait for your tool to finish everything it does before I can even see partial results, you’ve wasted my time.

Here’s how the scanning window looked:

reviews-acunetix-7

Now bear in mind that I was running this test over a 256kbps internet link, so I was expecting it to take time, especially given that Firewall.cx has an extremely large set of pages. Halfway through, I had to stop the test as it was bravely taking on the task of analyzing every single page in our forums. However, there was constant feedback through the activity window and my network interface; you don’t end up wondering whether the product has hung, as is the case with many other products I’ve used.

The reporting features are pretty granular, allowing you to select the usual executive summary and detailed report options. Frankly, I like the way the results are presented and in the course of my audits never needed to generate a report from the tool itself. I’m certain that the features of the reporting module will more than suffice. The descriptions of the vulnerabilities are well written, the solutions are accurate and the links to more information come from authoritative sources. If you come back to what I said in the opening stages of this review, this is the most important information that a tool should look to provide. Nothing is more terrible than ambiguous results, and that is a problem you will not have with this product.

One drawback we found with the product was the lack of a more complete scripting interface; many testers would like the ability to add their own code to the scanning setup. I did check out the vulnerability editor feature, but would prefer something that gave me more flexibility. Another was the lack of a version for Linux/UNIX-like systems. The majority of security testers operate from these platforms, and it would be nice not to have to switch to a virtual machine or deal with a dual-boot configuration to harness the power of this tool. Neither of these drawbacks is a deal killer, and both should be treated more as feature requests.

Other than that, I truly enjoyed using this product. Web application auditing can be a tedious and time consuming nightmare, and the best praise I can give Acunetix is that they’ve made a product that makes me feel a part of the test. The interactivity and levels of detail available to you give you the ability to be laid back or tinker with everything you want, while the test is still going on. With its features and reasonable pricing for a consultant’s license, this product is unmatched and will quickly become one of the premier tools in your arsenal.

  • Hits: 26224

GFI LANguard Network Security Scanner Version 5.0 Review

In the light of all the recent attacks that tend to focus on the vulnerabilities of Windows platforms, we were increasingly dissatisfied with the common vulnerability scanners that we usually employ. We wanted a tool that didn't just help find holes, but would help administer the systems, deploy patches, view account / password policies etc. In short, we were looking for a Windows specialist tool.

Sure, there are a number of very popular (and very expensive) commercial scanners out there. However, most of them are prohibitively priced for the networks we administer, and all of them fell short on the administrative front. We tested a previous version of LANguard and our initial impressions were good, so we decided to give their latest offering a spin.

Getting Started

Getting the tool was easy enough, a quick visit to GFI's intuitively laid out site, and a 10MB download later, we were set to go. We must mention that we're partial to tools that aren't too heavy on the disk-space. Sahir has started carrying around a toolkit on his cell-phone USB drive, where space is at a premium. 10MB is a reasonable size for a program with all the features of this one.

Installation was the usual Windows deal (click <next> and see how quickly you can reach <finish>). We fired up the tool and were greeted with a splash screen that checked for a newer version and downloaded new patch detection files, dictionaries, etc.

reviews-gfi-languard-1

We'd prefer to have the option of updating manually rather than having it happen at every startup, but we couldn't find a way to change this behaviour; a minor option that GFI should add.

Interface

Once the program is fully updated, you're greeted with a slick interface that looks like it's been made in .NET. No low-colour icons or cluttered toolbars here. While some may consider this inconsequential, it's a pleasure to work with software that looks good; it gives it that final bit of polish needed for a professional package. You can see the main screen below.

reviews-gfi-languard-2

The left panel shows all the tools available and is like an ‘actions' pane. From here you can select the security scanner, filter your scan results in a variety of ways, access the tools (such as patch deployment, DNS lookup, traceroute, SNMP audit, SQL server audit etc) and the program configuration as well. In fact if you look under the menus at the top, you'll find very few options as just about everything can be controlled or modified from the left panel.

The right panel obviously shows you the results of the scan, or the tool / configuration section you have selected. In this case it's on the Security Scanner mode where we can quickly setup a target and scan it with a profile. A profile is a description of what you want to scan for, the built in profiles include:

  • Missing patches
  • CGI scanning
  • Only Web / Only SNMP
  • Ping them all
  • Share Finder
  • Trojan Ports
  • Full TCP & UDP port scan

In the Darkness, Scan ‘em...

We set up the default scanning profile and scanned our localhost (a mercilessly locked-down XP box that resists spirited break-ins from our practice penetration tests). We scanned as the ‘currently logged on user' (an administrator account), which makes a difference, since you see a lot more when scanning with privileges than without. As we had expected, this box was fairly well locked down. Here is the view just after the scan finished:

reviews-gfi-languard-3

Clicking one of the filters in the left pane brings up a very nicely formatted report, showing you the information you requested (high vulnerabilities, low vulnerabilities, missing patches etc). Here is the full report:

reviews-gfi-languard-4

As you can see, it identified three open ports (no filtering was in place on the loopback interface) as well as MAC address, TTL, operating system etc.

We were not expecting much to show up on this highly-secured system, so we decided to wander further.

The Stakes Get Higher...

Target 2 is the ‘nightmare machine'. It is a box so insecure that it can only be run under VMware with no connection to the Internet. What better place to set LANguard free than on a Windows XP box, completely unpatched and completely open? If it were set up on the ‘net it would go down within a couple of minutes!

However, this was not good enough for our rigorous requirements, so we infected the box with a healthy dose of Sasser. Hopefully we would be able to finish the scan before LSASS.exe crashed, taking the system down with it. To make life even more difficult, we didn't give LANguard the right credentials like we had before. In essence, this was a 'no privilege' scan.

reviews-gfi-languard-5

LANguard detected the no-password administrator account, the Sasser backdoor, default shares and active Terminal Services (we enabled it for the scenario). In short, it picked up on everything.

We purposely didn't give it any credentials, as we wanted to test its patch deployment features last – this was what we were really interested in. The results were very impressive, as more expensive scanners (notably Retina) missed out on a lot of things when given no credentials.

To further extend our scans, we thought it would be a good idea to scan our VLAN network, which contained over 250 Cisco IP Phones and two Cisco CallManagers. LANguard was able to scan all IP Phones without a problem and also gave us some interesting findings, as shown in this screenshot:

reviews-gfi-languard-6

LANguard detected with ease the http port (80) open and also included a sample of the actual page that would be downloaded should a client connect to the target host!

It is quite important to note at this point that the scan shown above was performed without any disruption to our Cisco VoIP network. Even though no vulnerabilities were detected (something we expected), we were pleased to see LANguard work in our Cisco VoIP network without problems.

If you can't join them... patch them!

Perhaps one of the neatest features of GFI's LANguard is the patch management system, designed to automatically patch the systems you have previously scanned. The automatic patching system works quite well, but you should download the online PDF file that contains instructions on how to proceed should you decide to use this feature.

The automatic patching requires the host to be previously scanned in order to find all missing patches, service packs and other vulnerabilities. Once this phase is complete, you're ready to select the workstation(s) you would like to patch!

As expected, you need the appropriate credentials in order to successfully apply all selected patches, and for this reason there is a small field in which you can enter your credentials for the remote machine.

We started by selectively scanning two hosts in order to proceed patching one of them. The target host was 10.0.0.54, a Windows 2000 workstation that was missing a few patches:

reviews-gfi-languard-7

LANguard successfully detected the missing patches on the system as shown on the screenshot above, and we then proceeded to patch the system. A very useful feature is the ability to select the patch(es) you wish to install on the target machine.

reviews-gfi-languard-8

As suggested by LANguard, we downloaded the selected patch and pointed our program to install it on the remote machine. The screen shot above shows the patch we wanted to install, followed by the machine on which we selected to install it. At the top of the screen we needed to supply the appropriate credentials to allow LANguard to do its job, that is, a username of 'Administrator' and a password of ..... sorry - can't tell :)

Because most patches require a system reboot, LANguard includes such options, ensuring that no input at all is required on the other side for the patching to complete. Advanced options such as ‘Warn user before deployment' and ‘Delete copied files from remote computer after deployment', are there to help cover all your needs:

reviews-gfi-languard-9

The deployment status tab is another smart feature; it allows the administrator to view the patching in progress. It clearly shows all steps taken to deploy the patch and will report any errors encountered.

It is also worth noting that we tried making life more difficult by running the patch management system from our laptop, which was connected to the remote network via the Internet and secured using a Cisco VPN tunnel with IPSec as the encryption protocol. Our expectation was that GFI's LANguard would fail terribly, giving us the green light to note a weak point of the program.

To our surprise, it seems GFI's developers had already foreseen such situations and the results were simply amazing, allowing us to successfully scan and patch a Windows 2000 workstation located at the end of the VPN tunnel!

Summary

GFI without doubt has created a product that most administrators and network engineers would swear by. It's efficient, fast and very stable, able to perform its job whether you're working on the local or remote LAN.

Its features are very helpful: you won't find many network scanners pointing you to web pages where you can find out all the information on discovered vulnerabilities, download the appropriate patches and apply them with a few simple clicks of a mouse!

We've tried LANguard on small networks with 5 to 10 hosts up to large corporate networks with more than 380 hosts, over WAN links and Cisco VPN tunnels, and it worked like a charm without creating problems such as network congestion. We are confident that you'll love this product's features and that it will quickly become one of your most necessary programs.

  • Hits: 21958

GFI EventsManager 7 Review

Imagine having to trawl dutifully through the event logs of twenty or thirty servers every morning, trying to spot those few significant events that could mean real trouble among that avalanche of operational trivia. Now imagine being able to call up all those events from all your servers in a single browser window and, with one click, open an event category to display just those events you are interested in…

Sounds good? Install this product, and you’ve got it.

A product of the well-known GFI stables, EventsManager 7 replaces their earlier LANguard Security Event Log Monitor (S.E.L.M.) which is no longer available. There’s also a Reporting Suite to go with it; but we haven’t reviewed that here.

In a nutshell the product enables you to collect and archive event logs across your organisation, but there’s so much more to it than that. It’s hard to condense the possibilities into a review of this size, but what you actually get is:

  • Automatic, scheduled collection of event logs across the network; not only from Windows machines but from Linux/Unix servers too, and even from any network kit that can generate syslog output;
  • The ability to group your monitored machines into categories and to apply different logging criteria to each group;
  • One tool for looking at event logs everywhere. No more switching the event log viewer between servers and messing around with custom MMCs;
  • The ability to display events by category or interest type regardless of where they occurred (for example just the Active Directory replication events, just the system health events, just the successful log-on events outside normal working hours);
  • Automated response actions for particular events or types of events including alerting staff by email or pager or running an external script to deal with the problem;
  • A back-end database into which you can archive raw or filtered events and which you can search or analyse against – great for legal compliance and for forensic investigation.
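Accepting syslog from network kit means parsing the classic BSD wire format. The sketch below is illustrative only (real syslog has many dialects; this handles just the common `<PRI>TIMESTAMP HOST MSG` shape) and shows how the facility and severity fall out of the priority value:

```python
# Minimal BSD-style (RFC 3164-like) syslog line parser. Real-world syslog
# has many dialects; this sketch handles only the common
# <PRI>TIMESTAMP HOST MSG shape seen from routers and firewalls.
import re

SYSLOG_RE = re.compile(
    r"<(?P<pri>\d{1,3})>"
    r"(?P<timestamp>\w{3} [ \d]\d \d{2}:\d{2}:\d{2}) "
    r"(?P<host>\S+) "
    r"(?P<msg>.*)"
)

def parse_syslog(line):
    m = SYSLOG_RE.match(line)
    if not m:
        return None
    pri = int(m.group("pri"))
    return {
        "facility": pri // 8,   # PRI encodes facility * 8 + severity
        "severity": pri % 8,
        "timestamp": m.group("timestamp"),
        "host": m.group("host"),
        "msg": m.group("msg"),
    }

event = parse_syslog("<34>Oct 11 22:14:15 fw01 sshd[123]: Failed password for root")
print(event["facility"], event["severity"])  # 4 2  (auth facility, critical severity)
```

A collector like EventsManager normalises lines like this into the same event store as the Windows logs, which is what makes the single-browser view possible.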

You can download the software from GFI’s website and, in exchange for your details, they’ll give you a thirty-day evaluation key that unlocks all the features; plenty of time to decide if it’s right for you. This is useful, because you do need to think about the deployment.

One key issue is the use of SQL-Server as the database back-end. If you have an existing installation you can use that if capacity permits, or you could download SQL Server Express from Microsoft. GFI do tell you about this but it’s hidden away in Appendix 3 of the manual, and an early section giving deployment examples might have been useful.

That said, once you get installed a handy wizard pops up to lead you through the key things you need to set up:

reviews-eventsmanager-1

Here again are things you’ll need to think about – such as who will get alerted, how, when and for what, and what actions need to be taken.

You’ll also need to give EventsManager a user that has administrative access to the machines you want to monitor and perhaps the safest way to do this is to set up a new user dedicated to that purpose.

Once you’ve worked through the wizard you can add your monitored machines under the various categories previously mentioned. Ready-made categories allow you to monitor according to the type, function or importance of the target machine and if you don’t like those you can edit them or create your own.

reviews-eventsmanager-2

The categories are more than just cosmetic; each one can be set up to define how aggressively EventsManager monitors the machines, their ‘working week’ (useful for catching unauthorised out-of-hours activity) and the types of events you’re interested in (you might not want Security logs from your workstations, for example). Encouragingly though, the defaults provided are completely sensible and can be used without worry.
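The ‘working week’ check boils down to a simple time-window test on each event's timestamp. A minimal sketch, with hypothetical schedule values:

```python
# Flag events that fall outside the defined working week.
# The schedule values here are hypothetical, not EventsManager defaults.
from datetime import datetime

WORK_DAYS = range(0, 5)    # Monday (0) .. Friday (4)
WORK_HOURS = range(8, 18)  # 08:00 .. 17:59

def out_of_hours(ts: datetime) -> bool:
    """True if the event timestamp is outside the working week."""
    return ts.weekday() not in WORK_DAYS or ts.hour not in WORK_HOURS

print(out_of_hours(datetime(2024, 6, 12, 14, 30)))  # Wednesday afternoon -> False
print(out_of_hours(datetime(2024, 6, 15, 23, 5)))   # Saturday night -> True
```

A successful logon that trips this test is exactly the kind of event you would route to a high-priority category rather than to ‘Noise’.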

reviews-eventsmanager-3

Once your targets are defined you’ll begin seeing logs in the Events Browser, and this is where the product really scores. To the left of the browser is a wealth of well-thought-out categories and types; click on one of these and you’ll see those events from across your enterprise. It’s as simple, and as wonderful as that.

reviews-eventsmanager-4

You can click on the higher-level categories to view, for example, all the SQL Server events, or you can expand that out and view the events by subcategory (just the Failed SQL Server Logons for example).

Again, if there are events of particular significance in your environment you can edit the categories to include them, or even create your own, right down to the specifics of the event IDs and event types they collect. A particularly nice category is ‘Noise’, which you can use to collect all that day-to-day operational verbiage and keep it out of the way.

For maximum benefit you’ll also want to assign actions to key categories or events. These can be real-time alerts, emails, corrective action scripts and log archiving. And again, you guessed it, this is fully customisable. The ability to run external scripts is particularly nice as with a bit of tweaking you can make the product do anything you like.
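Conceptually, per-category actions form a dispatch table from event category to handler, with plain archiving as the fallback. A minimal sketch (the category names and handlers are illustrative only, not EventsManager's own identifiers):

```python
# Per-category actions as a dispatch table: map an event category to a
# handler (email, corrective script, ...), falling back to archiving.
# Category names and handler behaviour are illustrative only.

actions = {
    "security.failed_logon": lambda e: f"email admins: {e['msg']}",
    "health.disk_full":      lambda e: f"run cleanup script on {e['host']}",
}

def handle(event):
    # Categories with no explicit action are simply archived.
    action = actions.get(event["category"], lambda e: "archive only")
    return action(event)

print(handle({"category": "security.failed_logon", "host": "dc01",
              "msg": "5 failed logons for Administrator"}))
print(handle({"category": "noise.dhcp_renew", "host": "ws42",
              "msg": "lease renewed"}))  # -> archive only
```

Swapping a lambda for a call into an external script is the "run anything you like" customisation the review praises.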

reviews-eventsmanager-5

Customisation is one of the real keys to this product. Install it out of the box, just as it comes, and you’ll find it useful. But invest some time in tailoring it to suit your organisation and you’ll increase its value so much you’ll wonder how you ever managed without it.

In operation the product proved stable though perhaps a little on the slow side when switching between screens and particularly when starting up. This is a testimony to the fact that the product is doing a lot of work on your behalf and, to get the best from it, you really should give it a decent system to run on. The benefits you’ll gain will more than make up for the investment.

  • Hits: 18542

GFI OneConnect – Stop Ransomware, Malware, Viruses, and Email hacks Before They Reach Your Exchange Server

gfi-oneconnect-ransomware-malware-virus-datacenter-protection-1a

GFI Software has just revealed GFI OneConnect Beta – its latest advanced email security protection product. GFI OneConnect is a comprehensive solution that targets the safe and continuous delivery of business emails to organizations around the world.

GFI has leveraged its years of experience with millions of business users around the globe to create a unique hybrid solution, consisting of an on-premises server and a cloud-based service, that helps IT admins and organizations protect their infrastructure from spam, malware, ransomware, viruses and email service outages.

GFI OneConnect not only takes care of filtering all incoming email for your Exchange server, but also works as a backup service in case your Exchange server or cluster goes offline.

The solution consists of the GFI OneConnect Server that is installed on the customer’s premises. The OneConnect server connects to the local Exchange server on one side, and the GFI OneConnect Data Center on the other side as shown in the diagram below:

Deployment model of GFI OneConnect (Server & Data Center)

Figure 1. Deployment model of GFI OneConnect (Server & Data Center)

Email sent to the organization’s domain is routed initially through GFI OneConnect. During this phase email is scanned by two antivirus engines (ClamAV & Kaspersky) for viruses, ransomware, malware, etc. before being forwarded to the Exchange server.

If the Exchange server is offline, GFI OneConnect’s Continuity mode will send and receive all emails until the Exchange server is back online, after which all emails are automatically synchronised. Emails received while your email server was down remain available to users at any moment, thanks to the connection to the cloud-based GFI OneConnect Data Center.


Figure 2. GFI OneConnect Admin Dashboard (click to enlarge)

While the product is currently in beta, our first impressions show that this is an extremely promising solution that has been carefully designed with customers and IT staff in mind. According to GFI, the best is yet to come, and we know that GFI always stands by its promises, so we are really looking forward to seeing the final version of this product in early 2017.

If you’ve been experiencing issues with your Exchange server continuity, or have problems dealing with massive amounts of spam, ransomware and other security threats, give GFI OneConnect Beta a test run and discover how it can help offload these problems permanently, leaving you time for other, more important tasks.

  • Hits: 9700

Enforcing ICT Policies - How to Block Illegal & Unwanted Websites from your Users and Guests

Enforcing ICT Policies - How to Block Illegal & Unwanted Websites for your Users and Guests

Ensuring users follow company policies when accessing the internet has become a real challenge for businesses and IT staff. The legal implications for businesses that do not take measures to enforce acceptable use policies (where possible) can become very complicated, and businesses can in fact be held liable for damages caused by their users or guests.

A good example, found in almost every business around the world, is the offering of guest internet access to visitors. While they are usually unaware of the company’s ICT policies (nor do they really care about them) they are provided with free unrestricted access to the internet.

Sure, the firewall will only allow DNS, HTTP and HTTPS traffic in an attempt to limit internet access and its abuse but who’s ensuring they are not accessing illegal sites/content such as pornography, gambling, etc., which are in direct violation of the ICT policy?

This is where solutions like GFI WebMonitor help businesses cover this sensitive area, quickly filtering website categories in a simple and effective way that makes it easy for anyone to add or remove specific website categories or URLs.

How To Block Legal Liability Sites

Enforcing your ICT Internet Usage Policy via WebMonitor is a very simple and fast process. From the WebMonitor web-based dashboard, click on Manage and select Policies:

Note: Click on any image to enlarge it and view it in high-resolution

Figure 1. Adding a new Policy in GFI WebMonitor

At the next screen, click on Add Policy:

Figure 2. Click on the GFI WebMonitor Add Policy button

At the next screen, add the desired Policy Name and a brief description below it:

Figure 3. Creating the Web Policy in GFI WebMonitor using the WEBSITE element

Now click and drag the WEBSITES element (on the left) into the center of the screen as shown above.

Next, configure the policy to Block traffic matching the filters we are about to create and optionally enable temporary access from users if you wish:

Figure 4. Selecting Website Categories to be blocked and actions to be taken

Under the Categories section click inside the Insert a Site Category field to reveal a drop-down list of the different categories. Select a category by clicking on it and then click on the ‘+’ symbol to add the category to this policy. Optionally you can click on the small square icon next to the ‘+’ symbol to get a pop-up window with all the categories.

Optionally select to enable full URL logging and then click on the Save button at the top right corner to save and enable the policy.

The new policy will now appear on the Policies dashboard:

Figure 5. Our new WebMonitor policy is now active

If for any reason you need to disable the policy, simply click the green power button on the left and the policy is disabled immediately. This is a very handy feature that allows administrators to take immediate action when they notice unwanted effects from new policies.

After enabling the policy, we tried accessing a gambling website from one of our workstations and received the following message in our web browser:

Figure 6. Our new policy blocks users from accessing gambling sites

The GFI WebMonitor Dashboard reporting Blocking/Warning hits on the company’s policies:

Figure 7. GFI WebMonitor reports our Internet usage ICT Policy is being hit (click for full dashboard image)

Summary

The importance of properly enforcing an ICT Internet Usage Policy cannot be overstated. It can save the company from legal implications, and also protect its users and guests from their very own actions. Solutions such as GFI WebMonitor are designed to help businesses effectively apply ICT Policies and control usage of high-risk resources such as the internet.

  • Hits: 12636

Minimise Internet Security Threats, Scan & Block Malicious Content, Application Visibility and Internet Usage Reporting for Businesses

For every business, established or emerging, the Internet is an essential and indispensable tool. Its usefulness, however, can be undermined by abuse from a business’s employees or guests. Activities such as downloading or sharing illegal content, visiting high-risk websites and accessing malicious content are serious security risks for any business.

There is a very easy way of monitoring, managing and implementing effective Internet usage. GFI WebMonitor not only provides all of the above, but also delivers real-time web usage data, allowing bandwidth utilisation and traffic patterns to be tracked and presented on an interactive dashboard. It is also an effective management tool, providing a business with the internet usage records of its employees.

Such reports can be highly customised to provide usage information based on the following criteria/categories:

  • Most visited sites
  • Most commonly searched phrases
  • Where most bandwidth is being consumed
  • Web application visibility

Social media and instant messaging are common sources of web abuse and can be a significant time sink for employees (unless the business operates at a level where these tools are deemed necessary). Such websites can be blocked.

GFI WebMonitor can also achieve other protective layers for the business by providing the ability to scan and block malicious content. WebMonitor helps the business keep a close eye on its employees’ internet usage and browsing habits, and provides an additional layer of security.

On its main dashboard, as shown below, the different elements help in managing usage and traffic source and targets:

Figure 1. WebMonitor’s Dashboard provides in-depth internet usage and reporting

WebMonitor’s main dashboard contains a healthy amount of information allowing administrators and IT managers to obtain important information such as:

  • The number of Malicious Sites blocked and infected files detected
  • The Top 5 Users by bandwidth
  • Bandwidth Trends such as Download/Upload, Throughput and Latency
  • The number of currently active web sessions
  • The Top 5 internet categories of sites visited by users
  • The Top 5 Web Applications used to access the internet

Knowing which applications are used to access the internet is very important to any business. Web applications like YouTube, Bittorrent, etc. can be clearly identified and blocked, providing IT managers and administrators a ringside view of web utilisation.

On the flip side, if a certain application or website is blocked and a user tries to access it, he/she will encounter an Access Denied page rendered by GFI WebMonitor. This notification should be enough for the user to be deterred from trying it again:

Figure 2. WebMonitor effectively blocks malicious websites while notifying users trying to access them

For the purpose of this article, a deliberate attempt was made to download an ISO file using BitTorrent. The download page fell under the block policy, so GFI WebMonitor not only blocked the user from accessing the file, it also displayed the violation, stating the user’s machine IP address and the policy that was violated. This is a clear demonstration of how effective web application management can be.

Other useful dashboards include Bandwidth Insight. The following image shows the total download and upload for a specific period; projected values and peaks can be easily traced as well.

Figure 3. WebMonitor’s Bandwidth graphs help monitor the organisation’s upload/download traffic (click to enlarge)

The Activity dashboard is also useful, providing information about total users, their web requests, and a projection for the next 30 days, as shown in the following image:

Figure 4. WebMonitor allows detailed tracking of current and projected user web requests with very high accuracy (click to enlarge)

The Security dashboard is perhaps one of the most important. It shows all breaches by category and type, along with the top blocked web-based applications featured in policy violations.

Figure 5. The Security dashboard allows tracking of web security incidents and security policy violations (click to enlarge)

Running Web Reports

The easiest way to manage and produce the information gathered is to run reports. The various categories provided allow the user to run and view information of Internet usage depending on management requirements. The following image shows the different options available on the left panel:

Figure 6. WebMonitor internet web usage reports are highly customisable and provide detailed information (click to enlarge)

But often management would rather take a pulse of the current situation. GFI WebMonitor caters to that requirement very well. The best place to look for instant information regarding certain key aspects of resource usage is the Web Insights section.

If management wanted to review the bandwidth information, the following dashboard would give that information readily:

Figure 7. The Web Insight section keeps an overall track of internet usage (click to enlarge)

This provides a percentage view of how much data contributes to download or upload.

Security Insights shows all current activities and concerns that need attention:

Figure 8. WebMonitor Security Insights dashboard displaying important web security reports (click to enlarge)

Conclusion

There is no doubt GFI WebMonitor is a very effective tool that allows businesses to monitor and control internet access for employees, guests and other internet users. Its intuitive interface allows administrators and IT Managers to quickly obtain the information they require and put the necessary security policies in place to minimise security threats and internet resource abuse.

  • Hits: 13365

Increase your Enterprise or SMB Organization Security via Internet Application & User Control. Limit Threats and Internet Abuse at the Workplace

In this era of constantly pushing for more productivity and greater efficiency, it is essential that every resource devoted to web access within a business is utilised for business benefit. Unless the company concerned is in the business of gaming or social media, it is unwise to use resources like internet/web access, and the infrastructure supporting it, for purposes other than business. Like they say, “Nothing personal, just business.”

With this in mind, IT administrators have their hands full ensuring management of web applications and their communication with the Internet. The cost of not ensuring this is loss of productivity, misuse of bandwidth and potential security breaches. As a business it is prudent to block any unproductive web application e.g. gaming, social media etc. and restrict or strictly monitor file sharing to mitigate information leakages.

It is widely accepted that firewalls are of little use in this area. Port blocking is not the preferred solution, as it has the effect of a sledgehammer where the finesse of a scalpel is required to separate business usage from personal usage and manage the business requirements accordingly. To manage web applications at this level, it is essential to identify and associate each request with its respective web application. Anything in line with business applications goes through; the rest is blocked.

This is where WebMonitor excels in delivering this level of precision and efficiency. It identifies access requests from supported applications using inspection technology and helps IT administrators allow or block them. Administrators can thus allow certain applications for certain departments while blocking others as part of a blanket ban, enhancing the browsing experience of all users.

So, to achieve this, the process is to use the unified policy system of WebMonitor. The policies can be configured specifically for application control or, within the same policy, several application controls can be combined using other filtering technologies.

Let’s take a look at the policy panel of WebMonitor:

Figure 1. WebMonitor Policy Panel interface. Add, delete, create internet access policies with ease (click to enlarge)

To discover the controls available for a certain application, the application needs to be dragged into the panel. For example, to create a policy to block Google Drive, we would drag that application into the panel itself.

Once the related controls show up, we can select an application or application category the policy will apply to.

The rest of the configuration from this point will allow creating definitions for the following:

  • Filter options
  • Scope of the policy
  • Actions to be taken
  • Handling of exceptions
  • Managing notifications

All of the above are ready to be implemented in a ‘drag – and – drop’ method. GFI WebMonitor will commence controlling access of the configured application to the Internet the moment the policy is saved.

So, going back to the example of creating the ‘block Google Drive’ policy, the steps are quite simple:

1. Click on ‘Add Policy’ as shown in the following image:

Figure 2. Click on the “Add Policy” button to begin creating a policy to block internet access

2. Enter a Name and description in the relevant fields:

Figure 3. Adding policy name and description in WebMonitor to block an application network-wide (click to enlarge)

3. As this policy applies to ‘all’, there is no need to configure the scope at this point. Scope can be set on a per-user, per-group or per-IP-address basis.

4. Drag in the Application Block from the left panel (as shown in the following image) and select ‘Block’ in the ‘Allow, Block, Warn, Monitor’ section.

5. In the Application Category section, select ‘File Transfer’ as shown in the image below:

Figure 4. WebMonitor: Blocking the File Transfer application category from the internet (click to enlarge)

6. Click on the ‘Applications’ tab and start typing ‘Google Drive’ in the field. The drop-down list will include Google Drive. Select it, press Enter, and the application will be added. Now click on Save.

Keep in mind that the policy becomes operational the moment the Save button, located at the top right corner, is clicked.

Now if any user tries to access the Google Drive web application, he/she will be presented with the ‘Block Page’ rendered by GFI WebMonitor. At the same time, any Google Drive thick client installed on the user’s machine will not be able to connect to the Internet.

As mentioned earlier, and reiterated through the above steps, the process of creating and implementing a web access management policy in WebMonitor is quite simple. Given the breadth of configuration options within applications and scope, this proves to be a very powerful tool that makes managing and ensuring proper usage of web access simple and effective for IT administrators in small and large enterprise networks.

  • Hits: 11877

GFI WebMonitor Installation: Gateway / Proxy Mode, Upgrades, Supported O/S & Architectures (32/64bit)

WebMonitor is an award-winning gateway monitoring and internet access control solution designed to help organizations manage user internet traffic, monitor and control bandwidth consumption, protect computers from internet malware/viruses and other internet-based threats, plus much more. GFI WebMonitor supports two different installation modes: Gateway mode and Simple Proxy mode. We’ll look into each mode to help administrators and engineers understand which is best, along with the prerequisites and caveats of each.

Proxy vs Gateway Mode

Proxy mode, also named Simple Proxy mode, is the simplest way to install GFI WebMonitor. You can deploy it on any computer that has access to the internet. In Simple Proxy mode, all client web-browser traffic (HTTP/HTTPS) is directed through GFI WebMonitor. To enable this type of setup, you will need an internet-facing router that can forward traffic and block ports.

With GFI WebMonitor functioning in Simple Proxy mode, each client machine must also be configured to use the server as a web proxy for the HTTP and HTTPS protocols. GFI WebMonitor comes with built-in Web Proxy Auto-Discovery (WPAD) server functionality that makes this easy: simply enable automatic discovery of the proxy server on each client machine and it should automatically find and use WebMonitor as a proxy. In a domain environment, it is best to regulate this setting using a Group Policy Object (GPO).
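
For reference, WPAD works by serving clients a small proxy auto-configuration (PAC) script, conventionally named wpad.dat, whose FindProxyForURL() function the browser calls for every request. A minimal sketch might look like the following; the proxy address 192.168.1.10:8080 is an assumption for illustration — substitute your WebMonitor server’s address and listening port:

```javascript
// Minimal wpad.dat (PAC) sketch. Browsers call FindProxyForURL() per request.
// Assumption: WebMonitor listens at 192.168.1.10:8080 -- substitute your own.
function FindProxyForURL(url, host) {
    // Plain (dot-less) intranet hostnames and localhost bypass the proxy.
    if (host === "localhost" || host.indexOf(".") === -1) {
        return "DIRECT";
    }
    // Everything else is sent through the WebMonitor proxy.
    return "PROXY 192.168.1.10:8080";
}
```

WebMonitor generates and serves this kind of file for you when WPAD is enabled; the sketch simply shows what the clients end up consuming.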

When WebMonitor is configured to function in Internet Gateway mode, all inbound and outbound client traffic will pass through GFI WebMonitor, irrespective of whether the traffic is HTTP or non-HTTP. With Internet Gateway mode, the client browser does not need to point to any specific proxy – all that’s required is to enable the Transparent Proxy function in GFI WebMonitor.

Supported OS & Architectures

Whether functioning as a gateway or a web proxy, GFI WebMonitor processes all web traffic, so smooth operation requires server hardware capable of handling every request, every day. For a small environment (10-20 nodes), a 2 GHz processor and 4 GB RAM minimum with a 32-bit Windows operating system will suffice.

Larger environments, such as those running a Windows Server operating system, will require the 64-bit architecture with a minimum of 8 GB RAM and a multi-core CPU. GFI WebMonitor works with both 32-bit and 64-bit Windows operating system architectures starting from Windows Server 2003 and Windows Vista.

Installation & Upgrading

When installing for the first time, GFI WebMonitor starts by detecting its prerequisites. If the business is already using GFI WebMonitor, the process determines the prerequisites according to the older product instance. If the installation kit encounters an older instance, it imports the previous settings and redeploys them after completing the installation.

Whether installing for the first time or upgrading an older installation, the installation kit looks for any setup prerequisites necessary and installs them automatically. However, some prerequisites may require user interaction and these will come up as separate installation processes with their own user interfaces.

Installing GFI WebMonitor

As with all GFI products, installation is a very easy follow-the-bouncing-ball process. Once the download of GFI WebMonitor is complete, execute the installer using an account with administrative privileges.

If WebMonitor has been recently downloaded, you can safely skip the newer build check. When ready, click Next to proceed:

gfi-webmonitor-installation-setup-gateway-proxy-mode-1

Figure 1. Optional check for a new WebMonitor edition during installation

You will need to fill in the username and/or the IP address that will have administrative access to the web-interface of GFI WebMonitor, then click Next to select the folder to install GFI WebMonitor and finally start the installation process:

gfi-webmonitor-installation-setup-gateway-proxy-mode-2

Figure 2. Selecting Host and Username that are allowed to access the WebMonitor Administration interface.

Once the installation process is complete, click Finish to finalize the setup and leave the Open Management Console checked:

gfi-webmonitor-installation-setup-gateway-proxy-mode-3

Figure 3. Installation complete – Open Management Console

After this, the welcome screen of the GFI WebMonitor Configuration Wizard appears. This will allow you to configure the server to operate in Simple Proxy Mode or Gateway Mode. At this point, it is recommended you enable JavaScript in Internet Explorer or the web browser of your choice before proceeding further:

Figure 4. The welcome screen once WebMonitor installation has completed

After clicking on Get Started to proceed, we need to select which of the two modes GFI WebMonitor will be using. We selected Gateway mode to ensure we get the most out of the product as all internet traffic will flow through our server and provide us with greater granularity & control:

Figure 5. Selecting between Simple Proxy and Gateway mode

The Transparent Proxy can be enabled at this stage, allowing web browser clients to automatically configure themselves using the WPAD protocol. WebMonitor shows a simple network diagram to help understand how network traffic will flow to and from the internet:

Figure 6. Internet traffic flow in WebMonitor’s Gateway Mode

Administrators can select the port at which the Transparent Proxy will function and then click Save and Test Transparent Proxy. GFI WebMonitor will confirm Transparent Proxy is working properly.

Now, click Next to see your trial license key or enter a new license key. Click on Next to enable HTTPS scanning.

HTTPS Scanning gives you visibility into secure surfing sessions that can threaten the network's security. Malicious content may be included in sites visited or files downloaded over HTTPS. The HTTPS filtering mechanism within GFI WebMonitor enables you to scan this traffic. There are two ways to configure HTTPS Proxy Scanning Settings: via the integrated HTTPS Scanning Wizard, or manually.

Thanks to GFI WebMonitor’s flexibility, administrators can add any HTTPS site to the HTTPS scanning exclusion list so that it bypasses inspection.

If HTTPS Scanning is disabled, GFI WebMonitor enables users to browse HTTPS websites without decrypting and inspecting their contents.

When ready, click Next again and provide the full path of the database. Click Next again to enter and validate the Admin username and password. Then, click Next to restart the services. You can now enter your email details and click Finish to end the installation.

Figure 7. GFI WebMonitor’s main control panel

Once the installation and initial configuration of GFI WebMonitor is complete, the system will begin gathering useful information on our users’ internet usage.

In this article we examined WebMonitor Simple Proxy and Gateway installation mode and saw the benefits of each method. We proceeded with the Gateway mode to provide us with greater flexibility, granularity and reporting of our users’ internet usage. The next articles will continue covering in-depth functionality and reporting of GFI’s WebMonitor.

  • Hits: 16236

GFI WebMonitor: Monitor & Secure User Internet Activity, Stop Illegal File Sharing - Downloads (Torrents), Web Content Filtering For Organizations

In our previous article we analysed the risks and implications for businesses that have no security or restriction policies and systems in place to stop users distributing illegal content (torrents), along with unauthorized access to company systems, sharing of sensitive company information and more. This article covers how specialized systems such as WebMonitor can help businesses stop torrent applications from accessing the internet, control the websites users access, block remote control software (TeamViewer, Remote Desktop, Ammyy Admin etc.) and put a stop to users wasting bandwidth, time and company money while at work.

WebMonitor is more than just an application. It can help IT departments design and enforce internet security policies by blocking or allowing specific applications and services accessing the internet.

WebMonitor is also capable of providing detailed reports of users’ web activity – a useful feature that ensures users are not accessing online resources they shouldn’t, and provides the business with the ability to check users’ activities in case of an attack, malware or security incident.

WebMonitor is not a new product - it carries over a decade of development and has served millions of users since its introduction into the IT market. With awards from popular IT security magazines, Security Experts, IT websites and more, it’s the preferred solution when it comes to a complete web filtering and security monitoring solution.

Blocking Unwanted Applications: Application Control – Not Port Control

Senior IT managers, engineers and administrators surely remember the days when controlling TCP/UDP ports at the firewall level was enough to block or grant applications access to the internet. For some years now, this has no longer been a valid means of application control, as most ‘unwanted’ applications can smartly use common ports such as HTTP (80) or HTTPS (443) to circumvent security policies, pass inspection and freely access the internet.

To effectively block unwanted applications, businesses must realize it is necessary to have a security gateway device that can correctly identify the applications requesting access to the internet, regardless of the port they are trying to use – aka Application Control.

Application Control is a sophisticated technique that requires upper layer (OSI Model) inspection of data packets as they flow through the gateway or proxy, e.g. GFI WebMonitor. The gateway/proxy executes deep packet level inspection to identify the application requesting access to the internet.

In order to correctly identify the application the gateway must be aware of it, which means it has to be listed in its local database.
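
To illustrate the idea (a simplified sketch, not WebMonitor’s proprietary engine), a gateway can classify a plain-HTTP request by inspecting its payload – here, the Host header – instead of its TCP port. The signature map below is a small hypothetical sample:

```python
# Sketch: identify an application from packet payload rather than port number.
# The signature map is illustrative -- a real engine keeps a much larger,
# regularly updated database of application signatures.

APP_SIGNATURES = {
    b"drive.google.com": "Google Drive",
    b".dropbox.com": "Dropbox",
    b"teamviewer.com": "TeamViewer",
}

def identify_app(raw_request: bytes) -> str:
    """Extract the Host header from a plain-HTTP request and match it
    against known application domains, ignoring the TCP port entirely."""
    for line in raw_request.split(b"\r\n"):
        if line.lower().startswith(b"host:"):
            host = line.split(b":", 1)[1].strip().lower()
            for needle, app in APP_SIGNATURES.items():
                if host.endswith(needle):
                    return app
    return "unknown"
```

The same request classifies identically whether it arrives on port 80, 443 or 8080 – which is the whole point: the decision is driven by the payload, not the port.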

The Practical Benefits Of Internet Application Control & Web Monitoring Solution

Let’s take a more practical look at the benefits an organization has when implementing an Application Control & Web Monitoring solution:

  • Block file sharing applications such as Torrents
  • Stop users distributing illegal content (games, applications, movies, music, etc)
  • Block remote access applications such as TeamViewer, Remote Desktop, VNC, Ammyy Admin and more.
  • Stop unauthorized access to the organization’s systems via remote access applications
  • Block access to online storage services such as DropBox, Google Drive, Hubic and others
  • Avoid users sharing sensitive information such as company documents via online storage services
  • Save valuable bandwidth for the organization, its users, remote branches and VPN users
  • Protect the network from malware, viruses and other harmful software downloadable via the internet
  • Properly enforce different security policies to different users and groups
  • Protect against possible security breaches and minimize responsibility in case of an infringement incident
  • And much more

The above list contains a few of the major benefits that solutions such as WebMonitor can offer to organizations.

Why Web Monitoring & Content Filtering is Considered Mandatory

Web monitoring is a very sensitive topic for many organizations and their users, mainly because users do not want others to know what they are doing on their computers. The majority of users perceive web monitoring as spying – checking which sites they access and whether they are wasting time on websites and internet resources unrelated to work. However, users do not understand the problems and security risks that are most likely to arise if no monitoring or content filtering mechanism is in place.

In fact, the damage caused by users irresponsibly visiting high-risk sites and surfing the internet without any limits is far greater than most companies might think, and there are some good examples that prove this point. The FBI’s website has a page with examples of internet scams and risks from social media networking sites.

If we assume your organization is one of the lucky ones that hasn’t been hit (yet) by irresponsible user internet activity, then we can assure you it’s simply a matter of time.

Apart from the imminent security risk, users who have uncontrolled access are also wasting bandwidth – bandwidth the organization is paying for – and are likely to slow down the internet for everyone else legitimately trying to get work done. Where VPNs run over the same lines, VPN users, remote branches and mobile users are most likely to experience slow connection speeds when accessing the organization’s resources over the internet.

This problem becomes even more evident when asymmetrical WAN lines, such as ADSL, are in use. On an asymmetrical line, a single user who is uncontrollably uploading photos, movies (via torrent) or other content can affect all other users’ downloads, since bottlenecks easily occur when either of the two streams (downstream or upstream) is under heavy usage – a defining characteristic of asymmetrical WAN lines.

Finally, if there is an organization security policy in place it’s most likely to contain fair internet usage guidelines for users and specify what they can and cannot do using the organization’s internet resources. The only way to enforce such a policy is through a sophisticated web monitoring & policy enforcement mechanism such as GFI WebMonitor.

Summary

In this article we analysed how specialized web monitoring and control software, such as WebMonitor, is able to control which user applications can access the internet, control which websites users within an organization can access, and block internet content while saving valuable bandwidth. With such solutions, organizations can enforce their internet security policies while protecting themselves from unauthorized access to their systems (remote desktop software), stopping illegal activities such as torrent file sharing and more.

  • Hits: 15852

Dealing with User Copyright Infringement (Torrents), Data Loss Prevention (DLP), Unauthorized Remote Control Applications (Teamviewer, RDP) & Ransomware in the Business Environment

One of the largest problems faced by organizations of any size is effectively controlling user internet access (from laptops, mobile devices, workstations etc.), minimizing security threats for the organization (ransomware – data loss prevention), preventing user copyright infringement (torrent downloading/sharing of movies, games, music etc.) and discovering where valuable WAN-Internet bandwidth is being wasted.

Organizations clearly understand that using a Firewall is no longer adequate to control the websites its users are able to access, remote control applications (Teamviewer, Radmin, Ammyy Admin, Remote desktop etc), file sharing applications - Bittorrent clients (uTorrent, BitComet, Deluge, qBittorrent etc), online cloud storage services (Dropbox, OneDrive, Google Drive, Box, Amazon Cloud Drive, Hubic etc) and other services and applications.

The truth is that web monitoring applications such as GFI’s WebMonitor are a lot more than just a web proxy or internet monitoring solution.

Web monitoring applications are essential for any type or size of network as they offer many advantages:

  • They stop users from abusing internet resources
  • They block file-sharing applications and illegal content sharing
  • They stop users using cloud-based file services to upload sensitive documents, for example saving company files to their personal DropBox, Google Drive etc.
  • They stop remote control applications connecting to the internet (e.g. TeamViewer, Remote Desktop, Ammyy Admin etc.)
  • They ensure user productivity is kept high by allowing access to approved internet resources and sites
  • They eliminate referral ad sites and block abusive content
  • They support reputation blocking to automatically filter websites based on their reputation
  • They help IT departments enforce security policies to users and groups
  • They provide unbelievable flexibility allowing any type or size of organization to customise its internet usage policy to its requirements

The Risk In The Business Environment – Illegal Downloading

Most businesses are completely unaware of how serious these matters are and the risks they are taking while dealing with other ‘more important’ matters.

Companies such as the Motion Picture Association of America (MPAA) and the Recording Industry Association of America (RIAA) are in a continuous battle suing and fighting with companies, ISPs and even home users for illegally distributing movies and music.

Many users are aware of this and are now turning to their company’s internet resources, which in many cases offer faster and unlimited data transfer, to download their illegal content such as movies, games, music and other material.

An employer or business can be easily held responsible for the actions of its employees when it comes to illegal download activities, especially if no policies or systems are in place.

In the case of an investigation, if the necessary security policies and web monitoring systems are in place to prevent copyright infringement and illegal downloading, a business is far less exposed to the legal implications of its users' actions, and it is also able to track down the person responsible.

Data Loss Prevention (DLP) – Stop Users From Uploading Sensitive/Critical Documents

While illegal downloading is one major threat to businesses, stopping users from sharing company data and sensitive information (known as Data Loss Prevention, or DLP) is another big problem.

With the explosion of (free) cloud-based storage services such as Dropbox, OneDrive, Google Drive and others, users can quickly and easily upload any type of document directly from their workplace to their personal cloud storage and instantaneously share it with anyone in the world, without the company's consent or knowledge.

The smartly designed cloud-storage applications are able to use HTTP & HTTPS to transfer files, and circumvent firewall security policies and other types of protection.

More specialised application proxies such as GFI WebMonitor can effectively detect and block these applications, saving businesses from major security breaches and damages.

Block Unauthorized Remote Control Applications (TeamViewer, Ammy Admin, Remote Desktop, VNC etc) & Ransomware

Remote control applications such as TeamViewer, Ammyy Admin, Remote Desktop and others have been causing major security issues in organizations around the world. In most cases, users run these clients so they can remotely access and control their workstation from home, continuing their “downloads”, transferring files to their home PC and performing other unauthorized activities.

In other cases, these remote applications become targets for pirates and hackers, who try to hijack sessions that have been left running by users.

Ransomware is a type of threat where, through an application running on a user's workstation, attackers gain access and encrypt the files found on the computer, and even on network drives and shares within the company.

In late 2015, the popular remote control software Ammyy Admin was injected with malicious code, and unsuspecting home and corporate users downloaded and ran the free software. Infected with at least five different malware variants, their PCs gave attackers full access and control. Some of the malware facilitated stealing banking details, encrypting user files and demanding money to decrypt them, and more.

In another case during 2015, attackers began installing ransomware on computers running Remote Desktop Services. The attackers obtained access via brute-force attacks and then installed malware that scanned for specific file extensions. A ransom of $1,000 USD was demanded to have the files decrypted.

Blocking these types of applications is a major issue for companies, as users make uncontrolled use of them without realizing they are putting their company at serious risk.

Use of such applications should be heavily monitored and restricted because they pose a significant threat to businesses.

GFI WebMonitor's extensive application list gives it the ability to detect and effectively block these and many other similar applications, putting an end to this major security threat.

Summary

The internet today is certainly not a safe place for users or organizations. Security threats resulting from users downloading and distributing illegal content, sharing sensitive company information and uncontrollably accessing their systems from home or other locations, along with the potential hazard of attackers gaining access to internal systems via RDP programs, are real. Avoid getting your company caught with its pants down and seek ways to tighten and enforce security policies that will help protect it from these ever-present threats.

  • Hits: 10590

Automate Software Deployment with the Help of GFI LanGuard. Quick & Easy Software Installation on all PCs – Workstations & Servers

Deploying a single application to hundreds of workstations or servers can be a very difficult and time-consuming task. Thankfully, remote deployment of software and applications is a feature offered by GFI LanGuard. With Remote Software Deployment, we can automate the installation of pretty much any software to any number of computers on the network, including Windows servers (2003, 2008, 2012), Domain Controllers, Windows workstations and others.

In this article we’ll show how easy it is to deploy any custom software using GFI LanGuard. For our demonstration purposes, we’ll deploy Mozilla Firefox to a Windows server.

To begin configuring the deployment, select the Remediate tab from GFI LanGuard, then select the Deploy Custom Software option as shown below:

Preparing the network-wide deployment of Mozilla Firefox through GFI LanGuard

Figure 1. Preparing the network-wide deployment of Mozilla Firefox through GFI LanGuard

Next, select the target machine from the left panel. We can select one or multiple targets using the CTRL key. For our demonstration, we selected the DCSERVER which is a Windows 2003 server.

Now, from the Deploy Custom Software section, click on Add to select the software to be deployed. This will present the Add Custom Software window where we can select the path to the installation file. GFI LanGuard also provides the ability to run the setup file with custom parameters – this handy feature allows the execution of silent installations (no window/prompt shown on the target machine's desktop), if supported by the application to be installed. Mozilla Firefox supports silent installation using the '-ms' parameter:

GFI LanGuard custom software deployment using a parameter for silent installation Figure 2. GFI LanGuard custom software deployment using a parameter for silent installation

When done, click on the Add button to return back to the main screen where GFI LanGuard will display the target computer(s) & software selected, plus installation parameters:

GFI LanGuard ready to deploy Mozilla Firefox on a Windows Server

Figure 3. GFI LanGuard ready to deploy Mozilla Firefox on a Windows Server

Clicking on the Deploy button brings up the final window where we can either initiate the deployment immediately or schedule it for a later time. From here, we can also insert any necessary credentials but also select to notify the remote user, force a reboot after the installation and many other useful options:

Final configuration options for remote deployment of Mozilla Firefox via GFI LanGuard

Figure 4. Final configuration options for remote deployment of Mozilla Firefox via GFI LanGuard

GFI LanGuard’s remote software deployment is so sophisticated that it even allows the configuration of the number of threads that will be executed on the remote computer (under the Advanced options link), helping ensure minimum impact for the user working on the remote system.

Once complete, click on OK to proceed with the remote deployment. LanGuard will then return back to the Remediation window and provide real-time update of the installation process, along with a detailed log below:

GFI LanGuard Remote software deployment of Mozilla Firefox complete

Figure 5. GFI LanGuard Remote software deployment of Mozilla Firefox complete

Installation of Mozilla Firefox was incredibly fast and, to our surprise, the impact on the remote host was undetectable. We didn't actually realise the installation was taking place until the Firefox icon appeared on the desktop. The CPU history also confirms there was no additional load on the server:

Successful installation of Mozilla Firefox, without any system performance impact!

Figure 6. Successful installation of Mozilla Firefox, without any system performance impact!

GFI LanGuard's software deployment feature is truly impressive. It not only provides network administrators with the ability to deploy software to any machine on their network, but also gives complete control over the way the software is deployed and the resources used on the remote computer during the installation. Additional options such as scheduling the deployment, custom user messages before or after the installation, remote reboot and many more make GFI LanGuard a necessary tool for any organization.

  • Hits: 12241

How to Manually Deploy – Install GFI LanGuard Agent When Access is Denied By Remote Host (Server – Workstation)

When IT administrators and managers are faced with the continuous failure of GFI LanGuard Agent deployment (e.g. 'Access is denied'), it is best to switch to manual installation in order to save valuable time and resources. The failure can be caused by incorrect credentials, a disabled account, firewall settings, disabled remote access on the target computer and more. Deploying GFI LanGuard Agents is the best way to scan your network for unpatched machines or machines with critical vulnerabilities.

GFI LanGuard Agent deployment failing with Access is denied

Figure 1. GFI LanGuard Agent deployment failing with Access is denied

Users interested can also check our article Benefits of Deploying GFI LanGuard Agents on Workstations & Servers. Automate Network-wide Agent Scanning and Deployment.

Step 1 – Locate Agent Package On GFI LanGuard Server

The GFI LanGuard Agent installation file is located in one of the following directories, depending on your operating system:

  • For 32bit operating systems: c:\Program Files\GFI\LanGuard 11\Agent\
  • For 64bit operating systems: c:\Program Files (x86)\GFI\LanGuard 11\Agent\

The location of GFI LanGuard Agent on our 64bit O/S.

Figure 2. The location of GFI LanGuard Agent on our 64bit O/S.

Step 2 – Copy The File To The Target Machine & Install

Once the file is copied to the target machine, execute it using the following single line command prompt:

c:\LanGuard11agent.msi /qn GFIINSTALLID="InstallationID" /norestart /L*v "%temp%\LANSS_v11_AgentKitLog.csv"

Note: InstallationID is an ID that can be found in the crmiini.xml file located on the GFI LanGuard server directory for 32bit O/S: c:\Program Files\GFI\LanGuard 11 Agent  or c:\Program Files (x86)\GFI\LanGuard 11 Agent for 64bit O/S.

Following is a screenshot of the contents of our crmiini.xml file where the installation ID is clearly shown:

Installation ID in crmiini.xml file on our GFI LanGuard Server

Figure 3. Installation ID in crmiini.xml file on our GFI LanGuard Server
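If you are scripting the manual deployment, the installation ID can be pulled out of crmiini.xml automatically. Below is a minimal sketch for a Unix-like shell (e.g. Git Bash); the sample file contents and element name are hypothetical – the sketch relies only on the ID having the standard GUID format (8-4-4-4-12 hex digits):

```shell
# Create a hypothetical sample of crmiini.xml for demonstration purposes;
# the real element names on your GFI LanGuard server may differ.
cat > /tmp/crmiini.xml <<'EOF'
<crmi><InstallationID>e86cb1c1-e555-40ed-a6d8-01564bdb969e</InstallationID></crmi>
EOF

# Extract the first GUID-shaped string (8-4-4-4-12 hex digits) from the file.
INSTALL_ID=$(grep -oE '[0-9a-fA-F]{8}(-[0-9a-fA-F]{4}){3}-[0-9a-fA-F]{12}' /tmp/crmiini.xml | head -n1)
echo "$INSTALL_ID"    # prints: e86cb1c1-e555-40ed-a6d8-01564bdb969e
```

The extracted value can then be substituted into the GFIINSTALLID parameter of the installation command.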

With this information, the final command line (DOS) for the installation of the Agent will be as follows:

LanGuard11agent.msi /qn GFIINSTALLID="e86cb1c1-e555-40ed-a6d8-01564bdb969e" /norestart /L*v "%temp%\LANSS_v11_AgentKitLog.csv"

Note: Make sure the command prompt is run with Administrator Privileges (Run as Administrator), to ensure you do not have any problems with the installation.

Here is a screenshot of the whole command executed:

Successfully Installing GFI LanGuard Agent On Workstations & Servers

Figure 4. Successfully Installing GFI LanGuard Agent On Workstations & Servers

Notice that the installation is a ‘silent install’ and will not present any message or prompt the user for a reboot. This makes it ideal for quick deployments where no reboot and minimum user interruption is required.

A restart will be necessary to complete the Agent initialization.

Important Notes

After completing the manual installation of the GFI LanGuard Agent, it is necessary to also deploy the Agent remotely from the GFI LanGuard console; otherwise the GFI LanGuard server will not be aware of the Agent manually installed on the remote host.

Also, it is necessary to deploy at least one Agent remotely via the GFI LanGuard server console before attempting the manual deployment, in order to initially populate the crmiini.xml file with the installation ID parameters.

This article covered the manual deployment of GFI’s LanGuard Agent on Windows-based machines. We took a look at common reasons why remote deployment of the Agent might fail, and covered step-by-step the manual installation process and prerequisites to ensure the Agent is able to connect to the GFI LanGuard server.

  • Hits: 22453

Benefits of Deploying GFI LanGuard Agents on Workstations & Servers. Automate Network-wide Agent Scanning & Deployment

GFI LanGuard Agents are designed to be deployed on local (network) or remote servers and workstations. Once installed, the GFI LanGuard Agents can be configured via LanGuard's main server console, giving the administrator full control over when the Agents scan the host they are installed on and communicate their status to the GFI LanGuard server.

Those concerned about system resources will be pleased to know that the GFI LanGuard Agent does not consume any CPU cycles or resources while idle. During scanning, which runs once a day for a few minutes, the scan process is kept at a low priority to ensure it does not interfere with or impact the host's performance.

GFI LanGuard Agents communicate with the GFI LanGuard server using TCP port 1070, however this can be configured.
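Before a deployment, it can save time to verify that the Agent port is actually reachable. A quick sketch using bash's built-in /dev/tcp redirection (the host below is a placeholder – substitute the machine you want to test, and the port if you have reconfigured it):

```shell
HOST=127.0.0.1    # placeholder: the host whose Agent port you want to test
PORT=1070         # default GFI LanGuard Agent communication port

# Attempt a TCP connection with a 3-second timeout via bash's /dev/tcp.
if timeout 3 bash -c "exec 3<>/dev/tcp/$HOST/$PORT" 2>/dev/null; then
    echo "port $PORT open on $HOST"
else
    echo "port $PORT closed or filtered on $HOST"
fi
```

A "closed or filtered" result usually points to a firewall blocking the connection.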

Let’s see how we can install the GFI LanGuard Agent from the server’s console.

First open GFI LanGuard and select Agents Management from the Configuration tab:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-1

Figure 1. Select Agents Management and the Deploy Agents

Next, you can choose between Local domain or Custom to define your target(s):

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-2

Figure 2. Defining Target rules for GFI LanGuard Agent deployment

Since we’ve selected Custom, we need to click on Add new rule to add our targets.

The targets can be defined via their Computer name (shown below), Domain name or Organization Unit:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-3

Figure 3. Defining our target hosts using their Computer name

When complete, click on OK to return to the previous window.

We now see all computer hosts selected:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-4

Figure 4. Viewing selected hosts for Agent deployment

The Advanced Settings option in the lower-left area of the window allows us to configure the automatic discovery of machines with Agents installed, set up the Audit schedule of the Agent (when it will scan its host and update the LanGuard server) and the Scan profile used by the Agent, plus an extremely handy feature called Auto Remediation, which enables GFI LanGuard to automatically download and install missing updates and service packs, uninstall unauthorized applications and more on the remote computers.

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-5

Figure 5. GFI LanGuard - Agent Advanced Settings – Audit Schedule tab

The screenshot below shows us the Auto Remediation tab settings:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-6

Figure 6. Agent Advanced Settings – Auto Remediation tab

When done, click on OK to save the selected settings and return back to the previous window.

Now click on Next to move to the next step. At this point, we need to enter the administrator credentials of the remote machine(s) so that GFI LanGuard can log into the remote machines and deploy the agent. Enter the username and password and hit Next and then Finish at the last window:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-7

Figure 7. Entering the necessary credentials for the Agent deployment

GFI LanGuard will now begin the deployment of its Agent to the selected remote hosts:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-8

Figure 8. GFI LanGuard preparing for the Agent deployment

After a while, the LanGuard Agent will report its installation status. Where successful, we will see the Installed message; otherwise a Pending install message will continue to be displayed, along with an error if the deployment was unsuccessful:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-9

Figure 9. LanGuard Agent installation status

Common problems preventing successful Agent deployment are incorrect credentials, firewall settings or insufficient user rights.

To check the status of the installed Agent, we can simply select the desired host, right-click and select Agent Diagnostic as shown below:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-10

Figure 10. Accessing GFI LanGuard Agent Diagnostics

The Agent Diagnostic window is an extremely helpful feature as it provides a great amount of information on the Agent and the remote host. In addition, at the end of the Diagnosis Activity window we'll find a zip file that contains all the presented information. This file can be emailed to GFI support in case of Agent problems:

gfi-languard-how-to-deploy-agent-network-wide-on-servers-workstations-11

Figure 11. Running the Agent Diagnostics report

The GFI LanGuard Agent is an extremely useful feature that allows the automatic monitoring, patching and updating of the host machine, leaving IT administrators and managers free to deal with other important tasks. Thanks to its Domain & Workgroup support, GFI LanGuard can handle any type and size of environment. If you haven't used it yet, download your copy of GFI LanGuard and give it a try – you'll be surprised how much valuable information you'll get on your systems' security & patching status and the time you'll save!

  • Hits: 14981

How to Configure Email Alerts in GFI LanGuard 2015 – Automating Alerts in GFI LanGuard

One of the most important features in any network security monitoring and patch management application such as GFI LanGuard is the ability to automate tasks, e.g. automatic network scanning, email alerts etc. This allows IT administrators, network engineers, IT managers and other IT department members to continue working on other important matters with the peace of mind that the security application is keeping things under control and will alert them instantly upon any change detected within the network, or in the vulnerability status of the monitored hosts.

GFI LanGuard’s email alerting feature can be easily accessed either from the main Dashboard where usually the Alerting Options notification warning appears at the bottom of the screen:

gfi-languard-configure-automated-email-alert-option-1

Figure 1. GFI LanGuard email alerting Option Notification

Or alternatively, by selecting Configuration from the main menu and then Alerting Options from the left side area below:

gfi-languard-configure-automated-email-alert-option-2

Figure 2. Accessing Alerting Options via the menu

Once in the Alerting Options section, simply click on the click here link to open the Alerting Options Properties window. Here, we enter the details of the email account that will be used, the recipients and the SMTP server details:

gfi-languard-configure-automated-email-alert-option-3

Figure 3. Entering email, recipient & smtp account details

Once the information has been correctly provided, we can click on the Verify Settings button and the system will send the recipients a test notification email. In case of an IT department, a group email address can be configured to ensure all members of the department receive alerts and notifications.

Finally, at the Notification tab we can enable and configure a daily report that will be sent at a specific time of the day and also select the report format. GFI LanGuard supports multiple formats such as PDF, HTML, MHT, RTF, XLS, XLSX & PNG.

gfi-languard-configure-automated-email-alert-option-4

Figure 4. GFI LanGuard Notification Window settings

When done, simply click on the OK button to return back to the Alerting Options window.

GFI LanGuard will now send an automated email alert on a daily basis whenever there are changes identified after a scan.

This article showed how GFI LanGuard, a network security scanner, vulnerability scanner and patch management application, can be configured to automatically send email alerts and reports on network changes after every scan.

  • Hits: 10795

How to Scan Your Network and Discover Unpatched, Vulnerable, High-Risk Servers or Workstations using GFI LanGuard 2015

This article shows how any IT administrator, network engineer or security auditor can quickly scan a network using GFI LanGuard and identify the different systems such as Windows, Linux, Android etc. More importantly, we'll show how to uncover vulnerable, unpatched or high-risk Windows systems including Windows Server 2003, Windows Server 2008, Windows Server 2012 R2, Domain Controllers, Linux servers such as RedHat Enterprise, CentOS, Ubuntu, Debian, openSUSE, Fedora, any type of Windows workstation (XP, Vista, 7, 8, 8.1, 10) and Apple OS X.

GFI LanGuard is a swiss-army knife that combines a network security tool, vulnerability scanner and patch management system in one package. Using the network scanning functionality, LanGuard will automatically scan the whole network and use the provided credentials to log into every located host and discover additional vulnerabilities.

To begin, we launch GFI LanGuard and at the startup screen, select the Scan Tab as shown below:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-1

Figure 1. Launching GFI LanGuard 2015

Next, in the Scan Target section, select Custom target properties (box with dots) and click on Add new rule. This will bring us to the final window where we can add any IP address range or CIDR subnet:

 

Figure 2. Adding your IP Network – Subnet to LanGuard for scanning

Now enter the IP address range you would like LanGuard to scan, e.g. 192.168.5.1 to 192.168.5.254, and click OK.
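Behind the scenes, a range like this simply expands into the individual addresses the scanner will probe. A tiny shell sketch of the same expansion, using the example range above:

```shell
# Expand the example range 192.168.5.1 - 192.168.5.254 into one target
# address per line -- the list a network scanner iterates over.
for i in $(seq 1 254); do
    echo "192.168.5.$i"
done | wc -l    # prints 254: the number of hosts probed in this range
```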

The new IP address range should now appear in the Custom target properties window:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-3

Figure 3. Custom target properties displays selected IP address range

Now click on OK to close the Custom target properties window and return back to the Scan area:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-4

Figure 4. Returning back to LanGuard’s Scan area

At this point, we can enter the credentials (username/password) to be used for remotely accessing the discovered hosts (e.g. domain administrator credentials are a great idea) and optionally click on Scan Options to reveal additional useful options for our scan, such as Credential Settings and Power saving options. Click on OK when done:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-5

Figure 5. Additional Scan Options in GFI’s LanGuard 2015

We can now hit Scan to begin the host discovery and scan process:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-6

Figure 6. Initiating the discovery process in GFI LanGuard 2015

GFI LanGuard will begin scanning the selected IP subnet and list all hosts found in the Scan Results Overview window area. As shown in the above screenshot, each host will be identified according to its operating system and will be assessed for open ports, vulnerabilities and missing operating system & application patches.

The full scan profile selected will force GFI LanGuard to run a complete detailed scan of every host.

Once complete, GFI LanGuard 2015 displays a full report summary for every host and an overall summary for the network:

gfi-languard-scan-network-and-discover-vulnerable-unpatched-high-risk-systems-7

Figure 7. GFI LanGuard 2015 overall scan summary and results

Users can select each host individually in the left window and its Scan Results will be displayed in the right window area (Scan Results Details). This method allows quick navigation through each host, and also allows the administrator or network security auditor to quickly locate the specific scan results they are after.

This article explained how to configure GFI LanGuard 2015 to scan an IP subnet, identify host operating systems, log into remote systems and scan for vulnerabilities, missing operating system and application patches, open ports and other critical security issues. IT managers, network engineers and security auditors should definitely try GFI LanGuard and see how easy & automated their job can become with such a powerful network security tool in their hands.

  • Hits: 14572

OpenMosix - Part 9: Interesting Ideas: Distributed Password Cracking & Encoding MP3s

Now that you hopefully have a nice powerful cluster running, there are hundreds of different ways you can use it. The most obvious use is any activity that takes a long time and uses a large amount of CPU processing power and/or RAM. We're going to show you a couple of projects that have benefited us in the real world.

Bear in mind that some applications migrate very nicely over an openMosix cluster; for example, 'make' can speed up your compile times significantly. If you do a little research on the net, you'll find examples of applications that migrate well and of those that won't yield much of a speed increase. If you are a developer looking to take advantage of openMosix, applications that fork() child processes will migrate wonderfully, whereas multithreaded applications, at present, do not seem to migrate their threads.
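Since openMosix migrates at process granularity, the classic pattern for cluster-friendly shell jobs is to fan work out as independent background processes. A minimal sketch (the md5sum loop is just a hypothetical stand-in for any CPU-bound task):

```shell
# Each backgrounded ( ... ) subshell is a separate process, which openMosix
# can transparently migrate to another node; threads inside one process
# would all stay on the home node.
for i in 1 2 3 4; do
    (
        # Stand-in CPU-bound work: hash some data repeatedly.
        for j in $(seq 1 200); do echo "$i-$j" | md5sum > /dev/null; done
    ) &
done

wait                  # block until every background job has finished
echo "all jobs done"  # prints: all jobs done
```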

Anyway, here are a couple of cool uses for your cluster:

Distributed Password Cracking

If you work in a security role or as a penetration tester, you'll probably encounter the need to crack passwords at some point or other. We regularly use l0phtcrack for Windows passwords, but were interested in the opportunity to use our nice new 10-system cluster to significantly speed things up. After briefly hunting around the net, we discovered 'Cisilia', a Linux-based Windows LM/NTLM password cracker designed specifically to take advantage of openMosix-style clustering!

You can get a copy of cisilia by visiting the following site and clicking on the R&D Projects menu on the left: http://www.citefa.gov.ar/SitioSI6_EN/si6.htm

There you'll find two files, 'cisilia' which is the actual command line based password cracking engine and 'xisilia' which is an X based GUI for the same. We didn't install the X based GUI, since we were working with our cluster completely over SSH.

Once you download the RPMs, you can install them by typing:

rpm -ivh *isilia*.rpm

If you installed from the tarball sources like we did, it is just as simple:

1) Unzip the tarball

tar xvzf cisilia*.tar.gz

2) Enter the directory and configure the compilation process for your system:

./configure

3) Finally, start the compilation process:

make

Now you need to get a Windows password file to crack. For this you'll want to use pwdump to grab the encrypted password hashes. This is available at the following link:

https://packetstormsecurity.com/files/13790/pwdump2.zip.html

Unzip it and run it on the Windows box which has the passwords you want to crack. You will want to save the results to a file, so do the following:

pwdump2 > passwdfile

Now copy the file 'passwdfile' across to a node in your cluster. Fire up cisilia using the following command:

cisilia -l crack_file -n 20 <path to the passwdfile you copied>

•  -l   tells cisilia to save the results to a file called crack_file

•  -n  tells cisilia how many processes it should spawn. We started 20, since we wanted 2 processes to go to each node in the cluster.

We were pleasantly surprised by how quickly it started running through 6-7 character alphanumeric passwords. Enjoy!

Encoding MP3s

Do you get annoyed by how long it takes to convert a CD to MP3? Or to convert any kind of media file?

This is one of the places where a cluster excels. When you convert your rips to MP3, you only process one WAV file at a time; how about running the job on your cluster and letting it encode all your MP3s simultaneously?

Someone has already taken this to the absolute extreme, check out http://www.rimboy.com/cluster/ for what he's got setup.

To quickly rip a CD and convert it to digital audio, you will need 2 programs:

A digital audio extractor, and an audio encoder.

For the digital audio extractor we recommend Cdparanoia. For the audio encoder, we're going to do things a bit differently:

In the spirit of the free open source movement, we suggest you check out the OGG Vorbis encoder. This is a free, open audio compression standard that will compress your WAV files much better than MP3, and still have a higher quality!

They also play perfectly in Winamp and other media players. Sounds too good to be true? Check out their website at the link below. Of course if you still aren't convinced that OGG is better than MP3, you can replace the OGG encoder with any MP3 encoder for this tutorial.

Get and install both the cdparanoia ripper and the oggenc encoder from the following URLs:

CDparanoia - http://www.xiph.org/paranoia/

OGG Vorbis Encoder - https://xiph.org/vorbis/

Now we just need to rip and encode on our cluster. Put the CD you want to convert in the drive on one node, and just run the following:

cdparanoia -B

for i in *.wav; do
    oggenc "$i" &
done

This encodes your WAV files to OGG format at the default quality level of 3, which produces an OGG file of a smaller size and significantly better sound quality than an MP3 at 128kbps. You can experiment with the OGG encoder options to figure out the best audio quality for your requirements.

This just about completes the openMosix tutorial we've prepared for you.

We sincerely hope it has been an enlightening tutorial and will help most of you make good use of those old 'mini supercomputers' you never knew you had :)

Back to the Linux/Unix Section or OpenMosix Section.

  • Hits: 17597

OpenMosix - Part 8: Using SSH Keys Instead Of Passwords

One of the things that you'll notice with openMosixview is that if you want to change the speed sliders of a remote node, you will have some trouble. This is because openMosixview uses SSH to remotely set the speed on the node. What you need to do is set up passwordless SSH authentication using public/private keys.

This is just a quick walk-through on how to do that, for a much more detailed explanation on public/private key SSH authentication, see our tutorial in the GNU/Linux Section.

First, generate your SSH public/private key-pair:

ssh-keygen -t dsa

Second, copy the public key into the authorized keys file. Since your home directory is shared between nodes, you only need to do this on one node:

cat ~/.ssh/*.pub >>~/.ssh/authorized_keys

However, for root, you will have to do this manually for each node (replace Node# with each node individually):

cat ~/.ssh/*.pub >>/mfs/Node#/root/.ssh/authorized_keys

After this, you have to start ssh-agent to cache your password so that you only need to type it once. Add the following to your .bash_profile or .profile:

ssh-agent $SHELL

Now each time after you login, just type 'ssh-add' and supply your passphrase once. You will then be able to log in to any of the nodes without a password, and the sliders in openMosixview should work perfectly for you. Next: Interesting Ideas: Distributed Password Cracking & Encoding MP3s

  • Hits: 16069

OpenMosix - Part 7: The openMosix File System

You've probably been wondering how openMosix handles things like file read/writes when a process migrates to another node.

For example, if we run a process that needs to read some data from a file /etc/test.conf on our local machine, and this process migrates to another node, how will openMosix read that file? The answer is in the openMosix File System, or OMFS.

OMFS does several things. Firstly, it shares your disk between all the nodes in the cluster, allowing them to read and write to the relevant files. It also uses what is known as Direct File System Access (DFSA), which allows a migrated process to run many system calls locally, rather than wasting time executing them on the home node. It works somewhat like NFS, but has features that are required for clustering.

If you installed openMosix from the RPMs, the omfs should already be created and automatically mounted. Have a look in /mfs, and you will see a subdirectory for every node in the cluster, named after the node ID. These directories will contain the shared disks of that particular node.

You will also see some symlinks like the following:

here -> maps to the current node where your process runs

home -> maps to your home node

If the /mfs directory has not been created, you can mount it manually with the following:

mkdir /mfs

mount /mfs /mfs -t mfs

If you want it to be automatically mounted at boot time, you can create the following entry in your /etc/fstab

mfs_mnt /mfs mfs dfsa=1 0 0

Bear in mind that this entry has to be on all the nodes in the cluster. Lastly, you can turn the openMosix file system off using the command:

mosctl nomfs
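Before relying on OMFS, it can be worth checking that it is actually mounted and which node directories are visible. A small sketch along those lines; MTAB and MFSROOT are variables of our own, parameterized purely so the logic can be exercised outside a real cluster:

```shell
# Report whether an 'mfs' filesystem is mounted and which node directories
# are visible. MTAB is normally /proc/mounts and MFSROOT is normally /mfs;
# both are parameterized so this can be tried outside a cluster.
MTAB="${MTAB:-/proc/mounts}"
MFSROOT="${MFSROOT:-/mfs}"

omfs_check() {
    if grep -q " mfs " "$MTAB"; then
        echo "OMFS mounted"
    else
        echo "OMFS not mounted - try: mount $MFSROOT $MFSROOT -t mfs"
        return 1
    fi
    # every numeric entry under /mfs is the shared disk of one cluster node
    for d in "$MFSROOT"/[0-9]*; do
        [ -d "$d" ] && echo "node $(basename "$d") visible"
    done
    return 0
}
```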

Now that we've got all that covered, it's time to take a look at how you can make the SSH login process less time consuming, allowing you to take control of all your cluster nodes any time you require, and also helping the cluster system execute special functions. The next topic covers using SSH keys with openMosix instead of passwords.

  • Hits: 16436

OpenMosix - Part 6: Controlling Your OpenMosix Cluster

The openMosix team have provided a number of ways of controlling your cluster, both from the command line, as well as through GUI based tools in X.

From the command line, the main monitoring and control tools are:

  • mosmon – which shows you the load on each of the nodes, their speed, memory usage, etc. Pressing 'h' will bring up the help with the different options;
  • mosctl - which is a very powerful command that allows you to control how your system behaves in the cluster, some of the interesting options are:
    • mosctl block – this stops other people's processes being run on your system (a bit selfish don't you think ;))
    • mosctl noblock – the opposite of the above
    • mosctl lstay – this stops your local processes migrating to other nodes for processing
    • mosctl nolstay – the opposite of the above
    • mosctl setspeed <number> - which sets the max processing speed to contribute. 10000 is the benchmark of a Pentium III 1GHz.
    • mosctl whois <node number> - this tells you the IP address of a particular node
    • mosctl expel – this expels any current remote processes and blocks new ones from coming in
    • mosctl bring – this brings back any of your own local processes that have migrated to other nodes
    • mosctl status <node number> - which shows you whether the node is up and whether it is 'blocking' processes, 'staying' them, etc.
    • mosrun - allows you to run a process controlling which nodes it should run on
    • mps - this is just like 'ps' to show you the process listing, but it also shows which node a process is running on
    • migrate - this command allows you to manually migrate a process to any node you like, the syntax for using it is 'migrate <pid> <node #>'. You can also use 'migrate <pid> balance' to load balance a process automatically.
    • dsh - Distributed Shell. This allows you to run a command on all the nodes simultaneously. For example ‘ dsh -a reboot ' will reboot all the nodes.
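The commands above combine well in scripts. As an illustration, here is a quick status sweep over a set of node IDs using 'mosctl whois' and 'mosctl status'; MOSCTL is a stand-in variable of our own so the loop can be dry-run without a cluster, and real mosctl output may be formatted slightly differently:

```shell
# Print the address and status of each node ID given as an argument.
# MOSCTL is a stand-in for the real mosctl binary so the loop itself
# can be tested anywhere; real mosctl output formatting may differ.
MOSCTL="${MOSCTL:-mosctl}"

cluster_status() {
    for n in "$@"; do
        addr=$($MOSCTL whois "$n")          # IP address of the node
        state=$($MOSCTL status "$n")        # up / blocking / staying ...
        echo "node $n ($addr): $state"
    done
}

# e.g. cluster_status 1 2 3
```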

From the GUI, you can just start 'openmosixview'. This allows you to view and manage all the nodes in your cluster. It also shows you the load balancing efficiency of the cluster in near-real-time. You can also see what is the total speed and RAM that your cluster is providing you:

linux-openmosix-controlling-cluster-1

We should note that all cluster nodes that are online are represented with the green colour, while all offline cluster nodes are shown in red.

One of the neatest things about 'openmosixview' is the GUI for controlling process migration.

linux-openmosix-controlling-cluster-2

It depicts your current node at the center, and other nodes in the cluster around it. The ring around your node represents the processes running on your local box. If you hover over any of them you can see the process name and PID. Whenever one of your processes migrates to another node, you will see it detach and appear on a new node with a line linking it to your system!

You can also manually control the migration. You can drag and drop your processes onto other nodes, even selecting multiple processes and then dragging them to another node is easy. If you double click on a process running on a remote node, it will come back home and execute locally.

You can also open the openMosix process monitor which shows you which process is running on which node.

There is also a history analyzer to show you the load over a period of time. This allows you to see how your cluster was being used at any given point in time:

linux-openmosix-controlling-cluster-3

As you can see, the GUI tools are very powerful; they provide you with a large amount of the functionality that the command line tools do. If, however, you want to make your own scripts, the command line tools are much more versatile. Managing a cluster can be a lot of fun, so modify the options and play around with the GUI to tweak and optimize your raw processing power! Our next article covers the openMosix File System.

  • Hits: 16792

OpenMosix - Part 5: Testing Your Cluster

Now let's actually make this cluster do some work! There is a quick tool you can use to monitor the load of your cluster.

Type 'mosmon' and press enter. You should see a screen similar to the screenshot below:

linux-openmosix-testing-cluster-1

Run mosmon in one VTY (press ctrl+alt+f1), then switch to another VTY (ctrl+alt+f2).

Let's run a simple awk command to run a nested loop and use up some processing power. If everything went well, we should see the load in mosmon jump up on one node, and then migrate to the other nodes.

The command you need to run is:

awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'

If you choose to, you can start multiple awk processes by backgrounding them. Just append an ‘&' to the command line and run it a few times.

Go back to mosmon by pressing ctrl+alt+f1, you should see the load rising on your current node, and then slowly distributing to the other machines in the cluster like in the picture below:

linux-openmosix-testing-cluster-2

Congratulations! You are now taking advantage of multi system clustering!

If you want you can time the process running locally, turn off openmosix by entering the command:

/etc/init.d/openmosix stop

Then run the following script:

#!/bin/sh

date

awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'

date

This will tell you how long it took to perform the task. You can modify the loop values to make it last longer. Now restart openmosix, using the command:

/etc/init.d/openmosix start

Re-run the script to see how long it takes to process. Remember that your network is a bottleneck for performance. If your process finishes really quickly, it won't have time to migrate to the other nodes over the network. This is where tweaking and optimizing your cluster becomes fun.
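Rather than subtracting the two dates by hand, you can let the shell compute the elapsed time for you. A small helper along these lines (time_run is our own name for it, not an openMosix tool):

```shell
# Run any command and report the elapsed wall-clock time in seconds,
# handy for comparing runs with openMosix stopped and started.
time_run() {
    start=$(date +%s)
    "$@"                               # execute the workload exactly as given
    end=$(date +%s)
    echo "elapsed: $((end - start)) seconds"
}

# e.g.: time_run awk 'BEGIN {for(i=0;i<10000;i++)for(j=0;j<10000;j++);}'
```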

Next up we'll take a look at how you can control an openMosix cluster.

  • Hits: 15515

OpenMosix - Part 4: Starting Up Your OpenMosix Cluster

Okay, so now you've got a couple of machines with openMosix installed and booted, it's time to understand how to add systems to your cluster and make them work together.

OpenMosix has two ways of doing this:

1. Auto-discovery of Cluster Nodes

OpenMosix includes a daemon called 'omdiscd' which identifies other openMosix nodes on the network by using multicast packets (for more on multicasting, please see our multicast page). This means that you don't have to bother manually configuring the nodes. This is a simple way to get your cluster going as you just need to boot a machine and ensure it's on the network. When this stage is complete, it should then discover the existing cluster and add itself automatically!

Make sure you set up your network properly. As an example, if you are assigning an IP address of 192.168.1.10 to your first ethernet interface and your default gateway is 192.168.1.1 you would do something like this:

ifconfig eth0 192.168.1.10 netmask 255.255.255.0 broadcast 192.168.1.255 up (configures your system's ethernet interface)

route add default gw 192.168.1.1 (adds the default gateway)

The auto-discovery daemon might have started automatically on bootup, check using:

ps aux | grep 'omdiscd'

The above command should reveal the 'omdiscd' process running on your system.

If it hasn't, you can start it manually by typing 'omdiscd'. If you want to see the nodes getting added, you can choose to run omdiscd in the foreground by typing 'omdiscd -n'. This will help you troubleshoot the auto-discovery.

2. The /etc/openmosix.map File Configuration

If you don't want to use autodiscovery, you can manually manage your nodes using the openmosix.map file in the /etc directory. This file basically contains a list of the nodes on your cluster, and has to be the same across all the nodes in your cluster.

The syntax is very simple, it is a tab delimited list of the nodes in your cluster. There are 3 fields:

Node ID, IP Address and Number.

•  Node ID is the unique number for the node.

•  IP address is the IP address of the node.

•  Number specifies how many consecutive nodes are in the range starting at that IP address.

As an example, if you have nodes

192.168.1.10

192.168.1.11

192.168.1.12

192.168.1.50

your file would look like this:

1 192.168.1.10 3

4 192.168.1.50 1

We could have manually specified the IPs 192.168.1.11 and 192.168.1.12, but by using the 'number' field, openMosix counts up the last octet of the IP and saves you the trouble of making individual entries.
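If you ever want to double-check what a map really covers, the range entries can be expanded mechanically. A sketch using awk (expand_map is our own helper name, not an openMosix tool); it prints one 'node-ID IP' pair per line:

```shell
# Expand an openmosix.map file into one "node-ID IP" pair per line,
# applying the 'number' field the same way openMosix does: by counting
# up the last octet of the IP (and the node ID) for each node in a range.
expand_map() {
    awk '!/^#/ && NF == 3 {
        split($2, o, ".")                     # break the IP into octets
        for (i = 0; i < $3; i++)
            print $1 + i, o[1] "." o[2] "." o[3] "." o[4] + i
    }' "$1"
}
```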

Once you've done your configuration, you can control openMosix using the init.d script that should have been installed. If they were not, you can find it in the scripts directory of the userland tools you downloaded, make it executable and copy it to the init.d directory like this:

mv ./openmosix /etc/init.d

chmod 755 /etc/init.d/openmosix

You can now start, stop and restart openMosix with the following commands:

/etc/init.d/openmosix start

/etc/init.d/openmosix stop

/etc/init.d/openmosix restart

Next up we'll take a look at how you can test your new openMosix cluster!

  • Hits: 17610

OpenMosix - Part 3: Using ClusterKnoppix

So maybe none of those methods worked for you. Well, you'll be happy to know that you can get a cluster up and running within a few minutes using an incredible bootable Knoppix liveCD that is preconfigured for clustering. It's called ‘ClusterKnoppix' and a quick search on Google will reveal a number of sources from where you can download the ISO images.

The best thing about ClusterKnoppix is that you can just boot a system with the CD and it will automatically add itself to the cluster. You don't even need to install the OS to your hard disk. This makes it a very useful way to set up a cluster in a hurry using pre-existing systems.

Another really nice feature is that you don't need to burn 20 copies of the CD to make a 20 system cluster. Just boot one system with the CD, and then run the command

knoppix-terminalopenmosixserver

This will let you set up a clustering-enabled terminal server. Now if you have any systems that can boot from their network card (PXE compliant booting), they will automatically download a kernel image and run ClusterKnoppix!

It's awesome to see this at work, especially since we were working with 2 systems that didn't have a CD-ROM drive or a hard-disk. They just became diskless clients and contributed their resources to the cause! Next page covers starting up your openMosix Cluster.

  • Hits: 21038

OpenMosix - Part 2: Building An openMosix Cluster

Okay, let's get down to the fun part! Although it may sound hard, setting up a cluster is not very difficult, we're going to show you the hard way (which will teach you more) as well as a very neat quick way to set up an instant cluster using a Knoppix Live CD. We suggest you try both out to understand the benefits of each approach.

We will require the following:

1. Two or more machines (we need to cluster something!), the configuration doesn't matter even if they are lower end. They will require network cards and need to be connected to each other over a network. Obviously, the more systems you have, the more powerful your cluster will be. Don't worry if you don't have many machines, we'll show you how to temporarily use resources from systems and schedule when they can contribute their processing power (this works very well in an office when you might want some systems to join the cluster only after office hours).

2. A ClusterKnoppix LiveCD for the second part of this tutorial. While this is not strictly necessary, we want to show you some of the advantages of using the LiveCD for clustering. It also makes setting up the cluster extremely easy: you can get a fully working cluster up in the amount of time it takes you to boot a system! You can get ClusterKnoppix from the following link: https://distrowatch.com/table.php?distribution=clusterknoppix

Getting & Installing openMosix

OpenMosix consists of two parts, the first is the kernel patch which does the actual clustering and the second is the userland tools that allow you to monitor and control your cluster.

There are a variety of ways to install openMosix, we've chosen to show three of them:

1. Patching the kernel and installing from the source

2. Installing from RPMs

3. Installing in Debian

1. Installing from source

The latest version of openMosix at the time of this writing works with the kernel version 2.4.24. If you want to do this the proper way, get the plain kernel sources for 2.4.24 from https://www.kernel.org/ and the openMosix patch for the same version of the kernel from https://sourceforge.net/projects/openmosix/

At the time of writing this, the direct kernel source link is

http://www.kernel.org/pub/linux/kernel/v2.4/linux-2.4.24.tar.bz2

Once you've got the kernel sources, unpack them to your kernel source directory, in this case that should be:

/usr/src/linux-2.4.24

Now move the openMosix patch to the kernel source directory and apply it, like so:

mv /root/openMosix-2.4.24.gz /usr/src/linux-2.4.24

cd /usr/src/linux-2.4.24

zcat openMosix-2.4.24.gz | patch -Np1

NOTE: If you downloaded a bzip zipped file, you might need to use 'bzcat' rather than 'zcat' in the last line.

Now your kernel sources are patched with openMosix.

Now you have to configure your kernel sources, using one of the following commands:

make config

make menuconfig (uses an ncurses interface)

make xconfig (uses a TCL/TK GUI interface)

If you use X and have a recent distribution, 'make xconfig' is your best bet. Once you get the kernel configuration screens, enable the following openMosix options in the kernel configuration:

CONFIG_MOSIX=y

# CONFIG_MOSIX_TOPOLOGY is not set

CONFIG_MOSIX_UDB=y

# CONFIG_MOSIX_DEBUG is not set

# CONFIG_MOSIX_CHEAT_MIGSELF is not set

CONFIG_MOSIX_WEEEEEEEEE=y

CONFIG_MOSIX_DIAG=y

CONFIG_MOSIX_SECUREPORTS=y

CONFIG_MOSIX_DISCLOSURE=3

CONFIG_QKERNEL_EXT=y

CONFIG_MOSIX_DFSA=y

CONFIG_MOSIX_FS=y

CONFIG_MOSIX_PIPE_EXCEPTIONS=y

CONFIG_QOS_JID=y

Feel free to tweak your other kernel settings based on your hardware and requirements just as you would when installing a new kernel.

Finally, finish it all off by compiling the kernel with:

make dep bzImage modules modules_install

Now install your new kernel in your bootloader. For example, if you use LILO, edit your /etc/lilo.conf and create a new entry for your openMosix enhanced kernel. If you simply copy the entry for your regular kernel and change the kernel image to point to your new kernel, this should be enough. Don't forget to run 'lilo' when you finish editing the file.
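For reference, a minimal lilo.conf stanza might look like the following; the image path and root device here are assumptions for illustration, so substitute your own:

```
image=/boot/bzImage-openmosix    # wherever you copied the new kernel image
    label=openmosix              # the name you'll pick at the LILO prompt
    root=/dev/hda1               # your existing root partition
    read-only
```

Keep the entry for your old kernel as well, and remember to run 'lilo' afterwards.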

After you have completed this, reboot, and if all went well, you should be able to select the openMosix kernel you just installed and boot with it. If something didn't work right, you can still select your regular kernel and boot normally to troubleshoot.

2. Installing from RPM

If you have an RPM based distribution, you can directly get a pre-compiled kernel image with openMosix enabled from the openMosix site (https://sourceforge.net/projects/openmosix/).

This is a fairly easy way to install openMosix as you just need to install two RPMs. This should work with Red Hat, SUSE, etc. Get the two latest RPMs for:

a) openmosix-kernel

b) openmosix-tools

Now you can simply install both of these by using the command:

rpm -Uvh openmosix*.rpm

If you are using GRUB, the RPMs will even make the entry in your GRUB config, so you can just reboot and select the new kernel. If you use LILO you will have to make the entry in /etc/lilo.conf manually. Simply copying the entry for your regular kernel and changing the kernel image to point to your new kernel should be enough. Don't forget to run 'lilo' when you finish editing the file.

That should be all you need to do for the RPM based installation. Just reboot and choose the openMosix kernel when you are given the choice.

3. Installing in Debian

You can install the RPMs in Debian as well by using Alien, but it is better to use apt-get to install the kernel sources and the openMosix kernel patch. You can also use 'apt-get' to install openmosixview, which will give you a GUI to manage the cluster.

This is the basic procedure to follow for installing openMosix under Debian.

First, get the packages:

cd /usr/src

apt-get install kernel-source-2.4.24 kernel-package \

openmosix kernel-patch-openmosix

Untar them and create the links:

tar vxjf kernel-source-2.4.24.tar.bz2

ln -s /usr/src/kernel-source-2.4.24 /usr/src/linux

Apply the patch:

cd /usr/src/linux

../kernel-patches/i386/apply/openmosix

Install the kernel:

make menuconfig

make-kpkg kernel_image modules_image

cd ..

dpkg -i kernel-image-*-openmosix-*.deb

After this you can use 'apt-get' to install the openmosixview GUI utility for managing your cluster using the following command:

apt-get install openmosixview

Assuming you've successfully installed openMosix, you're ready to start using it. The next section covers an even quicker route to a working cluster: Using ClusterKnoppix

  • Hits: 26914

OpenMosix - Part 1: Understanding openMosix

As we said before, openMosix is a single system image clustering extension for the Linux kernel. It has its roots in the extremely popular MOSIX clustering project, the main difference being that it is distributed under the GNU General Public License.

It allows a cluster of computers to behave like one big multi-processor computer. For example, if you run 2 processes on a single machine, each process will only get 50% of the CPU time. However, if you run both these processes over a 2 node cluster, each process will get 100% CPU time since there are two processors available. In essence, this behavior is very similar to SMP (Symmetric Multi-Processor) systems.

Diving Deeper

What openMosix does is balance the processing load over the systems in the cluster, taking into account the speed of the systems and the load they already have. Note however, that it does not parallelize the processing. Each individual process only runs on one computer at a time.

To quote the openMosix website example :

'If your computer could convert a WAV to a MP3 in a minute, then buying another nine computers and joining them in a ten-node openMosix cluster would NOT let you convert a WAV in six seconds. However, what it would allow you to do is convert 10 WAVs simultaneously. Each one would take a minute, but since you can do lots in parallel you'd get through your CD collection much faster.'

This simultaneous processing has a lot of uses, as there are many tasks that adapt extremely well to being used on a cluster. In the later sections, we'll show you some practical and fun uses for an openMosix based GNU/Linux cluster. Next: Building An openMosix Cluster

 

  • Hits: 16753

FREE WEBINAR: Microsoft Azure Certifications Explained - A Deep Dive for IT Professionals in 2020

It’s common knowledge, or at least should be, that certifications are the most effective way for IT professionals to climb the career ladder, and this is only getting more important in an increasingly competitive professional marketplace. Similarly, cloud-based technologies are experiencing unparalleled growth, and the demand for IT professionals with qualifications in this sector is growing rapidly. Make 2020 your breakthrough year: check out this upcoming FREE webinar hosted by two Microsoft cloud experts to plan your Azure certification strategy in 2020.

microsoft azure certifications explained

The webinar features a full analysis of the Microsoft Azure certification landscape in 2020, giving you the knowledge to properly prepare for a future working with cloud-based workloads. Seasoned veterans Microsoft MVP Andy Syrewicze and Microsoft cloud expert Michael Bender will be hosting the event which includes Azure certification tracks, training and examination costs, learning materials, resources and labs for self-study, how to gain access to FREE Azure resources, and more. 

Altaro’s webinars are always well attended and one reason for this is the encouragement for attendee participation. Every single question asked is answered and no stone is left unturned by the presenters. They also present the event live twice to allow as many people as possible to have the chance of attending the event and asking their questions in person! 

For IT professionals in 2020, and especially those with a Microsoft ecosystem focus, this event is a must-attend! 

The webinar will be held on Wednesday February 19, at 3pm CET/6am PST/9am EST and again at 7pm CET/10am PST/1pm EST. I’ll be attending so I’ll see you there!

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

  • Hits: 3948

Free Webinar: Azure Security Center: How to Protect Your Datacenter with Next Generation Security

azure security center webinar

Security is a major concern for IT admins, and if you’re responsible for important workloads hosted in Azure, you need to know your security is as tight as possible. In this free webinar, presented by Thomas Maurer, Senior Cloud Advocate on the Microsoft Azure Engineering Team, and Microsoft MVP Andy Syrewicze, you will learn how to use Azure Security Center to ensure your cloud environment is fully protected.

There are certain topics in the IT administration world which are optional but security is not one of them. Ensuring your security knowledge is ahead of the curve is an absolute necessity and becoming increasingly important as we are all becoming exposed to more and more online threats every day. If you are responsible for important workloads hosted in Azure, this webinar is a must!

The webinar covers:

  • Azure Security Center introductions
  • Deployment and first steps
  • Best practices
  • Integration with other tools
  • And much more!

Being an Altaro-hosted webinar, expect this webinar to be packed full of actionable information presented via live demos so you can see the theory put into practice before your eyes. Also, Altaro put a heavy emphasis on interactivity, encouraging questions from attendees and using engaging polls to get instant feedback on the session. To ensure as many people as possible have this opportunity, Altaro present the webinar live twice so pick the best time for you and don’t be afraid to ask as many questions as you like!

Webinar: Azure Security Center: How to Protect Your Datacenter with Next Generation Security
Date: Tuesday, 30th July
Time: Webinar presented live twice on the day. Choose your preferred time:

  • 2pm CEST / 5am PDT / 8am EDT
  • 7pm CEST / 10am PDT / 1pm EDT

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

azure security center webinar

  • Hits: 8512

Major Cisco Certification Changes - New Cisco CCNA, CCNP Enterprise, Specialist, DevNet and more from Feb. 2020

new cisco certification paths Feb 2020

Cisco announced a major update to their CCNA, CCNP and CCIE certification program at Cisco Live last week, with the changes taking effect on 24th February 2020.

CCNA & CCENT Certification

The 10 current CCNA tracks (CCNA Routing and Switching, CCNA Cloud, CCNA Collaboration, CCNA Cyber Ops, CCNA Data Center, CCNA Industrial, CCNA Security, CCNA Service Provider, CCNA Wireless and CCNA Design) are being retired and replaced with a single ‘CCNA’ certification. The new CCNA exam combines most of the information on the current CCNA Routing and Switching with additional wireless, security and network automation content.

A new Cisco Certified DevNet Associate certification is also being released to satisfy the increasing demand in this area.

The current CCENT certification is being retired. There hasn’t been an official announcement from Cisco yet but rumours are saying that we might be seeing new ‘Foundations’ certifications which will focus on content from the retiring CCNA tracks.

CCNP Certification

Different technology tracks remain at the CCNP level. CCNP Routing and Switching, CCNP Design and CCNP Wireless are being consolidated into the new CCNP Enterprise, and CCNP Cloud is being retired. A new Cisco Certified DevNet Professional certification is also being released.

Only two exams will be required to achieve each CCNP certification – a Core and a Concentration exam. Being CCNA certified will no longer be a prerequisite for the CCNP certification.

If you pass any CCNP level exams before February 24 2020, you’ll receive badging for corresponding new exams and credit toward the new CCNP certification.

new cisco certification roadmap 2020

Click to Enlarge

CCIE Certification

The format of the CCIE remains largely the same, with a written and lab exam required to achieve the certification. However, the CCNP Core exam will now serve as the CCIE written exam; there will no longer be a separate written exam at the CCIE level. Automation and Network Programmability are being added to the exams for every track.

All certifications will be valid for 3 years under the new program so you will no longer need to recertify CCIE every 2 years.

How the Changes Affect You

If you’re currently studying for any Cisco certification the advice from Cisco is to keep going. If you pass before the cutover your certification will remain valid for 3 years from the date you certify. If you pass some but not all CCNP level exams before the change you can receive credit towards the new certifications.

We've added a few resources you can turn to for additional information:

The Flackbox blog has a comprehensive video and text post covering all the changes.

The official Cisco certification page is here.

  • Hits: 31552

Free Azure IaaS Webinar with Microsoft Azure Engineering Team

free azure iaas webinar with microsoft azure engineering team

Implementing Infrastructure as a Service (IaaS) is a great way of streamlining and optimizing your IT environment by utilizing virtualized resources from the cloud to complement your existing on-site infrastructure. It enables a flexible combination of the traditional on-premises data center alongside the benefits of cloud-based subscription services. If you’re not making use of this model, there’s no better opportunity to learn what it can do for you than in the upcoming webinar from Altaro: How to Supercharge your Infrastructure with Azure IaaS.

The webinar will be presented by Thomas Maurer, who has recently been appointed Senior Cloud Advocate on the Microsoft Azure Engineering Team, alongside Altaro Technical Evangelist and Microsoft MVP Andy Syrewicze.

The webinar will be primarily focused on showing how Azure IaaS solves real use cases by going through the scenarios live on air. Three use cases have been outlined already, however, the webinar format encourages those attending to suggest their own use cases when signing up and the two most popular suggestions will be added to the list for Thomas and Andy to tackle. To submit your own use case request, simply fill out the suggestion box in the sign up form when you register!

Once again, this webinar is going to be presented live twice on the day (Wednesday 13th February). So if you can’t make the earlier session (2pm CET / 8am EST / 5am PST), just sign up for the later one instead (7pm CET / 1pm EST / 10am PST) - or vice versa. Both sessions cover the same content, but having two live sessions gives more people the opportunity to ask their questions live on air and get instant feedback from these Microsoft experts.

Save your seat for the webinar!

Free IaaS Webinar with Microsoft Azure Engineering Team

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

  • Hits: 5795

Altaro VM Backup v8 (VMware & Hyper-V) with WAN-Optimized Replication dramatically reduces Recovery Time Objective (RTO)

Altaro, a global leader in virtual machine data protection and recovery, has introduced WAN-Optimized Replication in its latest version, v8, allowing businesses to be back up and running in minimal time should disaster strike. Replication permits a business to make an ongoing copy of its virtual machines (VMs) and to access that copy with immediacy should anything go wrong with the live VMs. This dramatically reduces the recovery time objective (RTO).

VMware and Hyper-V Backup

Optimized for WANs, Altaro's WAN-Optimized Replication enables system administrators to replicate ongoing changes to their virtual machines (VMs) to a remote site and to seamlessly continue working from the replicated VMs should something go wrong with the live VMs, such as damage due to severe weather conditions, flooding, ransomware, viruses, server crashes and so on.

Drastically Reducing RTO

"WAN-Optimized Replication allows businesses to continue accessing and working in the case of damage to their on-premise servers. If their office building is hit by a hurricane and experiences flooding, for instance, they can continue working from their VMs that have been replicated to an offsite location," explained David Vella, CEO and co-founder of Altaro Software.

"As these are continually updated with changes, businesses using Altaro VM Backup can continue working without a glitch, with minimal to no data loss, and with an excellent recovery time objective, or RTO."

Click here to download your free copy now of Altaro VMware Backup

Centralised, Multi-tenant View For MSPs

Managed Service Providers (MSPs) can now add replication services to their offering, with the ability to replicate customer data to the MSP's infrastructure. This way, if a customer site goes down, that customer can immediately access its VMs through the MSP's infrastructure and continue working.

With Altaro VM Backup for MSPs, MSPs can manage their customer accounts through a multi-tenant online console for greater ease, speed and efficiency, enabling them to provide their customers with a better, faster service.

How To Upgrade

WAN-Optimized Replication is currently available exclusively for customers who have the Unlimited Plus edition of Altaro VM Backup. It is automatically included in Altaro VM Backup for MSPs.

Upgrading to Altaro VM Backup v8 is free for Unlimited Plus customers who have a valid Software Maintenance Agreement (SMA). The latest build can be downloaded from this page. Customers who are not under an active SMA should contact their Altaro Partner for information about how to upgrade.

New users can benefit from a fully-functional 30-day trial of Altaro VM Backup Unlimited Plus.

  • Hits: 6600

Free Live Demo Webinar: Windows Server 2019 in Action

So you’ve heard all about Windows Server 2019 - now you can see it in action in a live demo webinar on November 8th! The last WS2019 webinar by Altaro was hugely popular, with over 4,500 IT pros registering for the event. Feedback was gathered from that webinar, and the most popular features will now be tested live by Microsoft MVP Andy Syrewicze. And you’re invited!

This deep-dive webinar will focus on:

  • Windows Admin Center
  • Containers on Windows Server
  • Storage Migration Service
  • Windows Subsystem for Linux
  • And more!

Demo webinars are a really great way to see a product in action before you decide to take the plunge yourself. It enables you to see the strengths and weaknesses first-hand and also ask questions that might relate specifically to your own environment. With the demand so high, the webinar is presented live twice on November 8th to help as many people benefit as possible.


The first session is at 2pm CET/8am EST/5am PST and the second is at 7pm CET/1pm EST/10am PST. With the record number of attendees for the last webinar, some people were unable to attend the sessions, which were maxed out. It is advised you save your seat early for this webinar to stay informed and ensure you don’t miss the live event.

Save your seat: https://goo.gl/2RKrSe

While the event date has passed, it has been recorded and is available for viewing. All materials are available as direct downloads. Click here to access the event.

  • Hits: 6732

Windows Server 2019 Free Webinar

With Microsoft Ignite just around the corner, Windows Server 2019 is set to get its full release and the signs look good. Very good. Unless you’re part of the Windows Server insider program - which grants you access to the latest Windows Server Preview builds - you probably haven’t had any hands-on experience yet with Windows Server 2019, but the team over at Altaro has, and they are preparing to host a webinar on the 3rd of October to tell you all about it.


The webinar will be held a week after Microsoft Ignite, so it will cover the complete feature set included in the full release, as well as a more in-depth look at the most important features in Windows Server 2019. Whenever a new version of Windows Server gets released there’s always a lot of attention and media coverage, so it’s nice to have an hour-long session where you can sit back and let a panel of Microsoft experts cut through the noise and give you all the information you need.

It’s also a great chance to ask your questions directly to those with the inside knowledge and receive answers live on air. Over 2,000 people have now registered for this webinar and we’re going to be joining too. It’s free to register - what are you waiting for?

Save your seat: https://goo.gl/V9tYYb

Note: While this event has passed, it’s still available to view, and you can download all related/presented material. Click on the above link to access the event recording.

  • Hits: 5507

Download HP Service Pack (SPP) for ProLiant Servers for Free (Firmware & Drivers .ISO)– Directly from HP!

Downloading all necessary drivers and firmware upgrades for your HP ProLiant server is very important, especially if hardware compatibility is critical for new operating system installations or virtualized environments (VMware, Hyper-V). Until recently, HP customers could download the HP Service Pack for ProLiant (SPP) free of charge, but that’s no longer the case, as HP is forcing customers to pay up in order to get access to its popular SPP package.

For those who are unaware, the HP SPP is a single ISO image that contains all the latest firmware, software and drivers for HP’s ProLiant servers, supporting older and newer operating systems, including virtualization platforms such as VMware and Hyper-V.

From HP’s perspective, you can either search for and download each individual driver you think is needed for your server free of charge, or you can buy a support contract and get everything in one neat ISO with all the necessary additional tools to make life easy - sounds attractive, right? Well, it depends which way you look at it... not everyone is happy to pay for firmware and driver updates, considering they are usually provided free of charge.

A quick search for HP ProLiant firmware or drivers on any search engine will bring up HP’s Enterprise Support Center, where the impression is given that we are one step away from downloading our much-wanted SPP:


Figure 1. Attempting to download the HP Service Pack for ProLiant (SPP) ISO

When clicking on the ‘Obtain Software’ link, users receive the bad news:


Figure 2. Sorry, you need to pay up to download the HP Service Pack ISO image!

Well, this is not the case – at least for now.

Apparently HP has set up this new policy to ensure customers pay for their server driver upgrades, however, they’ve forgotten (thankfully) one very important detail – securing the location of the HP Service Pack for ProLiant (SPP) ISO :)

To directly access the latest version of HP’s SPP ISO image simply click on the following URL or copy-paste it to your web browser:

ftp://ftp.hp.com/pub/softlib2/software1/cd-generic/p67859018/v113584/

HP’s FTP server is apparently wide open, allowing anonymous users to access and download not only the latest SPP ISO image, but pretty much browse the whole SPP repository and download any SPP version they want:


Figure 3. The latest (free) HP SPP ISO is just a click away!

Simply click the “Up to higher level directory” link to move up and get access to all other versions of the SPP repository!

It’s great to see HP really cares about its customers and allows them to freely download the HP Service Pack (SPP) for ProLiant servers. It’s not every day you get a vendor being so generous to its customers, so if you’ve got an HP ProLiant server, make sure you update its drivers and firmware while you still can!

Note: The above URL might no longer be active - in this case you can download it from here:

https://www.systrade.de/download/SPP/

  • Hits: 312219

Colasoft Announces Release of Capsa Network Analyzer v8.2

February 23, 2016 – Colasoft LLC, a leading provider of innovative and affordable network analysis solutions, today announced the availability of Colasoft Capsa Network Analyzer v8.2, a real-time portable network analyzer for wired and wireless network monitoring, bandwidth analysis, and intrusion detection. The data flow display and protocol recognition have been optimized in Capsa Network Analyzer 8.2.

Capsa v8.2 is capable of analyzing the traffic of wireless APs across 2 channels. Users can choose up to 2 wireless channels and analyze the total traffic, which greatly enhances the accuracy of wireless traffic analysis. A hex display of decoded data has been added to the Data Flow sub-view in the TCP/UDP Conversation view, and users can switch the display format between hex and text in Capsa v8.2.

Besides the optimizations of the Data Flow sub-view in the TCP/UDP Conversation view, and with the continuous improvement of CSTRE (Colasoft Traffic Recognition Engine), Capsa 8.2 is capable of recognizing up to 1,546 protocols and sub-protocols, which covers most of the mainstream protocols.

“We have also enhanced the interface of Capsa, which improves the user experience,” said Brian K. Smith, Vice President at Colasoft LLC. “The release of Capsa v8.2 provides a more comprehensive network analysis result to our customers.”

  • Hits: 9547

Safety in Numbers - Cisco & Microsoft

By Campbell Taylor

Recently I attended a presentation by Lynx Technology in London. The presentation was about the complementary use of Cisco and Microsoft technology for network security. The title of the presentation was “End-to-end Security Briefing” and it set out to show the need for security within the network as well as at the perimeter. This document is an overview of that presentation but focuses on some key areas rather than covering the entire presentation verbatim. The slides for the original presentation can be found at http://www.lynxtec.com/presentations/.

The presentation opened with a discussion about firewalls and recommended a dual firewall arrangement as being the most effective in many situations. Their dual firewall recommendation was a hardware firewall at the point closest to the Internet; for this they recommended Cisco's PIX firewall. The recommendation for the second firewall was an application firewall, such as Microsoft's Internet Security and Acceleration (ISA) Server 2004 or Checkpoint's NG products.

The key point made here is that the hardware firewall will typically filter traffic at OSI layers 1 – 4, thus easing the workload on the second firewall, which filters OSI layers 1 – 7.

To elaborate, the first firewall can check that packets are of the right type but cannot look at the payload that may be malicious, malformed HTTP requests, viruses, restricted content etc.

This level of inspection is possible with ISA.

Figure 1. Dual firewall configuration
Provides improved performance and filtering for traffic at OSI layers 1 – 7.

You may also wish to consider terminating any VPN traffic at the firewall so that the traffic can be inspected prior to being passed through to the LAN. End-to-end encryption creates security issues, as some firewalls are not able to inspect the encrypted traffic. This provides a tunnel through the network firewall for malicious users.

Content attacks were seen as an area of vulnerability, which highlights the need to scan the payload of packets. The presentation made particular mention of attacks via SMTP and Outlook Web Access (OWA).

Network vendors are moving towards providing a security checklist that is applied when a machine connects to the network. Cisco's version is called Network Access Control (NAC) and Microsoft's is called Network Access Quarantine Control (NAQC) although another technology called Network Access Protection (NAP) is to be implemented in the future.

Previously NAP was to be a part of Server 2003 R2 (R2 due for release end of 2005). Microsoft and Cisco have agreed to develop their network access technologies in a complementary fashion so that they will integrate. Therefore clients connecting to the Cisco network will be checked for appropriate access policies based on Microsoft's Active Directory and Group Policy configuration.

The following is taken directly from the Microsoft website: http://www.microsoft.com/windowsserver2003/techinfo/overview/quarantine.mspx

Note: Network Access Quarantine Control is not the same as Network Access Protection, which is a new policy enforcement platform that is being considered for inclusion in Windows Server "Longhorn," the next version of the Windows Server operating system.

Network Access Quarantine Control only provides added protection for remote access connections. Network Access Protection provides added protection for virtual private network (VPN) connections, Dynamic Host Configuration Protocol (DHCP) configuration, and Internet Protocol security (IPsec)-based communication.

ISA Server & Cisco Technologies

ISA 2004 sits in front of the server OS that hosts the application firewall and filters traffic as it enters the server from the NIC, thereby intercepting it before it is passed up the OSI layers.

This means that ISA can still offer a secure external-facing application firewall even when the underlying OS may be unpatched and vulnerable. Lynx advised that ISA 2000, with a throughput of 282 Mbps, beat its closest rival, Checkpoint. ISA 2004 offers an even higher throughput of 1.59 Gbps (Network Computing Magazine, March 2003).


Cisco's NAC can be used to manage user nodes (desktops and laptops) connecting to your LAN. A part of Cisco's NAC is the Cisco Trust Agent which is a component that runs on the user node and talks to the AV server and RADIUS server. NAC targets the “branch office connecting to head office” scenario and supports AV vendor products from McAfee, Symantec and Trend. Phase 2 of Cisco's NAC will provide compliance checking and enforcement with Microsoft patching.

ISA can be utilized in these scenarios with any new connections being moved to a stub network. Checks are then run to make sure the user node meets the corporate requirements for AV, patching, authorisation etc. Compliance is enforced by NAC and NAQC/NAP. Once a connecting user node passes this security audit and any remedial actions are completed the user node is moved from the stub network into the LAN proper.

Moving inside the private network, the “defence in depth” mantra was reiterated. A key point was to break up a flat network. For example, clients should have little need to talk directly to each other; instead, it should be more of a star topology, with the servers in the centre and clients talking to the servers. This is where Virtual Local Area Networks (VLANs) would be suitable, and this type of configuration makes it more difficult for network worms to spread.

Patch Management, Wireless & Security Tools

Patch Management

Patch management will ensure that known Microsoft vulnerabilities can (generally) be addressed by applying the relevant hotfix or service pack. Although not much detail was given, the Network Hotfix Checker (HfNetChk) was highlighted as an appropriate tool, along with the Microsoft Baseline Security Analyser (MBSA).

Restrict Software

Active Directory is also a key tool for administrators who manage user nodes running Windows XP and Windows 2000. With Group Policies in Active Directory you can prevent specified software from running on a Windows XP user node.

To do this use the “Software Restriction Policy”. You can then blacklist specific software based on any of the following:

  • A hash value of the software
  • A digital certificate for the software
  • The path to the executable
  • Internet Zone rules
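To illustrate the idea behind the first option - a hash rule - the sketch below (my own illustration, not how Software Restriction Policy computes its internal hash) shows why identifying software by a cryptographic digest is robust against renaming or moving the file, while even a one-byte change produces a different fingerprint:

```python
# Illustration of the principle behind a hash rule: identify an executable
# by a digest of its bytes, so its name and path are irrelevant.
import hashlib

def file_fingerprint(data: bytes) -> str:
    """Return a hex digest that uniquely identifies this exact binary."""
    return hashlib.sha256(data).hexdigest()

original = b"MZ\x90\x00...fake executable bytes..."  # made-up sample bytes
renamed_copy = original          # same bytes, different name/path
patched = original + b"\x00"     # even one changed byte alters the hash

assert file_fingerprint(original) == file_fingerprint(renamed_copy)
assert file_fingerprint(original) != file_fingerprint(patched)
print("hash rule matches the binary, not its name or path")
```

This is why a hash rule keeps blocking a blacklisted program even if a user renames it, whereas a path rule would not.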

File, Folder and Share access

On the server, all user access to files, folders and shares should be locked down via NTFS permissions (requires Windows NT or higher). Use the concept of minimal necessary privilege.

User Node Connectivity

The firewall in Service Pack 2 for Windows XP (released 25 August 2004) can be used to limit what ports are open to incoming connections on the Windows XP user node.

Wireless

As wireless becomes more widely deployed and integrated more deeply into day-to-day operations, we need to manage security and reliability. Lynx estimates that wireless installations can provide up to a 40% reduction in installation costs over standard fixed-line installations. But wireless, and the ubiquity of the web, means that the network perimeter is now on the user node's desktop.

NAC and NAP, introduced earlier, will work with Extensible Authentication Protocol-Transport Level Security (EAP-TLS). EAP-TLS is used as a wireless authentication protocol. This means the wireless user node can still be managed for patching, AV and security compliance on the same basis as fixed line (e.g. Ethernet) connected user nodes.

EAP-TLS is scalable but requires Windows 2000 and Active Directory with Group Policy. To secure wireless traffic, 802.1x is recommended, and if you wanted to investigate single sign-on for your users across the domain then you could look at Public Key Infrastructure (PKI).

As part of your network and security auditing you will want to check the wireless aspect; the NetStumbler tool will run on a wireless client and report on any wireless networks that have sufficient strength to be picked up.

As a part of your physical security for wireless networking you should consider placing Wireless Access Points (WAPs) in locations that provide restricted user access, for example in the ceiling cavity. Of course you will need to ensure that you achieve the right balance of physical security and usability, making sure that the signal is still strong enough to be used.

Layer 8 of the OSI model

The user was jokingly referred to as the eighth layer of the OSI model, and it is here that social engineering and other non-technical reconnaissance and attack methods can be attempted. Kevin Mitnick has written “The Art Of Deception: Controlling The Human Element Of Security”, which is highly regarded in the IT security community.

One countermeasure to employ against social engineering is ensuring that all physical material is disposed of securely. This includes internal phone lists, hard-copy documents, software user manuals etc. User education is one of the most important actions, so you could consider user-friendly training with workshops and reminders (posters, email memos, briefings) to create a security-conscious workplace.

Free Microsoft Security Tools

MBSA, mentioned earlier, helps audit the security configuration of a user/server node. Other free Microsoft tools are the Exchange Best Practice Analyser, SQL Best Practice Analyser and the Microsoft Audit Collection System.

For conducting event log analysis you could use the Windows Server 2003 Resource Kit tool called EventcombMT. User education can be enhanced with visual reminders like a login message or posters promoting password security.

For developing operational guidelines, the IT Infrastructure Library (ITIL) provides a comprehensive and customisable solution. ITIL was developed by the UK government and is now used internationally. Microsoft's own framework, the Microsoft Operations Framework, draws from ITIL. There is also assistance in designing and maintaining a secure network provided free by Microsoft, called the “Security Operations Guide”.

Summary

Overall then, the aim is to provide layers of defence. For this you could use a Cisco PIX as your hardware firewall (first firewall) with Microsoft ISA 2004 as your application-layer firewall (second firewall). You may also use additional ISA 2004 servers as internal firewalls to screen branch-to-Head-Office traffic.

The user node will authenticate to the domain. Cisco NAC and Microsoft NAQC/NAP will provide a security audit, authentication and enforcement on user nodes connecting to the LAN that gain authorisation. If any action is required to make the user node meet the specified corporate security policies, this will be carried out by moving the user node to a restricted part of the network.

Once the user node is authenticated, authorised and compliant with the corporate security policy, it will be allowed to connect with its full, allowed rights as part of the private network. If using wireless, EAP-TLS may be used for authentication and 802.1x for securing the wireless traffic.

To help strengthen the LAN if the outer perimeter is defeated you need to look at segmenting the network. This will help minimise or delay malicious and undesirable activity from spreading throughout your private network. VLANs will assist with creating workgroups based on job function, allowing you to restrict the scope of network access a user may have.

For example, rather than any user being able to browse to the Payroll server, you can use VLANs to restrict access to that server to only the HR department. Routers can help to minimise the spread of network worms and undesirable traffic through Access Control Lists (ACLs).

To minimise the chance of “island hopping”, where a compromised machine is used to target another machine, you should ensure that the OS of all clients and servers is hardened as much as possible – remove unnecessary services, patch, remove default admin shares if not used, and enforce complex passwords.

Also, stop clients from having easy access to other client machines unless it is necessary. Instead, build more secure client-to-server access. The server will typically have better security because it is part of a smaller, more manageable group of machines, and it is also a higher-profile machine.

Applications should be patched and countermeasures put in place for known vulnerabilities. This includes Microsoft Exchange, SQL Server and IIS, which are high on a malicious hacker's attack list. The data on the servers can then be secured using NTFS permissions to only permit those who are authorised to access the data in the manner you specify.

Overall, the presentation showed me that vendors are taking a more integrated approach to network security. Interoperability is going to be important to ensure the longevity of your solution, and it is refreshing to see two large players in the IT industry like Cisco and Microsoft working together.

  • Hits: 40196

A Day In The Antivirus World

This article, written by Campbell Taylor - 'Global', is a review of the information learnt from a one-day visit to McAfee and includes personal observations and further information that he felt were useful to the overall article. He uses "malicious activity" as a term to cover the range of activity that includes worms, viruses, backdoors, Trojans, and exploits. Italics indicate a personal observation or comment.

In December 2004 I was invited to a one-day workshop at McAfee's offices and AVERT lab at Aylesbury in England. As you are probably aware, McAfee is an anti-virus (AV) vendor and AVERT (Anti-Virus Emergency Response Team) is McAfee's AV research lab.

This visit is the basis for the information in this document and is split into 4 parts:

1) THREAT TRENDS

2) SECURITY TRENDS

3) SOME OF TODAY'S SECURITY RESPONSES

4) AVERT LAB VISIT

Threat Trends

Infection by Browsing

Browsing looks set to become a bigger method of virus infection in the near future, but there was also concern about the potential for 'media-independent propagation by a virus', which I found very interesting.

 

Media Independent propagation

By media-independent I mean that the virus is not constrained to travelling over any specific medium like Ethernet or via other physical infrastructure installations. McAfee's research showed a security risk with wireless network deployment, which is discussed in the Security Trends section of this document.

So what happens if a virus or worm were able to infect a desktop via any common method and that desktop was part of a wired and wireless network? Instead of just searching the fixed wire LAN for targets, the virus/worm looks for wireless networks that are of sufficient strength to allow it to jump into that network.

You can draw up any number of implications from this but my personal observation is that this means you have to consider the wireless attack vector as seriously as the fixed wire attack vector. This reinforces the concept that the network perimeter is no longer based on the Internet/Corporate LAN perimeter and instead it now sits wherever interaction between the host machine and foreign material exists. This could be the USB memory key from home, files accessed on a compromised server or the web browser accessing a website.

An interesting observation from the McAfee researcher was that this would mean a virus/worm distribution starting to follow a more biological distribution. In other words you would see concentrations of the virus in metropolitan areas and along key meeting places like cyber cafes or hotspots.

Distributed Denial of Service (DDoS)

DDoS attacks are seen as a continuing threat because of the involvement of criminals in the malicious hacker/cracker world. Using DDoS for extortion provides criminals with a remote-control method of raising capital.

Virus writers are starting to instruct their bot armies to coordinate their timekeeping by accessing Internet-based time servers. This means that all bots are using a consistent time reference. In turn this makes any DDoS that much more effective than relying on independent sources of time reference.

As a personal note, network administrators and IT security people might consider who needs access to Internet-based time servers. You may think about applying an access control list (ACL) that only permits NTP from one specified server in your network and denies all other NTP traffic. The objective is to reduce the chances of any of your machines being used as part of a bot army for DDoS attacks.
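As a rough sketch of that idea, a Cisco IOS-style ACL might look like the following (the time server address 10.0.0.10 and the ACL number are made-up examples for illustration, not from the McAfee workshop):

```
! Hypothetical perimeter ACL: permit NTP (UDP port 123) only to/from
! the one designated internal time server (10.0.0.10 here).
access-list 110 permit udp host 10.0.0.10 any eq ntp
access-list 110 permit udp host 10.0.0.10 eq ntp any
! Drop NTP from every other host, then allow remaining traffic.
access-list 110 deny   udp any any eq ntp
access-list 110 permit ip any any
```

Internal machines would then synchronise against the internal server only, so a bot on your LAN cannot quietly fetch a consistent time reference from the Internet.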

Identity Theft

This was highlighted as a significant likely trend in the near future and is part of the increase in Phishing attacks that have been intercepted by MessageLabs.

SOCKS used in sophisticated identity theft

McAfee did not go into a lot of detail about this, but they pointed out that SOCKS is being used by malicious hackers to bypass corporate firewalls because SOCKS is a proxy service. I don't know much about SOCKS, so this is more of a heads-up about technologies being used maliciously in the connected world.

Privacy versus security

One of the speakers raised the challenge of privacy versus security. Here the challenge is promoting the use of encrypted traffic to protect data in transit, while that same encrypted traffic is more difficult to scan with AV products. In some UK government networks no encrypted traffic is allowed, so that all traffic can be scanned.

In my opinion this is going to become more of an issue as consumers and corporates create a demand for the perceived security of HTTPS, for example.

Flexibility versus security

In the McAfee speaker's words this is about “ease of use versus ease of abuse”. If security makes IT too difficult to use effectively then end users will circumvent security.

Sticky notes with passwords on the monitor anyone?


Security Trends

Wireless Security

Research by McAfee showed that, on average, 60% of all wireless networks were deployed insecurely (many without even the use of WEP keys).

The research was conducted by war driving with a laptop running NetStumbler in London and Reading (United Kingdom) and Amsterdam (Netherlands). The research also found that in many locations in major metropolitan areas there was often an overlap of several wireless networks of sufficient strength to attempt a connection.

AV product developments

AV companies are developing and distributing AV products for Personal Digital Assistants (PDAs) and smartphones. For example, F-Secure, a Finnish AV firm, provides AV software for Nokia (which, not surprisingly, is based in Finland).

We were told that standard desktop AV products are limited to being reactive in many instances, as they cannot detect a virus until it is written to the hard disk. Therefore, in a Windows environment - Instant Messaging, Outlook Express and web surfing with Internet Explorer - the user is exposed, as web content is not necessarily written to the hard disk.

This is where the concept of desktop firewalls or buffer overflow protection is important. McAfee's newest desktop product, VirusScan 8.0i, offers access protection designed to prevent undesired remote connections; it also offers buffer overflow protection. However, it was also suggested that a firewall would be useful to stop network worms.

An interesting program that the speaker mentioned (obviously out of earshot of the sales department) was the Proxomitron. The way it was explained to me, Proxomitron is a local web proxy: web content is written to the hard disk and the web browser then retrieves the content from the proxy. Because the web content has been written to the hard disk, your standard desktop AV product can scan it for malicious content.

I should clarify at this point that core enterprise/server AV solutions like firewall/web filtering and email AV products are designed to scan in memory as well as the hard disk.

I guess it is to minimise the footprint and performance impact that the desktop AV doesn't scan memory. No doubt marketing is another factor – why kill off your corporate market when it generates substantial income?

AV vendors forming partnerships with Network infrastructure vendors

Daily AV definition file releases

McAfee is moving to daily definition releases in an attempt to minimise the window of opportunity for infection.

Malicious activity naming

A consistent, vendor-independent naming convention is run by CVE (Common Vulnerabilities and Exposures). McAfee will be including the CVE reference for malicious activity that is ranked by McAfee as being of medium threat or higher.

Other vendors may use a different approach, but I feel the use of a common reference method will help people in the IT industry correlate information about malicious activity from different sources, rather than the often painful (for me at least) hunting exercise we engage in to get material from different vendors or sources about malicious activity.

AV products moving from reactive detection to proactive blocking of suspect behaviour

New AV products from McAfee (for example VirusScan 8.0i) include suspect-behaviour detection and blocking as well as virus signature detection. This acknowledges that virus detection by signature is a reactive action. By blocking suspicious behaviour, you can prevent potential virus activity before a virus signature has been developed. For example, port blocking can be used to stop a mydoom-style virus from opening ports for backdoor access.

A personal observation: Windows XP Service Pack 2 does offer a firewall, but it is a limited one, as it provides port blocking only for traffic attempting to connect to the host. Therefore it would not stop a network worm searching for vulnerable targets.

Some of Today's Security Responses

Detecting potential malicious activity - Network

Understand your network's traffic patterns and develop a baseline of network traffic. If you see a significant unexpected change in your network traffic, you may be seeing the symptoms of malicious activity.
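As a rough sketch of what "baseline plus deviation" can mean in practice (my own illustration, not a McAfee tool - the sample figures are invented), the following flags traffic readings that deviate sharply from a rolling baseline:

```python
# Minimal anomaly sketch: flag samples (e.g. bytes/minute from a probe)
# that exceed the rolling baseline mean by several standard deviations.
from statistics import mean, stdev

def find_anomalies(samples, window=10, threshold=3.0):
    """Return indices whose value exceeds baseline_mean + threshold*stdev,
    using the preceding `window` samples as the baseline."""
    anomalies = []
    for i in range(window, len(samples)):
        baseline = samples[i - window:i]
        mu, sigma = mean(baseline), stdev(baseline)
        if sigma and samples[i] > mu + threshold * sigma:
            anomalies.append(i)
    return anomalies

# Example: steady ~100 units/minute, then a sudden worm-like spike.
traffic = [100, 102, 98, 101, 99, 103, 97, 100, 102, 99, 5000]
print(find_anomalies(traffic))  # the spike at index 10 is flagged
```

Real monitoring systems are far more sophisticated, but the principle is the same: you can only recognise "unexpected" traffic if you first know what "expected" looks like.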

Detecting potential malicious activity - Client workstation

On a Windows workstation, if you run "netstat -a" from the command line you can see the ports that the workstation has open and to whom it is trying to connect. If you see unexpected open ports, especially ones outside of the well-known range (1 – 1024), or connections to unexpected IP addresses, then further investigation may be worthwhile.
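That check can also be scripted. The sketch below is my own illustration (the sample text is fabricated, and real netstat output varies by Windows version): it parses netstat-style lines and reports listening ports above the well-known range:

```python
# Hedged sketch: scan netstat-style output for listening TCP ports
# outside the well-known range (1-1024). Sample input is illustrative.
import re

def suspicious_ports(netstat_output, well_known_max=1024):
    ports = set()
    for line in netstat_output.splitlines():
        # Match lines like: "TCP  0.0.0.0:3127  0.0.0.0:0  LISTENING"
        m = re.search(r'^\s*TCP\s+\S+:(\d+)\s+\S+\s+LISTENING', line)
        if m:
            port = int(m.group(1))
            if port > well_known_max:
                ports.add(port)
    return sorted(ports)

sample = """\
  TCP    0.0.0.0:135    0.0.0.0:0    LISTENING
  TCP    0.0.0.0:445    0.0.0.0:0    LISTENING
  TCP    0.0.0.0:3127   0.0.0.0:0    LISTENING
"""
print(suspicious_ports(sample))  # [3127] - the port mydoom opens
```

Port 3127 is the same backdoor port mentioned later in the AVERT lab demonstration, which is exactly the kind of unexpected listener this check is meant to surface.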

Tightening Corporate Email security

With the prevalence of mass mailing worms and viruses McAfee offered a couple of no/low cost steps that help to tighten your email security.

  1. Prevent all SMTP traffic inbound/outbound that is not destined for, or sourced from, your SMTP server
  2. Prevent MX record lookups
  3. Create a honeypot email address in your corporate email address book so that any mass-mailing infection will send an email to this honeypot account and alert you to the infection. It was suggested that the email account be inconspicuous, e.g. not containing admin, net or help strings in the address. Something like '#_#@your domain' would probably work.
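Step 3 could be automated with something as simple as the following sketch (the honeypot address, domain and log format are all invented for the example; a real deployment would parse your mail server's actual logs):

```python
# Illustrative honeypot-mailbox check: any message addressed to the
# honeypot account is a strong sign of a mass-mailer harvesting the
# address book. Address and log format are hypothetical.
HONEYPOT = "#_#@example.com"

def infected_senders(mail_log):
    """Return senders that mailed the honeypot address."""
    return sorted({sender for sender, recipient in mail_log
                   if recipient == HONEYPOT})

log = [
    ("alice@example.com", "bob@example.com"),
    ("pc42@example.com", "#_#@example.com"),   # worm mailing everyone
    ("pc42@example.com", "alice@example.com"),
]
print(infected_senders(log))  # ['pc42@example.com']
```

Because no legitimate user would ever write to the honeypot address, a single hit is enough to justify pulling the sending machine off the network for inspection.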

AVERT LAB VISIT

We were taken to the AVERT labs where we were shown the path from the submission of a suspected malicious sample through to the testing of the suspect sample and then to the development of the removal tools and definition files, their testing and deployment.

Samples are collected by submission via email, removable media via mail (e.g. CD or floppy disk) or captured via AVERT's honeypots in the wild.

Once a sample is received, a copy is run on a goat rig, i.e. a test/sacrificial machine. The phrase "goat rig" comes from the old hunting practice of tethering a goat in a clearing to attract the animals the hunter wanted to capture. In this case the goat rig was a powerful workstation running several VMware virtual machines connected in a simulated LAN. The simulation went so far as to include a simulated access point to the Internet and an Internet-based DNS server.

The sample is run on the goat rig for observational tests, the first tests conducted after the sample has been scanned against known malicious signatures. Since malicious activity is often invisible to the ordinary end user, observational testing means executing the sample and looking for files or registry keys it creates, new ports it opens, and unexpected, suspicious network traffic from the test machine.
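In essence, observational testing is a before/after diff of system state. A minimal Python sketch, with invented file, registry, and port listings standing in for real snapshots:

```python
# Observational testing as a before/after diff: snapshot interesting system
# state, run the sample, snapshot again, and report everything new. The
# file, registry, and port entries below are invented examples.

def snapshot_diff(before, after):
    """Return, per area, the items present after execution but not before."""
    return {area: sorted(set(after[area]) - set(before.get(area, [])))
            for area in after}

before = {"files": ["C:\\Windows\\notepad.exe"],
          "registry": ["HKLM\\Software\\...\\Run\\Existing"],
          "ports": [135, 445]}
after = {"files": ["C:\\Windows\\notepad.exe", "C:\\Windows\\dropper.exe"],
         "registry": ["HKLM\\Software\\...\\Run\\Existing",
                      "HKLM\\Software\\...\\Run\\Dropper"],
         "ports": [135, 445, 3127]}

print(snapshot_diff(before, after))
```

Everything in the diff (the dropped executable, the new Run key, the newly opened port) is exactly the kind of observable artifact the lab technicians look for.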

As a demonstration the lab technicians ran a sample of the mydoom virus and the observable behaviour at this point was the opening of port 3127 on the test host, unexpected network traffic from the test host and newly created registry keys. The lab technician pointed out that a firewall on the host, blocking unused ports, would have very easily prevented mydoom from spreading.

Following observational tests, the sample will be submitted for reverse engineering if it is considered complex enough or warrants further investigation.

AVERT engineers who carry out reverse engineering are located throughout the world, and I found it interesting that these reverse engineers and top AV researchers maintain contact with their peers at the other main AV vendors. This collaboration is maintained not by the vendors but by the engineers themselves, so it is based on a trust relationship. Knowledge about a sample that has been successfully identified and reverse engineered (payload, characteristics and so on) is passed to others in this trust group.

From the test lab we went through to the AV definition testing lab. After the detection rules and a new AV definition have been written the definition is submitted to this lab. The lab runs an automated test that applies the updated AV definition on most known Operating System platforms and against a wide reference store of known applications.

The intention is to prevent the updated AV definition from giving false positives on known safe applications.

Imagine the grief if an updated AV definition provided a false positive on Microsoft's Notepad!

One poor soul was in a corner busy surfing the web and downloading all available material to add to their reference store of applications for testing future AV definitions.

After passing the reference-store test, an email is sent to all subscribers of the McAfee DAT notification service, and the updated AV definition is made available on the McAfee website for download.

In summary, the AVERT lab tour was an informative look behind the scenes, without much of a sales pitch, and I found the co-operation amongst AV researchers of different AV companies very interesting.


Code-Red Worms: A Global Threat


About Code-Red

The first incarnation of the Code-Red worm (CRv1) began to infect hosts running unpatched versions of Microsoft's IIS webserver on July 12th, 2001. The first version of the worm uses a static seed for its random number generator. Then, around 10:00 UTC on the morning of July 19th, 2001, a random-seed variant of the Code-Red worm (CRv2) appeared and spread. This second version shared almost all of its code with the first version, but spread much more rapidly. Finally, on August 4th, a new worm began to infect machines by exploiting the same vulnerability in Microsoft's IIS webserver as the original Code-Red worm. Although the new worm shared almost no code with the two versions of the original, it contained in its source code the string "CodeRedII" and was thus named CodeRedII. The characteristics of each worm are explained in greater detail below.

The IIS .ida Vulnerability

Detailed information about the IIS .ida vulnerability can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AD20010618.html).

On June 18, 2001 eEye released information about a buffer-overflow vulnerability in Microsoft's IIS webservers.

The remotely exploitable vulnerability was discovered by Riley Hassell. It allows system-level execution of code and thus presents a serious security risk. The buffer-overflow is exploitable because the ISAPI (Internet Server Application Program Interface) .ida (indexing service) filter fails to perform adequate bounds checking on its input buffers.

A security patch for this vulnerability is available from Microsoft at
http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp.


Code-Red version 1 (CRv1)

Detailed information about Code-Red version 1 can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AL20010717.html).

On July 12, 2001, a worm began to exploit the aforementioned buffer-overflow vulnerability in Microsoft's IIS webservers. Upon infecting a machine, the worm checks to see if the date (as kept by the system clock) is between the first and the nineteenth of the month. If so, the worm generates a random list of IP addresses and probes each machine on the list in an attempt to infect as many computers as possible. However, this first version of the worm uses a static seed in its random number generator and thus generates identical lists of IP addresses on each infected machine.
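The consequence of the static seed is easy to demonstrate. This Python sketch is purely illustrative; the seed constants are arbitrary and nothing here reproduces the worm's actual generator:

```python
# With a static (constant) seed, every "infected host" computes the same
# pseudo-random target list; with per-host seeds, the lists differ.
import random

def target_list(seed, count=5):
    """Generate `count` dotted-quad addresses from a seeded PRNG."""
    rng = random.Random(seed)
    return [".".join(str(rng.randrange(256)) for _ in range(4))
            for _ in range(count)]

host_a = target_list(seed=0x1234)  # CRv1 style: the same constant everywhere
host_b = target_list(seed=0x1234)
print(host_a == host_b)            # True: both hosts probe identical targets

host_c = target_list(seed=0x9999)  # CRv2 style: a fresh per-host seed
print(host_a == host_c)            # (almost surely) different target lists
```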

The first version of the worm spread slowly, because each infected machine began to spread the worm by probing machines that were either already infected or impregnable (not vulnerable). The worm is programmed to stop infecting other machines on the 20th of every month. In its next attack phase, the worm launches a Denial-of-Service attack against www1.whitehouse.gov from the 20th to the 28th of each month.

On July 13th, Ryan Permeh and Marc Maiffret at eEye Digital Security received logs of attacks by the worm and worked through the night to disassemble and analyze the worm. They christened the worm "Code-Red" both because the highly caffeinated "Code Red" Mountain Dew fueled their efforts to understand the workings of the worm and because the worm defaces some web pages with the phrase "Hacked by Chinese". There is no evidence either supporting or refuting the involvement of Chinese hackers with the Code-Red worm.

The first version of the Code-Red worm caused very little damage. The worm did deface web pages on some machines with the phrase "Hacked by Chinese." Although the worm's attempts to spread itself consumed resources on infected machines and local area networks, it had little impact on global resources.

The Code-Red version 1 worm is memory resident, so an infected machine can be disinfected by simply rebooting it. However, once rebooted, the machine is still vulnerable to repeat infection. Any machines infected by Code-Red version 1 and subsequently rebooted were likely to be reinfected, because each newly infected machine probes the same list of IP addresses in the same order.


Code-Red version 2

Detailed information about Code-Red version 2 can be found at eEye
(http://www.eeye.com/html/Research/Advisories/AL20010717.html) and Silicon Defense (http://www.silicondefense.com/cr/).

At approximately 10:00 UTC in the morning of July 19th, 2001 a random seed variant of the Code-Red worm (CRv2) began to infect hosts running unpatched versions of Microsoft's IIS webserver. The worm again spreads by probing random IP addresses and infecting all hosts vulnerable to the IIS exploit. Code-Red version 2 lacks the static seed found in the random number generator of Code-Red version 1. In contrast, Code-Red version 2 uses a random seed, so each infected computer tries to infect a different list of randomly generated IP addresses. This seemingly minor change had a major impact: more than 359,000 machines were infected with Code-Red version 2 in just fourteen hours.
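A toy simulation, with invented numbers scaled far below real Internet sizes, illustrates why independent random scanning produces this explosive growth:

```python
# Toy model of a random-scanning worm: each infected host probes k random
# addresses per tick, and any vulnerable host that is hit becomes infected.
# The address space and host counts are arbitrary, scaled-down assumptions.
import random

def simulate(address_space=100_000, vulnerable=2_000,
             probes_per_tick=50, ticks=30, seed=7):
    rng = random.Random(seed)
    vuln = set(rng.sample(range(address_space), vulnerable))
    infected = {next(iter(vuln))}              # patient zero
    history = []
    for _ in range(ticks):
        for _ in range(len(infected) * probes_per_tick):
            target = rng.randrange(address_space)
            if target in vuln:
                infected.add(target)
        history.append(len(infected))
    return history

curve = simulate()
print(curve)  # expect an S-shaped curve saturating near the 2,000 vulnerable hosts
```

Because every newly infected host immediately joins the scanning, growth compounds: a slow start, an explosive middle, then saturation as probes increasingly hit already-infected machines. The real CRv2 outbreak followed the same qualitative shape.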

Because Code-Red version 2 is identical to Code-Red version 1 in all respects except the seed for its random number generator, its only actual damage is the "Hacked by Chinese" message added to top level webpages on some hosts. However, Code-Red version 2 had a greater impact on global infrastructure due to the sheer volume of hosts infected and probes sent to infect new hosts. Code-Red version 2 also wreaked havoc on some additional devices with web interfaces, such as routers, switches, DSL modems, and printers. Although these devices were not infected with the worm, they either crashed or rebooted when an infected machine attempted to send them a copy of the worm.

Like Code-Red version 1, Code-Red version 2 can be removed from a computer simply by rebooting it. However, rebooting the machine does not prevent reinfection once the machine is online again. On July 19th, the probe rate to hosts was so high that many machines were infected as the patch for the .ida vulnerability was applied.


CodeRedII

Detailed information about CodeRedII can be found at eEye (http://www.eeye.com/html/Research/Advisories/AL20010804.html) and http://aris.securityfocus.com/alerts/codered2/.

On August 4, 2001, an entirely new worm, CodeRedII, began to exploit the buffer-overflow vulnerability in Microsoft's IIS webservers. Although the new worm is completely unrelated to the original Code-Red worm, its source code contained the string "CodeRedII", which became the name of the new worm.

Ryan Permeh and Marc Maiffret analyzed CodeRedII to determine its attack mechanism. When the worm infects a new host, it first determines if the system has already been infected. If not, the worm initiates its propagation mechanism, sets up a "backdoor" into the infected machine, becomes dormant for a day, and then reboots the machine. Unlike Code-Red, CodeRedII is not memory resident, so rebooting an infected machine does not eliminate CodeRedII.

After rebooting the machine, the CodeRedII worm begins to spread. If the host infected with CodeRedII has Chinese (Taiwanese) or Chinese (PRC) as the system language, it uses 600 threads to probe other machines. All other machines use 300 threads.

CodeRedII uses a more complex method of selecting hosts to probe than Code-Red. CodeRedII generates a random IP address and then applies a mask to produce the IP address to probe. The length of the mask determines the similarity between the IP address of the infected machine and the probed machine. 1/8th of the time, CodeRedII probes a completely random IP address. 1/2 of the time, CodeRedII probes a machine in the same /8 (so if the infected machine had the IP address 10.9.8.7, the IP address probed would start with 10.), while 3/8ths of the time, it probes a machine on the same /16 (so the IP address probed would start with 10.9.).

Like Code-Red, CodeRedII avoids probing IP addresses in 224.0.0.0/8 (multicast) and 127.0.0.0/8 (loopback). The bias towards the local /16 and /8 networks means that an infected machine may be more likely to probe a susceptible machine, based on the supposition that machines on a single network are more likely to be running the same software as machines on unrelated IP addresses.
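The target-selection logic described above can be sketched in Python. This is a simplified model, not the worm's actual code; the branch probabilities and the multicast/loopback exclusions follow the description above:

```python
# Locality-biased target selection in the style of CodeRedII:
# 1/8 of probes go to a fully random address, 3/8 keep the infected host's
# /16 prefix, 1/2 keep its /8 prefix; 224/8 (multicast) and 127/8 (loopback)
# candidates are rejected and redrawn.
import random

def next_target(own_ip, rng=random):
    a, b, _, _ = own_ip
    while True:
        r = [rng.randrange(256) for _ in range(4)]
        roll = rng.random()
        if roll < 1/8:
            candidate = r                      # completely random address
        elif roll < 1/8 + 3/8:
            candidate = [a, b, r[2], r[3]]     # keep own /16
        else:
            candidate = [a, r[1], r[2], r[3]]  # keep own /8
        if candidate[0] not in (224, 127):     # skip multicast and loopback
            return tuple(candidate)

rng = random.Random(1)
probes = [next_target((10, 9, 8, 7), rng) for _ in range(1000)]
frac = sum(p[:2] == (10, 9) for p in probes) / len(probes)
print(frac)  # roughly 3/8 of probes stay within 10.9.0.0/16
```

The heavy bias toward nearby address space is what made CodeRedII so effective inside large, homogeneous corporate networks.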

The CodeRedII worm is much more dangerous than Code-Red because CodeRedII installs a mechanism for remote, root-level access to the infected machine. Unlike Code-Red, CodeRedII neither defaces web pages on infected machines nor launches a Denial-of-Service attack. However, the backdoor installed on the machine allows any code to be executed, so the machines could be used as zombies for future attacks (DoS or otherwise).

A machine infected with CodeRedII must be patched to prevent reinfection and then the CodeRedII worm must be removed. A security patch for this vulnerability is available from Microsoft at http://www.microsoft.com/technet/treeview/default.asp?url=/technet/itsolutions/security/topics/codealrt.asp. A tool that disinfects a computer infected with CodeRedII is also available: http://www.microsoft.com/Downloads/Release.asp?ReleaseID=31878.

CAIDA Analysis

CAIDA's ongoing analysis of the Code-Red worms includes a detailed analysis of the spread of Code-Red version 2 on July 19, 2001, a follow-up survey of the patch rate of machines infected on July 19th, and dynamic graphs showing the prevalence of Code-Red version 2 and CodeRedII worldwide.

The Spread of the Code-Red Worm (CRv2)

An analysis of the spread of the Code-Red version 2 worm between midnight UTC July 19, 2001 and midnight UTC July 20, 2001.

On July 19, 2001 more than 359,000 computers were infected with the Code-Red (CRv2) worm in less than 14 hours. At the peak of the infection frenzy, more than 2,000 new hosts were infected each minute. 43% of all infected hosts were in the United States, while 11% originated in Korea followed by 5% in China and 4% in Taiwan. The .NET Top Level Domain (TLD) accounted for 19% of all compromised machines, followed by .COM with 14% and .EDU with 2%. We also observed 136 (0.04%) .MIL and 213 (0.05%) .GOV hosts infected by the worm. An animation of the geographic expansion of the worm is available.

Animations

To help us visualize the initial spread of Code-Red version 2, Jeff Brown created an animation of the geographic spread of the worm in five minute intervals between midnight UTC on July 19, 2001 and midnight UTC on July 20, 2001. For the animation, infected hosts were mapped to latitude and longitude values using ipmapper, and aggregated by the number at each unique location. The radius of each circle is sized relative to the infected hosts mapped to the center of the circle using the formula 1+ln(total-infected-hosts). When smaller circles are obscured by larger circles, their totals are not combined with the larger circle; the smaller data points are hidden from view.

Although we attempted to identify the geographic location of each host as accurately as possible, in many cases the granularity of the location was limited to the country of origin. We plot these hosts at the center of their respective countries. Thus, the rapidly expanding central regions of most countries are an artifact of the localization method.

Animations created by Jeff Brown (UCSD CSE department), based on analysis by David Moore (CAIDA at SDSC).
Copyright UC Regents 2001.

Quicktime animation of growth by geographic breakdown (200K .mov, requires QuickTime v3 or newer)


Windows Bugs Everywhere!

Vulnerabilities, bugs and exploits will keep you on your toes

Every day a new exploit, bug, or vulnerability is found and reported on the Internet, in the news and on TV. Although Microsoft seems to get the greatest number of bug reports and alerts, it is not alone. Bugs are found in all operating systems, whether in server software, desktop software or embedded systems.

Here is a list of bugs and flaws affecting Microsoft products that have been uncovered just in the month of June 2001:

  • MS Windows 2000 LDAP SSL Password Modification Vulnerability
  • MS IIS Unicode .asp Source Code Disclosure Vulnerability
  • MS Visual Studio RAD Support Buffer Overflow Vulnerability
  • MS Index Server and Indexing Service ISAPI Extension Buffer Overflow Vulnerability
  • MS SQL Server Administrator Cached Connection Vulnerability
  • MS Windows 2000 Telnet Privilege Escalation Vulnerability
  • MS Windows 2000 Telnet Username DoS Vulnerability
  • MS Windows 2000 Telnet System Call DoS Vulnerability
  • MS Windows 2000 Telnet Multiple Sessions DoS Vulnerability
  • MS W2K Telnet Various Domain User Account Access Vulnerability
  • MS Windows 2000 Telnet Service DoS Vulnerability
  • MS Exchange OWA Embedded Script Execution Vulnerability
  • MS Internet Explorer File Contents Disclosure Vulnerability
  • MS Outlook Express Address Book Spoofing Vulnerability


The sheer frequency and number of bugs being found do not bode well for Microsoft and the security of its programming methods. And these are just the bugs that have been found and reported; bugs like the Internet Explorer bug may have been around for months, exploited and hidden from discovery by the underground community.

But it isn't just Microsoft that is plagued with bugs and vulnerabilities. All flavors of Linux have their share of serious bugs as well. The vulnerabilities below were also discovered or reported in June 2001:

  • Procfs Stream Redirection to Process Memory Vulnerability
  • Samba remote root vulnerability
  • Buffer overflow in fetchmail vulnerability
  • cfingerd buffer overflow vulnerability
  • man/man-db MANPATH bugs exploit
  • Oracle 8i SQLNet Header Vulnerability
  • Imap Daemon buffer overflow vulnerability
  • xinetd logging code buffer overflow vulnerability
  • Open SSH cookie file deletion vulnerability
  • Solaris libsldap Buffer Overflow Vulnerability
  • Solaris Print Protocol buffer overflow vulnerability


These are not all of the bugs and exploits that affect *nix systems; at least as many *nix bugs were found in the month of June as there were for Microsoft products. Even the Macintosh OS, the operating system famous for being almost hacker-proof, is vulnerable. This is especially true with the release of OS X, because OS X is built on a BSD-derived Unix core. Many BSD-specific vulnerabilities can therefore also affect Macintosh OS X; for example, OS X is subject to the sudo buffer overflow vulnerability.

Does all of this mean that you should just throw up your hands and give up? Absolutely not! Taken as a whole, the sheer number of bugs and vulnerabilities is massive and almost overwhelming. The point is that if you keep up with the latest patches and fixes, the job of keeping your OS secure is not so daunting.


Keeping up is simple if you just know where to look. Each major OS keeps a section of their Web site that is dedicated to security, fixes and patches. Here is a partial list categorized by operating system:

Windows

TechNet Security Bulletins
The Microsoft TechNet section on security contains information on the latest vulnerabilities, bugs, patches and fixes. It also has a searchable database that you can search by product and service pack.

Linux

Since there are so many different flavors of Linux I will list some of the most popular ones here.

RedHat

Alerts and Errata
RedHat lists some of the most recent vulnerabilities here as well as other security links on the RedHat site and security links that can be found elsewhere on the Web.

Slackware

Security Mailing List Archives
Although not as well organized as the Microsoft or RedHat sites, the mailing list archives contain a wealth of information. The archive is organized by year and then by month.

SuSE

SuSE Linux Homepage
Included here is an index of alerts and announcements on SuSE security. There is also a link to subscribe to the SuSE Security Mailing List.

Solaris

Security
This is one of the most comprehensive and complete security sites of all of the OSs. If you can't find it here, you won't find it anywhere.

Macintosh

Apple Product Security
Even though the Mac is not as prone to security problems as other OSs, you should still take steps to secure your Mac. With the introduction of OS X, security will be more of a concern.

The Cable Modem Traffic Jam

Tie-ups that slow broadband Internet access to a crawl are a reality--but solutions are near at hand

Broadband access to the Internet by cable modem promises users lightning-fast download speeds and an always-on connection. And recent converts to broadband from dial-up technology are thrilled with complex Web screens that download before their coffee gets cold.

But these days, earlier converts to broadband are noticing something different: their Internet access rates are slowing down instead of speeding up. They are sitting in a cable modem traffic jam. In fact, today a 56K dial-up modem can at times be faster than a cable modem, and its access can be more reliable.

Other broadband service providers--digital subscriber line (DSL), integrated services digital network (ISDN), satellite high-speed data, and microwave high-speed data--have their own problems. In some cases, service is simply not available; in other situations, installation takes months, or the costs are wildly out of proportion. Some DSL installations work fine until a saturation point of data subscribers per bundle of twisted pairs is reached, when the crosstalk between the pairs can be a problem.

In terms of market share, the leaders in providing broadband service are cable modems and DSL.

But because the cable modem was the first broadband access technology to gain wide popularity, it is the first to face widespread traffic tie-ups. These tie-ups have been made visible by amusing advertisements run by competitors, describing the "bandwidth hog" moving into the neighborhood. In one advertisement, for example, a new family with teenagers is seen as a strain on the shared cable modem interconnection and is picketed. (The message is that this won't happen with DSL, although that is only a half-truth.)

So, today, the cable-modem traffic jam is all too real in many cable systems. In severe cases, even the always-on capability is lost. Still, it is not a permanent limitation of the system. It is a temporary problem with technical solutions, if the resources are available to implement the fixes. But during the period before the corrections are made, the traffic jam can be a headache.

Cable modem fundamentals

Today's traffic jam stems from the rapid acceptance of cable broadband services by consumers. A major factor in that acceptance was the 1997 standardization of modem technology that allowed consumers to own the in-home hardware and be happy that their investment would not be orphaned by a change to another cable service provider.

A cable modem system can be viewed as having several components, which are described below.

The cable modem connects to the subscriber's personal computer through the computer's Ethernet port. The purpose of this connection is to facilitate a safe hardware installation without the need for the cable technician to open the consumer's PC. If the PC does not have an Ethernet socket, commercially available hardware and software can be installed by the subscriber or by someone hired by the subscriber.

Downstream communication (from cable company headend to cable subscriber's modem) is accomplished with the same modulation systems used for cable digital television. There are two options, both using packetized data and quadrature amplitude modulation (QAM) in a 6-MHz channel, the bandwidth of an analog television channel. QAM consists of two sinusoidal carriers that are phase shifted 90 degrees with respect to each other (that is, the carriers are in quadrature with each other) and each is amplitude modulated by half of the data. The slower system uses 64 QAM with an approximate raw data rate of 30 Mb/s and a 27-Mb/s payload information rate (which is the actual usable data throughput after all error correction and system control bits are removed). The faster system uses 256 QAM with an approximate raw data rate of 43 Mb/s and a payload information rate of 39 Mb/s.

With 64 QAM, each carrier is amplitude modulated with one of eight amplitude levels. The product of the two numbers of possible amplitude levels is 64, meaning that one of 64 possible pieces of information can be transmitted at a time. Since 2^6 is 64, with 64 QAM modulation, 6 bits of data are transmitted simultaneously. Similarly, with 256 QAM, each carrier conveys one of 16 amplitude levels, and since 256 is 2^8, 8 bits of data are transmitted simultaneously. The higher speed is appropriate for newer or upgraded cable plant, while the lower speed is more tolerant of plant imperfections, such as the ingress of interfering signals and reflected signals from transmission line impedance discontinuities.
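The bits-per-symbol arithmetic above can be sketched in a few lines of Python; the constellation sizes are the ones quoted in the text:

```python
import math

def amplitude_levels_per_carrier(constellation_size):
    """Each of the two quadrature carriers conveys the square root of the
    constellation size in amplitude levels (8 x 8 = 64, 16 x 16 = 256)."""
    return math.isqrt(constellation_size)

def bits_per_symbol(constellation_size):
    """A constellation of 2^n points means n bits transmitted simultaneously."""
    return int(math.log2(constellation_size))

print(amplitude_levels_per_carrier(64), bits_per_symbol(64))    # 8 levels, 6 bits
print(amplitude_levels_per_carrier(256), bits_per_symbol(256))  # 16 levels, 8 bits
```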

The upstream communications path (from cable modem to cable headend) resides in a narrower, more challenged spectrum. A large number of sources of interference limit the upstream communication options and speeds. Signals leak into the cable system through consumer-owned devices, through the in-home wiring, the cable drop, and the distribution cable. Fortunately, most modern cable systems connect the neighborhood to the headend with optical fiber, which is essentially immune to interfering electromagnetic signals. A separate fiber is usually used for the upstream communications from each neighborhood. Also, the upstream bandwidth is not rigorously partitioned into 6-MHz segments.

Depending on the nature of the cable system, one or more of a dozen options for upstream communications are utilized. The upstream bandwidth and frequency are chosen by the cable operator so as to avoid strong interfering signals.

The cable modem termination system (CMTS) is an intelligent controller that manages the system operation. Managing the upstream communications is a major challenge because all of the cable modems in the subscriber's area are potentially simultaneous users of that communications path. Only one cable modem can transmit upstream on a given RF channel at any instant. Since the signals are packetized, the packets can be interleaved, but they must be timed to avoid collisions.
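The timing discipline can be illustrated with a toy round-robin grant map. This is a simplified sketch, not the actual DOCSIS MAC scheduler, and the modem names are hypothetical:

```python
def grant_upstream_slots(modems, num_slots):
    """Assign each upstream time slot to exactly one modem (round-robin),
    so interleaved packets never collide on the shared channel."""
    return {slot: modems[slot % len(modems)] for slot in range(num_slots)}

schedule = grant_upstream_slots(["modem_A", "modem_B", "modem_C"], 6)
print(schedule[0], schedule[3])  # modem_A transmits in slots 0 and 3
```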

The 1997 cable modem standard included the possibility of an upstream telephone communications path for cable systems that have not implemented two-way cable. Such one-way cables have not implemented an upstream communications path from subscriber to headend. Using a dial-up modem is a practical solution since most applications involve upstream signals that are mainly keystrokes, while the downstream communications includes much more data-intensive messages that fill the screen with colorful graphics and photographs and even moving pictures and sound. The CMTS system interfaces with a billing system to ensure that an authorized subscriber is using the cable modem and that the subscriber is correctly billed.

The CMTS manages the interface to the Internet so that cable subscribers have access to more than just other cable subscribers' modems. This is accomplished with a router that links the cable system to the Internet service provider (ISP), which in turn links to the Internet. The cable company often dictates the ISP or may allow subscribers to choose from among several authorized ISPs. The largest cable ISP is @Home, which was founded in 1995 by TCI (now owned by AT&T), Cox Communications, Comcast, and others. Another ISP, Road Runner, was created by Time Warner Cable and MediaOne, which AT&T recently purchased.

Cable companies serving 80 percent of all North American households have signed exclusive service agreements with @Home or Road Runner. Two more cable ISPs--High Speed Access Corp. and ISP Channel--serve the remaining U.S. and Canadian broadband households. And other major cable companies, CableVision and Adelphia in the United States and Videotron in Canada, offer their own cable modem service.

Cable modem bottlenecks

If there were just one cable modem in operation, it could in principle have an ultimate data download capacity of 27 Mb/s in a 64 QAM cable system or 39 Mb/s in a 256 QAM cable system. While 256 is four times 64, the data capacity does not scale by this factor, since the 8 bits simultaneously transmitted by 256 QAM are not four times the 6 bits simultaneously transmitted by 64 QAM. The 256 QAM data rates are only about 50 percent larger than the 64 QAM rates. Of course, if the cable modem is not built into a PC but is instead connected with an Ethernet link, the Ethernet connection is a bottleneck, albeit at 10 Mb/s. In any case, neither of these bottlenecks is likely to bring any complaints, since downloads at these speeds would be wonderful.
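Plugging in the rates quoted above makes the comparison concrete (rates in Mb/s, taken from the text):

```python
payload_mbps = {"64QAM": 27.0, "256QAM": 39.0}  # payload information rates
raw_mbps     = {"64QAM": 30.0, "256QAM": 43.0}  # approximate raw data rates

payload_gain = payload_mbps["256QAM"] / payload_mbps["64QAM"]
bit_gain = 8 / 6  # bits per symbol, 256 QAM vs 64 QAM

print(round(payload_gain, 2))  # ~1.44: roughly 50% faster, nowhere near 4x
print(round(bit_gain, 2))      # ~1.33
```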

A much more likely bottleneck is in the cable system's connection to the Internet or in the Internet itself or even the ultimate Web site. For example, Ellis Island recently opened its Web site to citizens to let them search for their ancestors' immigration records, and huge numbers of interested users immediately bogged down the site. No method of subscriber broadband access could help this situation since the traffic jam is at the information source. A chain is only as strong as its weakest link; if the link between the cable operator and the ISP has insufficient capacity to accommodate the traffic requested by subscribers, it will be overloaded and present a bottleneck.

This situation is not unique to a cable modem system. Any system that connects subscribers to the Internet will have to contract for capacity with an ISP or a provider of connections to the Internet backbone, and that capacity must be shared by all the service's subscribers. If too little capacity has been ordered, there will be a bottleneck. This limitation applies to digital subscriber line systems and their connections to the Internet just as it does to cable systems. If the cable operator has contracted with an ISP, the ISP's Internet connection is a potential bottleneck, because it also serves other customers. Of course, the Internet itself can be overloaded as it races to build infrastructure in step with user growth.

Recognizing that the Internet itself can slow things down, cable operators have created systems that cache popular Web sites closer to the user and that contain local sites of high interest. These sites reside on servers close to the subscriber and reduce dependence on access to the Internet. Such systems have been called walled gardens because they attempt to provide a large quantity of interesting Web pages to serve the subscriber's needs from just a local server. Keeping the subscriber within the walled garden not only reduces the demand on the Internet connection, but can also make money for the provider through the sale of local advertising and services. This technique can become overloaded as well. But curing this overload is relatively easy with the addition of more server capacity (hardware) at the cache site.
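The walled-garden idea boils down to a local cache in front of the Internet link. A minimal sketch follows; the function and variable names are ours, purely for illustration:

```python
def serve(url, cache, fetch_from_internet):
    """Serve a page from the local cache when possible; only a cache
    miss consumes capacity on the cable system's Internet connection."""
    if url not in cache:
        cache[url] = fetch_from_internet(url)
    return cache[url]

cache = {}
backbone_requests = []

def slow_internet_fetch(url):
    backbone_requests.append(url)  # each call crosses the Internet link
    return "<html>page for %s</html>" % url

serve("news.example", cache, slow_internet_fetch)
serve("news.example", cache, slow_internet_fetch)  # served locally this time
print(len(backbone_requests))  # 1
```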

Two cable ISPs, Road Runner and @Home, were designed to minimize or avoid Internet bottlenecks. They do it by leasing virtual private networks (VPNs) to provide nationwide coverage. VPNs consist of guaranteed, dedicated capacity, which will ensure acceptable levels of nationwide data transport to local cable systems. @Home employs a national high-speed data backbone through leased capacity from AT&T. Early on, a number of problems caused traffic jams, but these are now solved.

Other potential bottlenecks are the backend systems that control billing and authorization of the subscriber's service. As cable modem subscriber numbers grow, these systems must be able to handle the load.

The capacity on the cable system is shared by all the cable modems connected to a particular channel on a particular node. Cable systems are divided into physical areas of several hundred to a few thousand subscribers, each of which is served by a node. The node converts optical signals coming from (and going to) the cable system's headend into radio frequency signals appropriate for the coaxial cable system that serves the homes in the node area.

Only the cable modems being used at a particular time fight for sizable amounts of the capacity. Modems that are connected but idle are not a serious problem, as they use minimal capacity for routine purposes.

Clearly, success on the part of a cable company can be a source of difficulty if it sells too many cable modems to its subscribers for the installed capacity. The capacity of a given 6-MHz channel assigned to the subscribers' neighborhood and into their premises is limited to the amounts previously discussed (27 Mb/s in a 64 QAM cable system or 39 Mb/s in a 256 QAM cable system) and the demand for service can exceed that capacity. Both upstream and downstream bandwidth limitations can hinder performance. Upstream access is required to request downloads and to upload files. Downstream access provides the desired information.
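To see how success strains a shared channel, divide one channel's payload evenly among the simultaneously active modems (an even-split approximation; the modem counts are illustrative):

```python
def per_modem_mbps(channel_payload_mbps, active_modems):
    """Even split of one shared downstream channel among active modems."""
    return channel_payload_mbps / active_modems

# One 27 Mb/s (64 QAM) downstream channel:
print(per_modem_mbps(27.0, 50))   # 0.54 Mb/s each: still feels fast
print(per_modem_mbps(27.0, 500))  # 0.054 Mb/s each: dial-up territory
```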

Usually, it is the downstream slowdown that is noticed. Some browsers (the software that interprets the data and paints the images on the computer screen) include so-called fuel gauges, or animated bar graphs, that display the progress of the download. They can be satisfying when they zip along briskly, but rub salt in the wound when they crawl slowly and remind the user that time is wasting.

Bandwidth hogs in a subscriber's neighborhood can be a big nuisance. As subscribers attempt to share large files, like music, photos, or home movies, they load up the system. One of the rewards of high-speed Internet connections is the ability to enjoy streaming video and audio. Yet these applications are a heavy load on all parts of the system, not just the final link. System capacity must keep up with both the number of subscribers and the kinds of applications they demand. As the Internet begins to look more like television with higher-quality video and audio, it will require massive downstream capacity to support the data throughput. As the Internet provides more compelling content, it will attract even more subscribers. So the number of subscribers grows and the bandwidth each demands also grows. Keeping up with this growth is a challenge.

Impact of open access

Open access is the result of a fear on the part of the government regulators that cable system operators will be so successful in providing high-speed access to the Internet that other ISPs will be unable to compete. The political remedy is to require cable operators to permit competitive ISPs to operate on their systems. Major issues include how many ISPs to allow, how to integrate them into the cable system, and how to charge them for access. The details of how open access is implemented may add to the traffic jam.

A key component in dealing with open access is the CMTS. The ports on the backend of this equipment connect to the ISPs. But sometimes too few ports are designed into the CMTS for the number of ISPs wishing access. More recent CMTS designs accommodate this need. However, these are expensive pieces of equipment, costing up to several hundred thousand dollars. An investment in an earlier unit cannot be abandoned without great financial loss.

If the cost of using cable modem access is fairly partitioned between the cost of using the cable system and the access fees charged by the cable company, then the cable operator is fairly compensated for the traffic. With more ISPs promoting service, the likelihood is that there will be more cable modem subscribers and higher usage. This, of course, will contribute to the traffic jam. In addition, the backend processing of billing and cable modem authorization can be a strain on the system.

What to do about the traffic jam?

The most important development in dealing with all these traffic delays is the release of the latest version of the cable modem technical standard, DOCSIS 1.1 (issued by CableLabs in 1999). It includes many new capabilities, of which the most pertinent in this context is quality of service (QoS). In most aspects of life, the management of expectations is critical to success. When early adopters of cable modem service shared a lightly loaded service, they became accustomed to lightning access. When more subscribers were added, the loading of the system lowered speed noticeably for each subscriber at peak service times.

Similarly, the difference between peak usage times and the late night or early morning hours can be substantial. It is not human nature to feel grateful for the good times while they last, but rather to feel entitled to good times all the time. The grades of service provided by QoS prevent the buildup of unreasonable expectations and afford the opportunity to contract for guaranteed levels of service. Subscribers with a real need for speed can get it on a reliable basis by paying a higher fee while those with more modest needs can pay a lower price. First class, business class, and economy can be implemented with prices to match.
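One way to picture the QoS grades is as rate caps matched to fees. The tier names, rates, and prices below are entirely hypothetical, just to illustrate the first/business/economy idea; DOCSIS 1.1 defines the QoS machinery, not the pricing:

```python
# Hypothetical service tiers (illustrative values only)
TIERS = {
    "economy":  {"max_down_mbps": 0.5, "monthly_fee": 20},
    "business": {"max_down_mbps": 2.0, "monthly_fee": 40},
    "first":    {"max_down_mbps": 5.0, "monthly_fee": 80},
}

def delivered_rate(requested_mbps, tier):
    """Cap each subscriber at the rate class they have paid for."""
    return min(requested_mbps, TIERS[tier]["max_down_mbps"])

print(delivered_rate(10.0, "economy"))  # 0.5: a bandwidth hog is contained
print(delivered_rate(1.0, "first"))     # 1.0: light use of a premium tier
```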

Beefing up to meet demand

Network traffic engineering is the design and allocation of resources to satisfy demand on a statistical basis. Any economic system must deal with peak loads while not being wasteful at average usage times. Consumers find it difficult to get a dial tone on Mother's Day, because it would be impractically expensive to have a phone system that never failed to provide dial tone. The same is true of a cable modem system. At unusually high peaks, service may be temporarily delayed or even unavailable.

An economic design matches the capacity of all of the system elements so that no element is underutilized while other elements are under constant strain. This means that a properly designed cable modem system will not have one element reach its maximum capacity substantially before other elements are stressed. There should be no weakest links. All links should be of relatively the same capacity.

More subscribers can be handled by allocating more bandwidth. Instead of just one 6-MHz channel for cable modem service, two or more can be allocated, along with the hardware and software to support this bandwidth. Since many cable systems are capacity limited, the addition of another 6-MHz channel can be accomplished only by sacrificing the service already assigned to it. A typical modern cable system would have a maximum frequency of about 750 MHz. This allows for 111 or so 6-MHz channels to be allocated among conflicting demands. Perhaps 60-75 of them carry analog television. The remainder are assigned to digital services such as digital television, video on demand, broadband cable service, and telephony.
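The channel budget works out as follows, using the article's figures (the 70-channel analog count is an assumed midpoint of the 60-75 range quoted):

```python
total_6mhz_channels = 111  # roughly what a 750-MHz plant provides
analog_tv_channels = 70    # assumed midpoint of the quoted 60-75 range

digital_channels = total_6mhz_channels - analog_tv_channels
print(digital_channels)  # 41 channels left for digital TV, VoD, modems, telephony
```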

Canceling service to free up bandwidth for cable modems may cause other subscriber frustrations. While adding another 6-MHz channel solves the downstream capacity problem, if the upstream capacity is the limiting factor in a particular cable system, merely adding more 6-MHz channels will still leave a traffic jam. The extra channels help with only one of the traffic directions.

Cable nodalization is another important option in cable system design for accommodating subscriber demand. Nodalization is essentially the dividing up of the cable system into smaller cable systems, each with its own path to the cable headend. The neighborhood termination of that path is called a node. In effect, then, several cables, instead of a single cable, come out of the headend to serve the neighborhoods.

Cable system nodes cater to anywhere from several thousand subscribers to just a few hundred. Putting in more nodes is costly, but the advantage of nodalization is that the same spectrum can be used differently at each node. A specific 6-MHz channel may carry cable modem bits to the users in one node while the same 6-MHz channel carries completely different cable modem bits to other users in an adjacent node. This has been called space-division multiplexing since it permits different messages to be carried, depending on the subscriber's spatial location.

An early example of this principle was deployed in the Time Warner Cable television system in Queens, New York City. Queens is a melting pot of nationalities. The immigrants there tend to cluster in neighborhoods where they have relatives and friends who can help them make the transition to the new world. The fiber paths to these neighborhoods can use the same 6-MHz channel for programs in different languages. So a given channel number can carry Chinese programming on the fiber serving that neighborhood, Korean programming on another fiber, and Japanese programming on still another fiber. As the 747s fly into the John F. Kennedy International Airport in Queens each night, they bring tapes from participating broadcasters in other countries that become the next day's programming for the various neighborhoods. (Note that this technique is impossible in a broadcast or satellite transmission system since such systems serve the entire broadcast area and cannot employ nodalization.)

The same concept of spectrum reuse is applied to the cable modem. A 6-MHz channel set aside for this purpose carries the cable modem traffic for the neighborhood served by its respective node. While most channels carry the same programming to all nodes, just the channel(s) assigned to the modem service carry specialized information directed to the individual nodes. Importantly, nodalization reuses the upstream spectrum as well as the downstream spectrum. So, given enough nodes, traffic jams are avoided in both directions.
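Because every node reuses the same modem channel(s), aggregate capacity grows linearly with node count. A back-of-the-envelope sketch (the node counts are illustrative):

```python
def aggregate_capacity_mbps(nodes, channels_per_node, payload_per_channel_mbps):
    """Space-division multiplexing: each node carries its own copy of the
    cable modem channel(s), so the same spectrum serves every node independently."""
    return nodes * channels_per_node * payload_per_channel_mbps

print(aggregate_capacity_mbps(1, 1, 27.0))  # 27.0 Mb/s: one large service area
print(aggregate_capacity_mbps(4, 1, 27.0))  # 108.0 Mb/s: same spectrum, four nodes
```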

However, nodalization is costly. Optical-fiber paths must be installed from the headend to the individual nodes. The fiber paths require lasers and receivers to convert the optical signals into electrical signals for the coaxial cable in the neighborhood. Additional modulators per node are required at the cable headend, as well as routers to direct the signals to their respective lasers. The capital investment is substantial. However, it is technically possible to solve the problem. (In principle, nodalization could be implemented in a fully coaxial cable system. But in practice coaxial cable has much higher losses than fiber and incurs even greater expense in the form of amplifiers and their power supplies.)

Other techniques for alleviating the traffic jam include upgrading the cable system so that 256 QAM can be used instead of 64 QAM downstream and 16 QAM can be used upstream instead of QPSK. If the ISP's connection to the Internet is part of the problem, a larger data capacity connection to the Internet backbone can be installed.

Also, non-DOCSIS high-speed access systems are under development for very heavy users. These systems will provide guaranteed ultrahigh speeds of multiple megabits per second in the downstream direction while avoiding the loading of the DOCSIS cable modem channels. The service can then be partitioned into commercial and residential or small-business services that do not limit each other's capabilities.

Speculations on the future

The cable modem traffic jam is due to rapid growth that sometimes outpaces the resources available to upgrade the cable system. But solutions may be near at hand.

The next wave of standardization, DOCSIS 1.1, released in 1999, provides for quality-of-service segmentation of the market. Now that the standard is released, products are in development by suppliers and being certified by CableLabs. Release 1.1 products will migrate into the subscriber base over the next several years. Subscribers will then be able to choose the capacity they require for their purposes and pay an appropriate fee. The effect will be to discourage bandwidth hogs and ensure that those who need high capacity, and are willing to pay for it, get it. And market segmentation will provide financial justification to implement even more comprehensive nodalization. After enough time has passed for these system upgrades to be deployed, the traffic jam should resolve itself.

Cisco WLC & AP Compatibility Matrix Download

Complete Cisco WLC Wireless Controllers, Aironet APs & Software Compatibility Matrix - Free Download

Firewall.cx’s download section now includes the Cisco WLC Wireless Controllers Compatibility Matrix as a free download. The file contains two PDFs with an extensive list of all old and new Cisco Wireless Controllers and their supported Access Points across a diverse range of firmware versions.

WLCs compatibility list includes: WLC 2100, 2504, 3504, 4400, 5508, 5520, 7510, 8510, 8540, Virtual Controller, WiSM, WiSM2, SRE, 9800 series and more. 

Access Point series compatibility list includes: 700, 700W, 1000, 1100, 1220, 1230, 1240, 1250, 1260, 1300, 1400, 1520, 1530, 1540, 1550, 1560, 1600, 1700, 1800, 2600, 2700, 2800, 3500, 3600, 3700, 3800, 4800, IW6300, 9100, 9130 and 9160.

The compatibility matrix PDFs provide an invaluable map, ensuring that your network components are supported across different software versions. Make informed choices, plan upgrades with precision, and optimize your network's performance effortlessly.

Check the compatibility between various WLC hardware & virtual versions, Access Points and a plethora of Cisco software offerings, such as Cisco Identity Services Engine (ISE), Cisco Prime Infrastructure, Cisco Spaces, and the versatile Mobility Express. This compatibility matrix extends far beyond devices, painting a holistic picture of how different elements of your Cisco ecosystem interact with one another.

Click here to visit the download page.


Firewall.cx: 15 Years’ Success – New Logo – New Identity – Same Mission

This December (2015) is a very special one. It signals 15 years of passion, education, learning, success and non-stop ‘routing’ of knowledge and technical expertise to the global IT community.

What began 15 years ago as a small, humble website, with the sole purpose of simplifying complicated networking & security concepts and sharing them with students, administrators, network engineers and IT Managers, went on to become one of the most recognised and popular network security websites in the world.

Thanks to a truly dedicated and honest team, created mainly after our forums kicked in on the 24th of October 2001, Firewall.cx was able to rapidly expand and produce more high-quality content that attracted not only millions of new visitors but also global vendors.

Our material was suddenly being used at colleges and universities and referenced by thousands of engineers and sites around the world; then Cisco Systems referenced Firewall.cx resources in its official global CCNA Academy Program!

Today we look back and feel extremely proud of our accomplishments. After all the recognition, the positive feedback from millions, and the success stories from people who moved forward in their professional careers thanks to Firewall.cx, we feel obligated to continue working hard to help this amazing IT community.

Readers who have been following Firewall.cx since the beginning will easily identify the colourful Firewall.cx logo that has been with us since the site first went online. While we’ve changed the site’s design & platform multiple times the logo has remained the same, a piece of our history to which users can relate.

Obviously times have changed since 2000 and we felt (along with many other members) that it was time to move forward and replace our logo with one that will better suit the current Firewall.cx design & community, but at the same time make a real statement about who we are and what our mission is.

So, without any further delay, we would like to present to our community the new Firewall.cx logo:

Firewall.cx - New Logo - The Site for Networking Professionals

 

Explaining Our New Logo

Our new logo communicates what Firewall.cx and its community are all about. The new slogan precisely explains what we do: Route (verb) Information (knowledge) and Expertise to our audience of Network Professionals – that’s you. Of course, we still remain The No.1 Site for Networking Professionals :)

The icon on the left is a unique design that tells two stories:

  1. It’s a router, similar to Cisco’s popular Visio router icons, symbolising the “routing” process of information & expertise mentioned in our slogan.
  2. It symbolises four IT professionals: three represent our community (red) – that’s you, and the fourth (blue) is the Firewall.cx team. All four IT professionals are connected (via their right arm) and share information with each other (the arrows).

We hope our readers will embrace the new logo as much as we did and continue to use Firewall.cx as a trusted resource for IT Networking and Security topics.

On behalf of the Firewall.cx Team - Thank you for all your support. We wouldn’t be here without you.

Chris Partsenidis
Founder & Editor-in-Chief

Firewall.cx Free Cisco Lab: Equipment Photos

Our Cisco lab equipment has been installed in a 26U 19-inch rack, complemented by blue neon lighting and a 420VA UPS to keep everything running smoothly, should a blackout occur.

The pictures taken show equipment used in all three labs. Please click on the picture of your choice to load a larger version.

The 2912XL is responsible for segmenting the local network, ensuring each lab is kept in its own isolated environment.


Cisco Lab No.1 - The lab's Catalyst 1912 supporting two cascaded 1603R routers, and a 501 PIX Firewall.



Cisco Lab No.2 - The lab's two 1603R routers.




Cisco Lab No.3 - Three high-end Cisco switches flooded in blue lighting, making VLAN services a reality.




Cisco Lab No.3 - Optical links connecting the three switches together, permitting complex STP scenarios.


Firewall.cx Free Cisco Lab: Tutorial Overview

The Free Cisco lab tutorials were created to help our members get the most out of our labs by providing a step-by-step guide to completing specific tasks that vary in difficulty and complexity.

While you are not restricted to these tutorials, we do recommend you take the time to read through them as they cover a variety of configurations designed to enhance your knowledge and experience with these devices.

As one would expect, the first tutorials are simple and designed to help you move gradually into deeper waters. As you move on to the rest of the tutorials, the difficulty will increase noticeably.

NOTE: In order to access our labs, you will need to open TCP ports 2001 to 2010. These ports are required so you can telnet directly into the equipment.

Following is a list of available tutorials:

Task 1: Basic Router & Switch Configuration

Router: Configure router's hostname and Ethernet interface. Insert a user mode and privilege mode password, enable secret password, encrypt all passwords, configure VTY password. Perform basic connectivity tests, check nvram, flash and system IOS version. Create a banner motd.

Switch: Configure switch's hostname, Ethernet interface, System name, Switching mode, Broadcast storm control, Port Monitoring, Port configuration, Port Addressing, Network Management, Check Utilisation Report and Switch statistics.

Task 2: Intermediate Router Configuration

Configure the router to place an ISDN call toward a local ISP using PPP authentication (CHAP & PAP). Set the appropriate default gateway for this stub network and configure simple NAT Overload to allow internal clients to access the Internet. Ensure the call is disconnected after 5 minutes of inactivity.

Configure Access Control Lists to restrict telnet access to the router from the local network. Create a local user database to restrict telnet access to specific users.

Block all ICMP packets originating from the local LAN towards the Internet and allow the following Internet services to the local LAN: www, dns, ftp, pop & smtp. Ensure you apply the ACLs to the router's private interface.

Block all incoming packets originating from the Internet.


Firewall.cx Free Cisco Lab: Our Partners

Our Cisco Lab project is a world first; there is no other Free Cisco Lab offered anywhere in the world! The technical specifications and quality of our lab mark a new milestone in free online education, matching the spirit in which this site was created.

While the development of our lab continues, we publicly acknowledge and thank the companies that have made this dream a reality, one you can benefit from free of charge!

Each contributor is recognised as a Gold or Silver Partner.

 

cisco-lab-partners-1

logo-gfi
cisco-lab-partners-datavision

 

 

cisco-lab-partners-2

 

cisco-lab-partners-symantecpress

 

cisco-lab-partners-ciscopress

cisco-lab-partners-prenticehall

cisco-lab-partners-addison-wesley

Firewall.cx Free Cisco Lab: Access and Help

Connecting to the Lab Equipment

To access our equipment, you must initiate a telnet session to each device. The telnet session may be initiated in either of the following two ways:

1) By clicking on the equipment in the diagram above. If your web browser supports external applications, a DOS-based telnet window will open once you click a device on the diagram, and you'll receive the Cisco Lab welcome screen.

Note: The above method will NOT work with Internet Explorer 7, due to security restrictions.

2) Manually initiating a telnet session. On each diagram, note the device port list in the lower left-hand corner. These are the ports you need to telnet into to access the equipment your lab consists of. Use a telnet program of your choice, or open a DOS window by clicking the "Start" button, selecting "Run" and entering "command" (Windows 95, 98, Me) or "cmd" (Windows 2000, XP, 2003). At the DOS prompt enter:

c:\> telnet ciscolab.no-ip.org xxxx

where 'xxxx' is substituted with the device port number as indicated on the diagram.

For example, to connect to a device that uses device port 2003, the required command would be: telnet ciscolab.no-ip.org 2003

You need to repeat this step for each device you wish to telnet into.

Cisco 'Secret' Passwords

Each lab requires you to set the 'enable secret' password. It is imperative you use the word "cisco", so our automated system is able to reset the equipment for the next user.

We ask that you kindly respect this request to ensure our labs are accessible and usable by everyone.

Since all access attempts are logged by our system, users found storing other 'enable secret' passwords will be blocked from the labs and site in general.

To report any errors or inconsistencies with regards to our lab system, please use the Cisco lab forum.

With your help, we can surely create the world's friendliest and most resourceful Free Cisco Lab!


Firewall.cx Free Cisco Lab: Setting Your Account GMT Timezone

Firewall.cx's Free Cisco Labs use a complex system to allow users from all over the world to create a booking in their local timezone. A prerequisite for a successful booking is that the user has the correct GMT Timezone setting in their Firewall.cx profile, as this is used to calculate and present the scheduling system in the user's local time.

If you are unsure what GMT Timezone you are in, please visit https://greenwichmeantime.com/ and click on your country.

You can check your GMT Timezone by viewing your account profile. This can be easily done by firstly logging into your account and then clicking on "Your Account" from the site's main module:

cisco-lab-gmt-1

Next, click on "Your Info" as shown in the screenshot below:

cisco-lab-gmt-2

 

Finally, scroll down to the 'Forums Timezone' and click on the drop-down box to make your selection.

cisco-lab-gmt-3

Once you've selected the correct timezone, scroll to the bottom of the page and click "Save Changes".

Please note that you will need to adjust your GMT Timezone as you enter/exit daylight savings throughout the year.

You are now ready to create your Cisco Lab booking!

red-line


Firewall.cx Free Cisco Lab: Equipment & Device List

No lab is possible without the right equipment to allow coverage of simple to complex scenarios.

With limited income and our sponsors' help, we've done our best to populate our lab with the latest models and technologies offered by Cisco. Our current investment exceeds US$10,000, and we will continue to purchase more equipment as our budget permits.

We are proud to present to you the following equipment that will be made available in our lab:

Routers
3 x 1600 series routers including BRI S/T, Serial and Ethernet interfaces
1 x 1720 series router including BRI S/T, Serial and Fast Ethernet interfaces
1 x 2610 series router with BRI S/T, Wic-1T, BRI 4B-S/T and Ethernet interfaces
1 x 2612 series router with BRI S/T, Wic-1T, Ethernet and Token Ring interfaces
1 x 2620 series router with Wic-1T and Fast Ethernet interfaces
2 x 3620 series routers with BRI S/T, Wic-1T, Wic-2T, Ethernet, Fast Ethernet interfaces
1 x 1760 series router supporting Cisco Call Manager Express with Fast Ethernet & Voice Wic
1 x Cisco 2522 Frame relay router simulator
Total: 11 Routers
 
Switches
1 x 1912 Catalyst switch with older menu-driven software
1 x 2950G-12T Catalyst switch with 12 Fast Ethernet ports, 2 Gigabit ports (GBIC)
2 x 3524XL Catalyst switches with 24 Fast Ethernet ports, 2 Gigabit ports (GBIC)
Total: 4 Switches
 
Firewall
1 x PIX Firewall 501 with v6.3 software
 
Other Devices/Equipment
  • Gbics for connections between catalyst switches
  • Multimode and Singlemode fiber optic cables for connection between switches
  • DB60 x-over cables to simulate leased lines
  • 420 VA UPS to ensure lab availability during power outages
  • CAT5 UTP cables & patch cords
  • 256/128K Dedicated ADSL Connection for Lab connectivity

red-line


Firewall.cx Free Cisco Lab: Equipment & Diagrams

Each lab has been designed to cover specific topics of the CCNA & CCNP curriculum, but is in no way limited to them, as you are given the freedom to execute all commands offered by the device's IOS.

While the lab tutorials exist only as guidelines to help you learn how to implement the services and features provided by the equipment, we do not restrict their usage in any way. This effectively means that full control is given to you and, depending on the lab, a multitude of variations to the lab's tutorial are possible.

Cisco Lab No.1 - Basic Router & Switch Configuration

The first Cisco Lab involves the configuration of one Cisco 1603R router and Catalyst 1912 switch. This equipment has been selected to suit the aim of this lab, which is to serve as an introduction to Cisco technologies and concepts.

The lab is in two parts, the first one covering basic IOS functions such as simple router and switch configuration (hostname, interface IP addresses, flash backup, banners etc).

The second part focuses on ISDN configuration and dialup, including PPP debugging, where the user is required to dial up to an ISP via the lab's ISDN simulator. Basic access lists are covered to enhance the lab further. Lastly, the user is able to ping real Internet IP addresses from the 1603R, because the back-end (ISP) router is connected to the lab's Internet connection.

cisco-lab-diagrams-lab-1

 

Equipment Configuration:

Cisco Catalyst 1912
FLASH: 1MB
IOS Version: v8.01.02 Standard Edition
Interfaces:12 Ethernet / 2 Fast Ethernet

 

Cisco 1603R
DRAM / FLASH: 16MB / 16MB
IOS Version: 12.3(22)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

red-line

Cisco Lab No.2 - Advanced Router Configuration

The second Cisco lab focuses on advanced router configuration, covering topics such as WAN connectivity (leased lines) with ISDN backup functionality thrown into the package. GRE (encrypted) tunnels and DHCP services, with a touch of dynamic routing protocols such as RIPv2, are also included.

As you can appreciate, the complexity here is greater, so the lab is split into four separate tutorials to ensure you get the most out of each one.

You will utilise all three interfaces available on the routers: Ethernet, ISDN and Serial. The primary WAN link is simulated using a back-to-back serial cable, and the ISDN backup capability is provided through our lab's dedicated ISDN simulator.
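As an illustration of how those pieces might fit together on one of the routers (all addresses, tunnel endpoints and backup timers below are assumptions, not the lab's actual values):

```
! Serial WAN link with ISDN backup, a GRE tunnel, DHCP and RIPv2.
interface Serial0
 ip address 172.16.0.1 255.255.255.252
 backup interface BRI0           ! bring up ISDN when the serial link drops
 backup delay 5 10               ! 5 s to activate, 10 s to deactivate
interface Tunnel0
 ip address 10.0.0.1 255.255.255.252
 tunnel source Serial0
 tunnel destination 172.16.0.2   ! peer router's WAN address
ip dhcp pool LAN
 network 192.168.1.0 255.255.255.0
 default-router 192.168.1.1
router rip
 version 2
 network 192.168.1.0
 network 10.0.0.0
 no auto-summary
```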

cisco-lab-diagrams-lab-2

 

Equipment Configuration:

Cisco 1603R (router 1)
DRAM / FLASH: 18MB / 16MB
IOS Version: 12.3(6a)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

 

Cisco 1603 (router 2)
DRAM / FLASH: 24MB / 16MB
IOS Version: 12.3(6a)
Interfaces: 1 Ethernet / 1 Serial / 1 ISDN BRI

red-line

Cisco Lab No.3 - VLANs - VTP & InterVLAN Routing

The third Cisco lab covers the popular VLAN & InterVLAN routing services, which are becoming very common in large, complex networks.

The lab consists of two Catalyst 3500XL switches and one Catalyst 2950G as backbone switches, attached to a Cisco 2620 router.

Our third lab has been designed to fully support the latest advanced services offered by Cisco switches such as the creation of VLANs and configuration of the popular InterVLAN Routing service amongst all VLANs and switches.

Advanced VLAN features, such as the VLAN Trunking Protocol (VTP) and trunk links throughout the backbone switches, are tightly integrated into the lab's specifications and extend to support a number of VLAN-related services, just as in a real-world environment.

Further extending this lab's potential, we've added EtherChannel support to let you gain experience in aggregating multiple low-bandwidth (100 Mbps) links between switches into one high-bandwidth pipe (400 Mbps in our example).

Lastly, STP (Spanning Tree Protocol) is fully supported. The lab guides you in understanding the use of STP to create fully redundant connections between backbone switches. You can disable backbone links, simulating link loss, and monitor the STP protocol as it activates previously blocked links.

cisco-lab-diagrams-lab-3

This lab requires you to perform the following tasks:

- Basic & advanced VLAN configuration

- Trunk & Access link configuration

- VLAN Database configuration

- VTP (VLAN Trunk Protocol) Server, client and transparent mode configuration

- InterVLAN routing using a 2620 router (Router on a stick)

- EtherChannel link configuration

- Simple STP configuration, Per VLAN STP Plus (PVST+) & link recovery
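As a condensed, illustrative sketch of the tasks above — VLAN IDs, names and addresses are assumptions, and on the 3500XL/2950 platforms the VLAN and VTP commands are entered in 'vlan database' mode rather than as shown:

```
! --- On a backbone switch ---
vlan 10 name SALES               ! created in 'vlan database' mode
vtp server                       ! or 'vtp client' / 'vtp transparent'
interface FastEthernet0/1
 switchport mode access          ! access link toward a host
 switchport access vlan 10
interface FastEthernet0/24
 switchport mode trunk           ! trunk link toward the backbone
! Four 100 Mbps links bundled into one 400 Mbps EtherChannel:
interface FastEthernet0/20
 port group 1                    ! 'channel-group 1 mode on' on newer IOS
! --- On the 2620 (router on a stick) ---
interface FastEthernet0/0.10
 encapsulation dot1Q 10          ! one dot1Q subinterface per VLAN
 ip address 192.168.10.1 255.255.255.0
```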

 

Equipment Configuration:

Cisco 2620 (router 1)
DRAM / FLASH: 48MB / 32MB
IOS Version: 12.2(5d)
Interfaces: 1 Fast Ethernet
 
Cisco Catalyst 3500XL (switch 1)
DRAM / FLASH: 8MB / 4MB
IOS Version: 12.0(5.2)XU - Enterprise Edition Software
Interfaces: 24 Fast Ethernet / 2 Gigabit Ethernet with SX GBIC modules installed
 
Cisco Catalyst 3500XL (switch 2)
DRAM / FLASH: 8MB / 4MB
IOS Version: 12.0(5.4)WC(1) - Enterprise Edition Software
Interfaces: 24 Fast Ethernet / 2 Gigabit Ethernet with SX & LX GBIC modules installed
 
Cisco Catalyst 2950G-12-EI (switch 3)
DRAM / FLASH: 20MB / 8MB
IOS Version: 12.1(6)EA2
Interfaces: 12 Fast Ethernet / 2 Gigabit Ethernet with SX & LX GBIC modules installed

Firewall.cx Free Cisco Lab: Online Booking System


The Online Booking System is the first step required for any user to access our lab. The process is fairly straightforward and designed so that even novice users can follow it without problems.

How Does It Work?

To make a valid booking on our system you must be a registered Firewall.cx user. Existing users are able to access the Online Booking System from inside their Firewall.cx account.

Once registered, you will be able to log into your Firewall.cx account and access the Online Booking System.

The Online Booking System was customised to suit our lab's needs and provide a booking schedule for all resources (labs) available to our community. Once logged in, you are able to select the resource (lab) you wish to access, check its availability and finally proceed with your booking.

There are a number of parameters that govern the use of our labs, to ensure fair usage and avoid abuse of this free service. The maximum session time for each lab depends on its complexity: naturally, the more complex the lab, the more time you will be allowed. When your time has expired you will automatically be logged off, and the lab equipment will be reset for the next scheduled user.

Following are a number of screenshots showing how a booking is created. You will also find the user's control panel, from which you can perform all the functions described here.

Full instructions are always available via the 'Help' link located in the upper right corner of the booking system's page.

The Online Booking System login page:

cisco-lab-booking-system-1

red-line

 

The booking system control panel:

cisco-lab-booking-system-2

red-line

The lab scheduler/calendar:

cisco-lab-booking-system-3

red-line

Creating a booking:

cisco-lab-booking-system-4

red-line

User control panel showing current reservations:

cisco-lab-booking-system-5

red-line
