Tuesday, April 26, 2011

Why Wireless Interference is an important consideration in Wi-Fi networks

Unlike a wired network, where adding more network switches generally improves performance, a wireless network cannot be optimized simply by adding more access points or deploying them more densely – mainly because of wireless interference. In this article, we'll try to understand frequency bands, interference from 802.11 Wi-Fi enabled devices, interference from non-Wi-Fi devices, and how to identify and mitigate wireless interference.

Understanding Wireless Frequency Bands:

You might be familiar with the concept of frequency tuning in radio. When you tune your receiver to a certain frequency, you are able to hear the programs from a particular channel. When you use an analog rotary tuner to switch channels, you might have noticed that as you rotate the tuner, first a faint sound appears, then you get a strong signal, and then the signal weakens again. In other words, each channel's signal is received (with varying amplitude) over a range of frequencies, and when you consider many channels, the total range of frequencies used becomes wider.
Similarly, wireless (Wi-Fi) networks operate mainly in two major frequency bands (ranges) – 2.4 GHz and 5 GHz. Both are unlicensed ISM-band frequencies (Industrial, Scientific and Medical RF bands) – which means any device or technology can use those bands for communications.
2.4 GHz and 5 GHz are frequency bands (ranges of frequencies). The actual communications happen on sub-frequencies called channels within each band (spectrum). For example, in the 2.4 GHz band the channel center frequencies are: Channel 1 – 2.412 GHz; Channel 2 – 2.417 GHz; ... Channel 13 – 2.472 GHz; etc. A wireless radio (on a wireless access point) and a client radio (the wireless client in a laptop) operate on one of these channels to transmit information between them.
Every channel (sub-frequency) overlaps with its adjacent channels. Channel 6, for example, overlaps strongly with channels 5 and 4 but only weakly with channels 3 and 2 (and likewise on the other side). In the 2.4 GHz band, channels 1, 6 and 11 are non-overlapping channels. That brings us to the next topic – interference.
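As a rough sketch of this spacing (using the standard 2.4 GHz channel plan, in which channel n is centered at 2407 + 5×n MHz for channels 1 to 13), a small shell loop can print the channel center frequencies:
for ch in $(seq 1 13); do echo "Channel $ch: $((2407 + 5 * ch)) MHz"; done
Since the centers are only 5 MHz apart but each channel is roughly 20 MHz wide, a transmission on one channel inevitably spills into its neighbors – which is why only channels 1, 6 and 11 stay clear of each other.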

Wireless Interference:

Consider three operational access points situated, say, one meter from each other. If they operate on channels 1, 2 and 3 (respectively), or all on channel 1, there would be a lot of interference affecting all the clients connecting to these three access points. That's because access points and clients generally receive all the transmissions around them and reject those that are not on their channel of operation; but when different access points operate on the same or adjacent channels, they cannot cleanly tell whether the frames they hear were meant for them or not.
But if the three access points operate on channels 1, 6 and 11 (respectively), there would not be much interference even if they are placed very close to each other, because the frequencies used by these channels are far enough apart. In other words, these three channels are non-overlapping channels.
Interference might prevent you from connecting to a wireless access point/network, disconnect you from an existing connection (requiring you to re-connect to the network), or slow down/choke the wireless connectivity. Wireless interference causes especially noticeable problems with real-time applications like voice and video transmitted over the wireless network. Interference is both a performance issue and a security concern (rogue access points, wireless DoS attacks, etc.).
There are two types of wireless interference – Interference from Wi-Fi (802.11) Sources & Interference from Non-Wi-Fi Sources.

Interference from Wi-Fi (802.11) Sources:

Wi-Fi devices that interfere with a wireless network include: your own access points that are in range of each other (and operating on overlapping channels); neighboring access points that happen to operate on overlapping channels; and wireless jammers that intentionally operate on overlapping channels.
So, when two access points operate on the same or adjacent channels and are in range of each other, there will be interference. With 802.11 Wi-Fi networks and devices, people might still be able to access and work on the wireless network even with considerable interference, but at reduced throughput levels. 802.11 networks are resilient enough to retransmit lost packets, but those retransmissions reduce the total available bandwidth.
Similarly, the access points across the street or in a neighboring office might well be operating on the same channel, causing some interference. There are also wireless jammers which deliberately cause interference in the network with the intention of disrupting wireless services.
Since the latest 802.11n networks and devices use multiple antennas, they might be in a slightly better position to reduce interference by comparing the signals received on multiple antennas and averaging out the interfering signals.

Interference from Non-Wi-Fi Sources:

Since 2.4 GHz and 5 GHz are unlicensed frequency bands (spectrum), a lot of other technologies like Bluetooth and ZigBee, and a lot of devices like microwave ovens, wireless cameras, cordless phones, wireless headsets and wireless device controllers, operate in these frequency bands as well, thereby causing interference to Wi-Fi network communications.
Microwave ovens emit across multiple frequencies (wideband) and interfere with Wi-Fi devices consistently. Wireless cameras operate in a narrow band and hence interfere only on particular Wi-Fi frequencies. A Bluetooth headset keeps hopping across the frequency band but still causes interference temporarily wherever it lands.
Even if a complete site survey is done prior to the implementation of the Wi-Fi network, it is still difficult to account for all the non-Wi-Fi sources of interference, because newer and smaller wireless devices keep appearing on the market and could be brought in by employees at any time, thereby causing (unintentional) disturbance to the corporate Wi-Fi network.

Detecting and Mitigating Wireless Interference:

5 GHz is a relatively clean spectrum without much interference from non-Wi-Fi sources, but most commercially available Wi-Fi devices operate in the more popular 2.4 GHz band. It might be better to implement Wi-Fi networks in the 5 GHz frequency band (for this, both the client adapters in the laptops and the access points must support 5 GHz operation), especially with high-performance 802.11n networks.
Some vendors fit sensors on access points that detect interference in their channel of operation and switch to another channel, but this may not be a solution for interference from non-Wi-Fi sources. It is also possible to reduce the chances of interference by controlling (reducing) the transmission power levels of access points. Using multiple/ multi-sector antennas might also improve the signal-to-noise ratio (SNR).
Wi-Fi sources: Interference from other Wi-Fi sources is relatively easier to detect, and in some cases even to mitigate. The basic principle with Wi-Fi sources is to avoid neighboring access points operating on the same channel (or adjacent channels). As far as possible, neighboring access points should operate on non-overlapping channels (like 1, 6 and 11).
It is quite difficult to monitor each access point and change its channel of operation manually for all access points (though it is possible). Even if channels are set manually, an access point that reboots (due to power loss, etc.) may come up on an arbitrary channel that is not the one set manually, so the process of assigning channels manually has to be repeated.
To automate this process, a wireless controller that provides centralized management can be used to continuously gauge the channel of operation of all the neighboring access points and adjust their channel settings dynamically. Most wireless controllers can manage only their own make of access points, but wireless management software is available that can manage multi-vendor access points/ controllers.
Non-Wi-Fi sources: Normal wireless management software/controllers may not detect interference from non-Wi-Fi sources (though some do), but there are specialized spectrum analyzers that can be employed for this purpose. Unlike Wi-Fi sources of interference, simply changing the channel of operation of the access points may not be a solution for non-Wi-Fi interference, so the best way to tackle it might be to physically remove the sources, or shield them so that their emissions are restricted to a certain area.
There are also free Wi-Fi scanning tools that help detect 802.11 sources of interference, such as NetStumbler, Kismet and inSSIDer, while commercial spectrum analyzers are available for detecting non-Wi-Fi sources.
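If you just need a rough picture of which channels the surrounding access points are using (rather than a full spectrum analysis), a quick scan from a Linux laptop is often enough. This sketch assumes the wireless-tools package is installed and that the wireless interface is named wlan0 – adjust the interface name for your system:
sudo iwlist wlan0 scan | grep -E 'ESSID|Channel|Signal level'
Grouping the output by channel quickly shows whether your own access points and your neighbors' are crowding onto the same or adjacent channels.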

Monday, April 18, 2011

Tips for Planning a Wireless Network

Know Your Building’s Bones

Do you know what your building is made of?  Before you install your wireless network, you should.  Dense building materials like filled cinder blocks, brick, rock walls, adobe or stucco construction can significantly reduce the strength of your wireless signal and increase the number of access points needed to ensure a fast, reliable connection. Also, anything that holds water, like pipes, bathrooms and elevator shafts, tends to limit the range of wireless signals.

Count Heads and Balance the Load

Typically, small and medium-sized businesses (SMBs) require fewer than 24 access points, but businesses must consider bandwidth in the overall plan.  Without adequate bandwidth to handle traffic, you may not realize expected productivity gains. IT staff should also be able to manage multiple access points and balance the load accordingly; centrally-managed wireless controller appliances can do this dynamically to boost performance and save time.

Power Up

After deciding how many WLAN access points you need, determine the power requirements necessary to support them, typically 15 watts or less per access point.  While power requirements differ for each business, power injectors (Power over Ethernet) are still a great option for powering the access points.  The injectors can be placed anywhere along the line within 100 meters and provide greater flexibility by eliminating the need for an external AC adapter power supply.

Safe and Secure Networking

Who among us hasn't searched for an unsecured wireless network to jump on when we are away from home or work?  Keeping the wireless network safe is a top priority, so avoid using obsolete protocols for wireless security, like WEP (Wired Equivalent Privacy). Better alternatives include WPA (Wi-Fi Protected Access) and WPA2, which will help safeguard against hackers.  For increased protection, IT departments should configure access points to use the strongest AES-based encryption available.

Common Wireless Networking Missteps

Are you ready to jump online?  Avoid some of the more common wireless network pitfalls:

This Access Point Worked at Home

Depending on the size of the business, wireless devices designed for home use may not be a fit for the business environment.  Although home access points are less expensive, they are not designed to achieve the results necessary beyond a small home office.  Businesses with multiple access points require devices designed to achieve a seamless connection.  Home access points are designed for single deployments and will interfere with other access points in multiple access point scenarios.

Just Add Access Points

The easiest locations for access points are not necessarily the best locations.  While a comprehensive wireless site survey is ideal, it may be cost prohibitive for most small businesses, ranging from $2,000 - $3,000, so consider these approaches to the access point challenge:
  • Install multiple access points and err on the side of over-coverage.  The initial investment in multiple access points will save money in the long run, compared to commissioning a site survey
  • Perform a rudimentary site survey independently by setting up one access point, charting its coverage using one laptop and using its coverage range as a guideline for access points throughout the facility
  • Consider a wireless LAN controller.  The controller recognizes all of the connected access points and sets the appropriate channel and power setting.  Some controllers even let you load a diagram of the floor plan, providing a heat map that shows the signal strength of each access point

Do What You’ve Always Done

It’s easy to become complacent with wireless routines.  Network equipment is constantly improving, with networked devices becoming smarter and more complex -- just like the technologies that hackers use to attack networks.  Don’t put your small business at risk -- understand exactly where the wireless marketplace stands and where the technology is headed to avoid exposing the business to security risks that waste time and money.

Don’t Plan to Grow

When implementing a WLAN, think about current and future networking needs, and be prepared to grow with the technology. One benefit of a wireless infrastructure is that it is fairly simple to reconfigure an office space during times of growth or change.  The equipment and the configuration should be driven by business goals -- be mindful of what the potential needs will be six months to a year into the future.
A wireless network can be a great asset to your business, but be careful to consider the objectives, limitations and the potential future benefits.  Also, be aware of the possible pitfalls to avoid disappointment and lost productivity time.  When done right, a wireless implementation can translate into a successful business plan.

Friday, April 15, 2011

Protecting website using basic authentication

Apache uses its mod_auth authentication modules to protect the whole of a site or part of it.
Here we will see how to provide access to your website to only authenticated users. I will demonstrate and explain the use of basic authentication.
In Apache’s main configuration file located at /etc/httpd/conf/httpd.conf or inside <VirtualHost></VirtualHost> directives, put in the following:
<Directory />
AuthName "Authentication Needed"
AuthType Basic
AuthUserFile /etc/httpd/conf/security_users
require valid-user
</Directory>
Let me explain the above directives one by one:
<Directory /> means that the directives apply to the directory / and everything beneath it, which includes the site's DocumentRoot
AuthName creates a label that is displayed by web browsers to users.
AuthUserFile sets the file that Apache will consult to check user names and passwords for authenticating users.
AuthType specifies what type of authentication scheme to use.
The require directive states that only valid users (those in the AuthUserFile) are allowed access to the site.
Now we create the file that will hold the users and their passwords with the following command:
htpasswd -c /etc/httpd/conf/security_users testuser
New password:
Re-type new password:
Adding password for user testuser
-c tells htpasswd to create the password file (/etc/httpd/conf/security_users), and testuser is the user to be created. The -c flag is not needed when adding further users to the same file.
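If you later need to add a second user to the same file, run htpasswd without the -c flag (otherwise the file would be recreated and the existing user lost). The username below is only an example:
htpasswd /etc/httpd/conf/security_users anotheruser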
Now restart Apache.
/etc/init.d/httpd restart (on Debian/Ubuntu systems the service is apache2: /etc/init.d/apache2 restart)
Access the site e.g. http://localhost or http://IP_of_apache
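If you prefer to check the protection from the command line rather than a browser, curl can be used (assuming curl is installed). The first request, sent without credentials, should come back with 401 Unauthorized; the second prompts for testuser's password and should return the page headers:
curl -I http://localhost/
curl -I -u testuser http://localhost/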

Monday, April 11, 2011

What is Server Virtualization?

Servers rarely run at full capacity. Server Virtualization enables multiple applications running on multiple operating systems to run on the same server, utilizing that unused additional capacity. Let's find out more about Server Virtualization in this article.

What is Server Virtualization?

Server Virtualization Architecture Diagram
In the above diagram, the three servers on the left-hand side represent stand-alone servers – there is an operating system and an application on each of them. That's the conventional set-up. Well, almost. There are some drawbacks with this set-up – some operating systems/ applications do not use all the resources of an entire server, so the additional capacity goes under-used. Also, unless more physical servers are introduced, there is no backup for these stand-alone servers/applications, should they fail.
The three servers on the right-hand side represent virtualized servers. In each server (for the sake of simplicity), let's consider that multiple applications are running on multiple operating systems. Each OS/application is isolated from the others. Further, server resources like processor capacity, RAM and hard-disk capacity are reserved (or allocated) separately for each OS/application.
The OS/application pairs run over a software module called the hypervisor. The hypervisor resides between the bare-metal hardware and the virtual systems. It basically de-couples the operating systems/ applications from the underlying physical hardware and provides a common management/ operating platform for multiple operating systems/ applications.
So, in a nutshell, Server Virtualization can be defined as multiple instances of different operating systems/ applications running on the same physical server hardware. This approach has a lot of advantages, as discussed below.

Advantages of Server Virtualization:

  • Since multiple operating systems and applications share the same server resources (RAM, processor, etc.), the resources of virtualized servers are utilized more fully than with a stand-alone OS/application per server – Server Virtualization improves the resource utilization ratio.
  • Server Virtualization enables server consolidation – resulting in fewer servers being needed for the same operating systems/ applications.
  • If a server is down (due to hardware or application failure, or because of maintenance activities), application downtime can be avoided by migrating the virtual systems (OS/application pairs) to other servers. This ensures high availability of the applications.
  • The applications can also be transferred from a primary data center to a secondary data center (with certain virtualization software, provided up-to-date copies are kept at the secondary data center), enabling an effective disaster recovery strategy.
  • Server Virtualization avoids over-purchasing/ over-allocation of servers for certain applications.
  • On-demand resource allocation is possible along with the ability to scale up / scale down resources.
  • The time required for getting an application up and running is greatly reduced, especially for smaller applications that can be provisioned in one of the existing virtual servers.
  • Server Virtualization is an Operating System-neutral technology – multiple operating systems can reside alongside one another on the same server.
  • Even though various operating systems/ applications reside in the same server, they are logically isolated from each other, thereby enhancing security.
  • The operating systems/ applications (virtual systems) are hardware independent. They just need to communicate with the hypervisor and the hypervisor communicates with the hardware components.
  • Server Virtualization is useful for testing applications, or for running them in the production environment temporarily, as there is no need to buy additional servers for doing that.

Limitations of Server Virtualization:

  • The resource allocation for each virtual system needs to be planned carefully. If too few resources are allocated, application performance might suffer; if too many are allocated, the result is under-utilization. The servers that are to be virtualized should have sufficient resources in the first place.
  • 32-bit processors/ operating systems/ applications can make use of only a limited amount of memory in the server (4 GB), and hence 64-bit computing is preferred for server virtualization. But not all applications have been migrated to 64-bit computing yet.
  • Only processors that support hardware virtualization can be used to virtualize servers (a quick Linux check is shown after this list). And for migrating the virtual systems from one server to another, some vendors require a similar model/make of processor.
  • The hypervisor itself utilizes some processing power. This is in addition to the processing power required for the applications.
  • The cost of virtualization software, management applications, management expertise required etc, might limit the usage of server virtualization in smaller environments with very few servers.
  • Sometimes, a separate SAN/NAS network might be required for storage as there may not be sufficient storage capacity inside the server for multiple OS/ Applications.
  • The software switch running inside the hypervisor to connect the various virtual systems (operating system/ application) may not be able to integrate with existing network settings like VLAN/ QoS settings. At the least, it cannot implement all the features of a specialized network switch connected to an individual server in a full-fledged way.
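As a quick check for the hardware virtualization support mentioned in the processor point above, on a Linux host you can look for the vmx (Intel VT-x) or svm (AMD-V) CPU flags; a count of zero suggests the processor (or a BIOS setting) does not expose hardware virtualization:
egrep -c '(vmx|svm)' /proc/cpuinfo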

Monday, April 4, 2011

Squid Access Controls

Tag Name acl
Usage acl aclname acltype string1 ... | "file"
Description
This tag is used for defining an access list. When using "file", the file should contain one item per line. By default, regular expressions are CASE-SENSITIVE; to make them case-insensitive, use the -i option.

Acl Type: src
Description
This matches the client's IP address.
Usage acl aclname src ip-address/netmask.
Example
1. This refers to the whole network with address 172.16.1.0 - acl aclname src 172.16.1.0/24
2. This refers to a single specific IP address - acl aclname src 172.16.1.25/32
3. This refers to the range of IP addresses from 172.16.1.25 to 172.16.1.35 - acl aclname src 172.16.1.25-172.16.1.35/32

Note
Take care to give the correct netmask value.

Acl Type: dst
Description
This is the same as src, with the only difference being that it refers to the server (destination) IP address. Squid first does a DNS lookup to get the IP address from the domain name in the request header, and then this acl is interpreted.

Usage acl aclname dst ip-address/netmask.

Acl Type: srcdomain
Description
Since Squid needs to do a reverse DNS lookup (from the client IP address to the client domain name) before this acl is interpreted, it adds some delay to the request.

Usage acl aclname srcdomain domain-name
Example
acl aclname srcdomain .kovaiteam.com

Note
Here the leading "." is important – it makes the acl match kovaiteam.com and all its subdomains.

Acl Type: dstdomain
Description
This is an effective method to control access to a specific destination domain

Usage acl aclname dstdomain domain-name
Example
acl aclname dstdomain .kovaiteam.com
Hence this looks for *.kovaiteam.com in the URL
Note
Again, the leading "." is important – it makes the acl match kovaiteam.com and all its subdomains.

Acl Type: srcdom_regex
Description
Since Squid needs to do a reverse DNS lookup (from the client IP address to the client domain name) before this acl is interpreted, it adds some delay to the request.

Usage acl aclname srcdom_regex pattern
Example
acl aclname srcdom_regex kovai
Hence this looks for the word "kovai" in the client domain name
Note
It is better to avoid this acl type if you want to avoid the extra latency.

Acl Type: dstdom_regex
Description
Like dstdomain, this is an effective method of controlling access to destination domains

Usage acl aclname dstdom_regex pattern
Example
acl aclname dstdom_regex kovai
Hence this looks for the word "kovai" in the destination (server) domain name

Acl Type: time
Description
Time of day, and day of week

Usage acl aclname time [day-abbreviations] [h1:m1-h2:m2]
day-abbreviations:
S - Sunday
M - Monday
T - Tuesday
W - Wednesday
H - Thursday
F - Friday
A - Saturday
h1:m1 must be less than h2:m2
Example
acl ACLTIME time M 9:00-17:00
ACLTIME matches Mondays from 9:00 to 17:00.

Acl Type: url_regex
Description
The url_regex means to search the entire URL for the regular expression you specify. Note that these regular expressions are case-sensitive. To make them case-insensitive, use the -i option.

Usage acl aclname url_regex pattern
Example
acl ACLREG url_regex cooking
ACLREG matches URLs containing "cooking" but not "Cooking"
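As noted at the top of this page, the -i option makes the regular expression case-insensitive. For example, the following acl (the name is just an example) would match both "cooking" and "Cooking" anywhere in the URL:
acl ACLREGI url_regex -i cooking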

Acl Type: urlpath_regex
Description
urlpath_regex does regular expression pattern matching on the URL path, i.e. the URL without the protocol and hostname. Note that these regular expressions are case-sensitive.

Usage acl aclname urlpath_regex pattern
Example
acl ACLPATHREG urlpath_regex cooking
ACLPATHREG matches URLs whose path contains "cooking" (but not "Cooking"), ignoring the protocol and hostname.
For example, if the URL is http://www.visolve.com/folder/subdir/cooking/first.html, this acl type only looks at the part after http://www.visolve.com – in other words, the regex is matched against /folder/subdir/cooking/first.html .

Acl Type: port
Description
Access can be controlled by the destination (server) port number

Usage acl aclname port port-no
Example
This example allows http_access only to the destination 172.16.1.115:80 from the network 172.16.1.0/24

acl acceleratedhost dst 172.16.1.115/255.255.255.255
acl acceleratedport port 80
acl mynet src 172.16.1.0/255.255.255.0
http_access allow acceleratedhost acceleratedport mynet
http_access deny all

Acl Type: proto
Description
This specifies the transfer protocol

Usage acl aclname proto protocol
Example
acl aclname proto HTTP FTP
This matches the HTTP and FTP protocols

Acl Type: method
Description
This specifies the HTTP request method

Usage acl aclname method method-type
Example
acl aclname method GET POST
This matches the GET and POST methods only

Acl Type: browser
Description
Regular expression pattern matching on the request's user-agent header

Usage acl aclname browser pattern
Example
acl aclname browser MOZILLA
This matches requests coming from browsers that have the keyword "MOZILLA" in the User-Agent header.

Acl Type: ident
Description
String matching on the user's name

Usage acl aclname ident username ...
Example
You can use ident to allow specific users access to your cache. This requires that an ident server process runs on the user's machine(s). In your squid.conf configuration file you would write something like this:

ident_lookup on
acl friends ident kim lisa frank joe
http_access allow friends
http_access deny all

Acl Type: ident_regex
Description
Regular expression pattern matching on the user's name (the ident output). Use REQUIRED to accept any non-null ident

Usage acl aclname ident_regex pattern
Example
You can use ident to allow specific users access to your cache. This requires that an ident server process run on the user's machine(s). In your squid.conf configuration file you would write something like this:

ident_lookup on
acl friends ident_regex joe
This looks for the pattern "joe" in username


Acl Type: src_as
Description
source (client) Autonomous System number



Acl Type: dst_as
Description
destination (server) Autonomous System number



Acl Type: proxy_auth
Description
User authentication via external processes. proxy_auth requires an EXTERNAL authentication program to check username/password combinations (see authenticate_program ).

Usage acl aclname proxy_auth username...
use REQUIRED instead of username to accept any valid username
Example
acl ACLAUTH proxy_auth usha venkatesh balu deepa

This acl is for authenticating users usha, venkatesh, balu and deepa by external programs.
Warning
proxy_auth can't be used in a transparent proxy. It collides with any authentication done by origin servers. It may seem like it works at first, but it doesn't. Also, when a Proxy-Authentication header is sent but is not needed during ACL checking, the username is NOT logged in access.log.


Acl Type: proxy_auth_regex
Description
This is the same as proxy_auth, with one difference: it matches a regex pattern against the usernames validated by the authenticate_program

Usage acl aclname proxy_auth_regex [-i] pattern...

Acl Type: snmp_community
Description
SNMP community string matching

Example
acl aclname snmp_community public
snmp_access aclname


Acl Type: maxconn
Description
A limit on the maximum number of connections from a single client IP address. It is an ACL that will be true if the user has more than maxconn connections open. It is used in http_access to allow/deny the request just like all the other acl types.

Example
acl someuser src 1.2.3.4
acl twoconn maxconn 5
http_access deny someuser twoconn
http_access allow !twoconn

Note
maxconn acl requires client_db feature, so if you disabled that (client_db off) maxconn won't work.


Acl Type: req_mime_type
Usage acl aclname req_mime_type pattern
Description
Regular expression pattern matching on the request content-type header

Example
acl aclname req_mime_type text

This acl looks for the pattern "text" in request mime header

Acl Type: arp
Usage acl aclname arp ARP-ADDRESS
Description
Ethernet (MAC) address matching. This acl is supported on Linux, Solaris, and probably BSD variants.

To use ARP (MAC) access controls, you first need to compile in the optional code.
Do this with the --enable-arp-acl configure option:
% ./configure --enable-arp-acl ...
% make clean
% make

If everything compiles, then you can add some ARP ACL lines to your squid.conf
Default acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 1025-65535
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
Example
acl ACLARP arp 11:12:13:14:15:16
ACLARP refers to the Ethernet MAC address 11:12:13:14:15:16
Note
Squid can only determine the MAC address for clients that are on the same subnet. If the client is on a different subnet, then Squid cannot find out its MAC address.

Tag Name http_access
Usage http_access allow|deny [!]aclname ...
Description
Allowing or denying http access based on defined access lists

If none of the "access" lines cause a match, the default is the opposite of the last line in the list. If the last line was deny, then the default is allow. Conversely, if the last line is allow, the default will be deny. For these reasons, it is a good idea to have a "deny all" or "allow all" entry at the end of your access lists to avoid potential confusion
Default http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
If there are no "access" lines present, the default is to allow the request


Caution
The deny all line is very important. After all the http_access rules, if access isn't explicitly denied, it is ALLOWED!! So specifying a LOT of http_access allow rules and forgetting the deny all after them achieves NOTHING: if access isn't allowed by one of your rules, the default action (ALLOW) will be triggered. So, don't forget the deny all rule AFTER all the other rules.

And, finally, don't forget rules are read from top to bottom. The first rule matched will be used. Other rules won't be applied. 

Tag Name icp_access
Usage icp_access allow|deny [!]aclname ...
Description
Allows or denies ICP queries based on defined access lists

Default icp_access deny all
Example
icp_access allow all - Allow ICP queries from everyone

Tag Name miss_access
Usage miss_access allow|deny [!]aclname...
Description
Used to force your neighbors to use you as a sibling instead of a parent. For example:

acl localclients src 172.16.0.0/16
miss_access allow localclients
miss_access deny !localclients
This means that only your local clients are allowed to fetch MISSES and all other clients can only fetch HITS.
Default By default, allow all clients who passed the http_access rules to fetch MISSES from us.
miss_access allow all


Tag Name cache_peer_access
Usage cache_peer_access cache-host allow|deny [!]aclname ...
Description
Similar to 'cache_peer_domain ' but provides more flexibility by using ACL elements.

The syntax is identical to 'http_access' and the other lists of ACL elements. See 'http_access ' for further reference.
Default none
Example
The following example could be used, if we want all requests from a specific IP address range to go to a specific cache server (for accounting purposes, for example). Here, all the requests from the 10.0.1.* range are passed to proxy.visolve.com, but all other requests are handled directly.

Using acls to select peers,
acl myNet src 10.0.0.0/255.255.255.0
acl cusNet src 10.0.1.0/255.255.255.0
acl all src 0.0.0.0/0.0.0.0
cache_peer proxy.visolve.com parent 3128 3130
cache_peer_access proxy.visolve.com allow cusNet
cache_peer_access proxy.visolve.com deny all

Tag Name proxy_auth_realm
Usage proxy_auth_realm string
Description
Specifies the realm name, which is to be reported to the client for proxy authentication (part of the text the user will see when prompted for the username and password).

Default proxy_auth_realm Squid proxy-caching web server
Example
proxy_auth_realm My Caching Server

Tag Name ident_lookup_access
Usage ident_lookup_access allow|deny aclname...
Description
A list of ACL elements, which, if matched, cause an ident (RFC 931) lookup to be performed for this request. For example, you might choose to always perform ident lookups for your main multi-user Unix boxes, but not for your Macs and PCs

Default
By default, ident lookups are not performed for any requests
ident_lookup_access deny all
Example
To enable ident lookups for specific client addresses, you can follow this example:

acl ident_aware_hosts src 198.168.1.0/255.255.255.0
ident_lookup_access allow ident_aware_hosts
ident_lookup_access deny all
Caution
This option may be disabled by using --disable-ident with the configure script.


Examples:
(1) To allow http_access for only one machine with MAC address 00:08:c7:9f:34:41
To use MAC addresses in ACL rules, configure Squid with the --enable-arp-acl option.
acl all src 0.0.0.0/0.0.0.0
acl pl800_arp arp 00:08:c7:9f:34:41
http_access allow pl800_arp
http_access deny all
(2) To restrict access to work hours (9am - 5pm, Monday to Friday) from the 192.168.2.0/24 network
acl ip_acl src 192.168.2.0/24
acl time_acl time M T W H F 9:00-17:00
http_access allow ip_acl time_acl
http_access deny all
(3) Can I use multiple time-based access control lists, for different users with different timings?
ACL Definitions
acl abc src 172.161.163.85
acl xyz src 172.161.163.86
acl asd src 172.161.163.87
acl morning time 06:00-11:00
acl lunch time 14:00-14:30
acl evening time 16:25-23:59

Access Controls
http_access allow abc morning
http_access allow xyz morning lunch
http_access allow asd lunch

This is wrong. Here is why:
The access line "http_access allow xyz morning lunch" will not work, because the ACLs on a single http_access line are ANDed together, while successive http_access lines are ORed:

http_access ACTION statement1 AND statement2 AND statement3 OR
http_access ACTION statement1 AND statement2 AND statement3 OR
........
So the rule "http_access allow xyz morning lunch" will never match: at any given time, morning AND lunch will always evaluate to false, because morning and lunch can never both be true at the same moment. As one of them is false and the statements on a line are ANDed, the whole line is false.
The fix is to split that line in two:
http_access allow xyz morning
http_access allow xyz lunch
If the request comes from xyz and we are within one of the allowed time ranges, one of the two rules will match TRUE and the other will match FALSE. TRUE OR FALSE is TRUE, and access will be permitted.
Finally, the access controls look like this:
http_access allow abc morning
http_access allow xyz morning
http_access allow xyz lunch
http_access allow asd lunch
http_access deny all
(4) Rules are read from top to bottom. The first rule matched will be used. Other rules won't be applied.
Example:
http_access allow xyz morning
http_access deny xyz
http_access allow xyz lunch

If xyz tries to access something in the morning, access will be granted. But if he tries to access something at lunchtime, access will be denied – by the 'deny xyz' rule, which is matched before the 'xyz lunch' rule.

Saturday, March 26, 2011

Understand IPv6 Addresses

IPv6 Address Types

Increasing the IP address pool was one of the major forces behind developing IPv6. It uses a 128-bit address, meaning that we have a maximum of 2¹²⁸ addresses available, or 340,282,366,920,938,463,463,374,607,431,768,211,456, or enough to give multiple IP addresses to every grain of sand on the planet. So our friendly old 32-bit IPv4 dotted-quads don't do the job anymore; these newfangled IPs require eight 16-bit hexadecimal colon-delimited blocks. So not only are they longer, they use numbers and letters. At first glance, those mondo IPv6 addresses look like impenetrable secret code:
2001:0db8:3c4d:0015:0000:0000:abcd:ef12 
 


Under IPv4 we have the old familiar unicast, broadcast and multicast addresses. In IPv6 we have unicast, multicast and anycast. With IPv6 the broadcast addresses are not used anymore, because they are replaced with multicast addressing.

IPv6 Unicast

This is similar to the unicast address in IPv4 – a single address identifying a single interface. There are four types of unicast addresses:
  • Global unicast addresses, which are conventional, publicly routable addresses, just like conventional IPv4 publicly routable addresses.
  • Link-local addresses are akin to the private, non-routable addresses in IPv4 (10.0.0.0/8, 172.16.0.0/12, 192.168.0.0/16). They are not meant to be routed, but confined to a single network segment. Link-local addresses mean you can easily throw together a temporary LAN, such as for conferences or meetings, or set up a permanent small LAN the easy way.
  • Unique local addresses are also meant for private addressing, with the addition of being unique, so that joining two subnets does not cause address collisions.
  • Special addresses are loopback addresses, IPv4-address mapped spaces, and 6-to-4 addresses for crossing from an IPv4 network to an IPv6 network.
If you read about site-local IPv6 addresses, which are related to link-local, these have been deprecated, so you don't need to bother with them.
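On a Linux host you can see the link-local (and any global) IPv6 addresses already assigned to an interface using the iproute2 tools; link-local addresses are the ones starting with fe80. The interface name eth0 here is just an example:
ip -6 addr show dev eth0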

Multicast

Multicast in IPv6 is similar to the old IPv4 broadcast address: a packet sent to a multicast address is delivered to every interface in a group. The IPv6 difference is that it is targeted: instead of annoying every single host on the segment with broadcast blather, only hosts that are members of the multicast group receive the multicast packets. IPv6 multicast is routable, and routers will not forward multicast packets unless there are members of the multicast groups to forward the packets to. Anyone who has ever suffered from broadcast storms will appreciate this mightily.

Anycast

An anycast address is a single address assigned to multiple nodes. A packet sent to an anycast address is then delivered to the first available node. This is a slick way to provide both load-balancing and automatic failover. The idea of anycast has been around for a long time; it was proposed for inclusion in IPv4 but it never happened.
Several of the DNS root servers use a router-based anycast implementation, which is really a shared unicast addressing scheme. (While there are only thirteen authoritative root server names, the total number of actual servers is considerably larger, and they are spread all over the globe.) The same IP address is assigned to multiple interfaces, and then multiple routing table entries are needed to move everything along.
IPv6 anycast addresses contain fields that identify them as anycast, so all you need to do is configure your network interfaces appropriately. The IPv6 protocol itself takes care of getting the packets to their final destinations. It's a lot simpler to administer than shared unicast addressing.

Address Dissection

Let's take another look at our example IPv6 address:

2001:0db8:3c4d:0015:0000:0000:abcd:ef12
______________|____|___________________
global prefix subnet  Interface ID
The prefix identifies it as a global unicast address. It has three parts: the network identifier, the subnet, and the interface identifier.
The global routing prefix comes from a pool assigned to you, either by direct assignment from a Regional Internet Registry like APNIC, ARIN, or RIPE NCC, or more likely from your Internet service provider. The subnet and interface IDs are controlled by you, the hardworking local network administrator.

You'll probably be running mixed IPv6/IPv4 networks for some time. IPv6 addresses must total 128 bits. IPv4 addresses are represented like this:
0000:0000:0000:0000:0000:0000:192.168.1.25
Eight 16-bit blocks are required in an IPv6 address; the embedded IPv4 address occupies the last 32 bits (the equivalent of two blocks), which is why only six blocks of zeroes precede it – seven colon-delimited groups in all.
The localhost address is 0000:0000:0000:0000:0000:0000:0000:0001.
Naturally we want shortcuts, because these are long and all those zeroes are just dumb-looking. Leading zeroes within a block can be omitted, and one contiguous run of all-zero blocks can be replaced with a double colon (::), so we end up with these:
2001:0db8:3c4d:0015:0:0:abcd:ef12
2001:0db8:3c4d:0015::abcd:ef12
::192.168.1.25
::1
I usually end up counting on my fingers, which is probably not the best method. ipv6calc is invaluable for checking your work. Suppose you're not sure if your compressed notation is correct. ipv6calc displays the uncompressed notation:
$ ipv6calc --in ipv6addr --out ipv6addr --printuncompressed ::1
0:0:0:0:0:0:0:1
$ ipv6calc --in ipv6addr --out ipv6addr --printfulluncompressed 2001:0db8:3c4d:0015::abcd:ef12
2001:0db8:3c4d:0015:0000:0000:abcd:ef12

Thursday, March 10, 2011

Domain Authentication for CentOS, Red Hat and Fedora

1. Log in to the machine as root, or else get root privileges using "su -"

2. Copy the "LikewiseIdentityServiceOpen-5.1.0.5249-linux-i386-rpm.sh" package to the user's Desktop

3.  cd username/Desktop

4. Run the command "sh LikewiseIdentityServiceOpen-5.1.0.5249-linux-i386-rpm.sh"

5. Type 'yes' to accept the license agreement and continue the installation process.

6. Check the hostname using the 'hostname' command

7. Change the hostname by editing /etc/sysconfig/network with vim

eg:-
 HOSTNAME=09cpu0129L
:wq (save and quit)

8. To make the changes take effect, restart the network service and close the terminal. Open a new terminal.

9. Type 'hostname' to check that the hostname is correct.

  note:- Make sure that the hostname does not already exist in Active Directory.

10. domainjoin-cli join amritavidya.edu ictsadmin 
    give the password...
    
  note:- You will see SUCCESS if the system is added to the domain successfully

11. vim /etc/likewise/lsassd.conf
   uncomment the following line (at approximately line 81)
eg :-
    assume-default-domain = yes (remove the #)

12. Restart the following services to implement the changes
    /etc/init.d/lsassd restart
    /etc/init.d/lwrdrd restart
    /etc/init.d/netlogond restart

13. reboot 

14. You will now be able to log on to the machine with domain credentials.
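As a rough check that the join and the lsassd lookups are working (assuming the NSS integration set up by Likewise is in place, and substituting a real domain account for the example username), the following should return the domain user's account details:
getent passwd someuser
id someuser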

Thursday, March 3, 2011

Samba: How to share files for your LAN without user/password

This tutorial will show how to set up Samba to allow read-only file sharing for your LAN computers as a guest (without being prompted for a password).
Because users won't be prompted for a username/password, this setup is meant for a LAN where all hosts are trusted.
There are many advantages to sharing files in a LAN. For instance, when you have a multimedia box (playing music, movies, ...), it is great to be able to access the music on that box from any machine in your LAN.
Let's get started. In the first place, you need to have samba installed.
$sudo apt-get install samba or yum install samba
Because we are going to relax Samba's security, make sure only your local network can access the Samba service. To do so, open and edit /etc/samba/smb.conf
$sudo vi /etc/samba/smb.conf
and set interfaces to lo and your local network interface. In my case: eth1.
interfaces = lo eth1
bind interfaces only = true
Now it is time to relax Samba's default security by changing the security variable: make sure security is set to share instead of user, and that the guest account is enabled:
security = share
...
...
guest account = nobody
Now, we can create a share to be accessible to guest users:
[Guest Share]
        comment = Guest access share
        path = /path/to/dir/to/share
        browseable = yes
        read only = yes
        guest ok = yes
You can now test that your configuration is good using testparm:
$ testparm
If everything is fine, it is time to reload the Samba service to have your new configuration taken into account:
$ sudo /etc/init.d/samba reload   (or: service smb restart)
That's it, anybody in your LAN can now access your share.
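A quick way to confirm guest access from another machine on the LAN is to list the shares anonymously with smbclient (assuming the smbclient package is installed; replace 'mediabox' with your server's hostname or IP address):
smbclient -L //mediabox -N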

Tuesday, March 1, 2011

Network Storage - The Basics

Direct Attached Storage (DAS)
Direct attached storage is the term used to describe a storage device that is directly attached to a host system. The simplest example of DAS is the internal hard drive of a server computer, though storage devices housed in an external box come under this banner as well. DAS is still, by far, the most common method of storing data for computer systems. Over the years, though, new technologies have emerged which work, if you'll excuse the pun, out of the box.

Network Attached Storage (NAS)
Network Attached Storage, or NAS, is a data storage mechanism that uses special devices connected directly to the network media. These devices are assigned an IP address and can then be accessed by clients via a server that acts as a gateway to the data, or in some cases allows the device to be accessed directly by the clients without an intermediary.
The beauty of the NAS structure is that it means that in an environment with many servers running different operating systems, storage of data can be centralized, as can the security, management, and backup of the data. An increasing number of companies already make use of NAS technology, if only with devices such as CD-ROM towers (stand-alone boxes that contain multiple CD-ROM drives) that are connected directly to the network.
One of the big advantages of NAS is expandability: need more storage space? Add another NAS device and expand the available storage. NAS also brings an extra level of fault tolerance to the network. In a DAS environment, a server going down means that the data that server holds is no longer available; with NAS, the data is still available on the network and accessible by clients. Fault-tolerance measures such as RAID (which we'll discuss later) can be used to make sure that the NAS device does not become a point of failure.
Storage Area Network (SAN)
A SAN is a network of storage devices that are connected to each other and to a server, or cluster of servers, which acts as an access point to the SAN. In some configurations a SAN is also connected to the network. SANs use special switches as a mechanism to connect the devices. These switches, which look a lot like normal Ethernet networking switches, act as the connectivity point for SANs. Making it possible for devices to communicate with each other on a separate network brings with it many advantages. Consider, for instance, the ability to back up every piece of data on your network without having to 'pollute' the standard network infrastructure with gigabytes of data. This is just one of the advantages of a SAN that is making it a popular choice with companies today, and is a reason why it is forecast to become the data storage technology of choice in the coming years.
Irrespective of whether the network storage mechanism is DAS, NAS or SAN, there are certain technologies that you'll find in almost every case. The technologies that we are referring to are things like SCSI and RAID. For years SCSI has been providing a high speed, reliable method for data storage. Over the years, SCSI has evolved through many standards to the point where it is now the storage technology of choice. Related, but not reliant on SCSI, is RAID. RAID (Redundant Array of Independent Disks) is a series of standards which provide improved performance and/or fault tolerance for disk failures. Such protection is necessary as disks account for 50% of all hardware device failures on server systems. Like SCSI, RAID, or the technologies used to implement it, have evolved, developed and matured over the years.
In addition to these mainstays of storage technology, other technologies feature in our network storage picture. One of the most significant of these is Fibre Channel (yes, that's fibre with an 're'). Fibre Channel is a technology used to interconnect storage devices, allowing them to communicate at very high speeds (up to 10Gbps in future implementations). As well as being faster than more traditional storage technologies like SCSI, Fibre Channel also allows devices to be connected over a much greater distance. In fact, Fibre Channel can be used up to six miles. This allows devices in a SAN to be placed in the most appropriate physical location.

Basic Info on Network Racks – Wall Mounted, Floor Standing & Accessories

What is a Network Rack?

A Local Area Network (LAN) comprises multiple pieces of networking equipment like network switches, routers, UTM appliances, servers, patch panels, cables, modems, etc. This equipment is generally kept inside a network rack, which is a closed or open enclosure that can hold it. The dimensions of networking hardware follow certain industry standards so that it fits into the network racks, which follow the same standards. The common width of a network rack (and of the equipment) is 19 inches – most racks are made to accommodate any equipment that can fit into this width. Also, the equipment has fixed heights that are specified in terms of Rack Units.
1 Rack Unit (RU)  =  1.75 inches / 4.445 cm.
So, if a piece of networking equipment is specified as 2U, it has a height of 3.5 inches. And if one has the sizes (in RU) of all the equipment that needs to be placed in a rack, the required height of the rack (in RU) can easily be calculated as the sum of the heights of all the individual units – generally choose slightly more than that, in order to accommodate the equipment freely in the rack and also to provide for future expansion.
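For example, a rack that has to hold a 1U router, a 2U UTM appliance, two 1U switches, a 2U patch-panel section and a 4U server needs at least 1 + 2 + 1 + 1 + 2 + 4 = 11U, so a 12U or 15U rack would be a reasonable choice, with the larger size leaving room for future additions.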

Why are Network Racks required?

Network Racks are an important component of the structured cabling system.
Network racks are required for holding all the networking equipment neatly, efficiently and safely. If there were no network racks/ patch panels, the cabling would look cluttered. Network racks can hold many components in a relatively small space, which enables one to utilize the available space very efficiently. Network racks are also required for the physical safety of all the equipment kept within, as most of them can be locked and access denied to unauthorized personnel.
Network racks also help improve the health of the networking equipment stored inside. For example, when the cables are routed carefully and neatly through the cable managers in the racks, there is little chance of data loss due to excessive cable bends. Also, the cooling fans in the network racks provide additional cooling to prevent the equipment kept inside them from being damaged by overheating.

Wall Mounted Network Racks:

Photo of a Wall Mount Network Rack
  • Wall Mounted Network Racks are useful for housing edge devices in individual departments with only a few pieces of networking equipment.
  • Common sizes: 6U, 9U, 12U, 15U.
  • The front panel generally has a hardened-glass door so the equipment inside can be viewed clearly, and a lock to ensure physical security.
  • There are two common types of wall mounted racks – Single Section Racks, which have one glass door in the front that can be fully opened, with cable entry/exit via holes in the top and bottom of the rack; and Double Section Racks, which are like single section racks but have an additional opening behind the rack (a rear panel is fixed to the wall, the whole rack is fixed to one side of that rear panel, and it can be swung open and shut to let one reach the rear side of the rack).
  • Network Racks are generally made with a steel body (sometimes with aluminum enclosures) and a powder-coated paint finish.
  • Network Racks generally have provisions for ventilation in the top/bottom/sides through vents/ holes.
  • They contain some accessories as well, which are discussed in the last section of this article.

Floor Standing Network/Server Racks:

photo of floor standing network server rack
  • Floor standing racks are used to house both network and server equipment. They are primarily used in data centers and other places with a large amount of equipment.
  • Most of the points that apply to the wall mount racks above apply to floor standing server racks as well, except that these are bigger and stand on the floor (some might even have wheels attached, to enable their movement).
  • Common sizes: 24U, 30U, 36U, 42U, 45U.
  • The whole front section generally comes with full length doors with hardened glass/ lock. Some might even have rear doors.
  • In addition to the normal cable managers, these floor standing racks also offer specialized channels for electrical cabling, network cabling, etc., which ensure neat routing of cables at the rear, along the height of the rack.
  • These racks can house more equipment and can handle loads of around 450-500 kg.
  • Floor standing racks are supplied either in CKD (Completely Knocked Down) condition, where individual components are shipped to the site and the rack itself is assembled on site, or assembled in the factory and shipped as a whole.

Network Rack – Accessories:

  • Fan Housing Units: These are mounted either in the roof (or) in the side plate. Each unit generally consists of 2/4/6 fans that are used for cooling the equipment inside the rack. Some vendors also provide rack-mounted fan housing trays that can be mounted along with other equipment in the rack to provide cooling at specific places.
  • AC Distribution Box: Network racks generally contain a lot of equipment that needs AC power. It would be inconvenient if each unit had to be powered from an external source separately, so an AC distribution box is used inside the rack to give power to individual units using one or two power lines from outside. The AC distribution box generally consists of 5 to 15 sockets (5A/15A).
  • Cable Manager: A cable manager is generally an open conduit (with metal holdings) for passing multiple cables across the horizontal section of the rack. This makes the cabling arrangement look neat as well as prevent any excessive bending of the cables.
  • Fixed/ Sliding Shelves: Not all the equipment that needs to be kept in a network rack is rack mountable; some of it comes in different shapes and sizes. So, a fixed shelf plate is inserted into the rack and such equipment is kept on it. For example, standing desktop-style servers can be kept on the shelves. There are certain heavy-duty shelves to accommodate heavier equipment, and there are sliding shelves which can be pulled out along with the equipment placed on them for, say, frequent servicing.
  • Additional cable channels and conduits enable easier and neat arrangements of cables.
  • Modem holders: Some vendors provide special chassis-type shelves in order to hold a larger number of modems vertically, one next to another. Otherwise, they are kept horizontally and each shelf can hold only a few of them, which results in inefficient usage of rack space.

Using ping for network troubleshooting

Ping, which stands for Packet InterNet Groper, is a great utility when it comes to troubleshooting network issues. It is part of the iputils package. It sends ICMP "echo request" packets to the target system and listens for "echo reply" responses. Ping records the round-trip time and any packet loss, and prints a summary at the end showing the number of packets sent and received, the percentage of packet loss and the total time. It also prints the minimum, average and maximum round-trip times along with the mean deviation (mdev).
After the brief introduction, let’s dig into the nitty-gritty of ping
In its simplest and usual form, ping is used to see if a host is alive.
We will ping www.google.com and analyze the output, so type
ping  www.google.com
PING www.l.google.com (64.233.169.103) 56(84) bytes of data.
64 bytes from yo-in-f103.google.com (64.233.169.103): icmp_seq=1 ttl=128 time=31.7 ms
64 bytes from yo-in-f103.google.com (64.233.169.103): icmp_seq=2 ttl=128 time=30.9 ms
64 bytes from yo-in-f103.google.com (64.233.169.103): icmp_seq=3 ttl=128 time=32.0 ms
64 bytes from yo-in-f103.google.com (64.233.169.103): icmp_seq=4 ttl=128 time=31.2 ms

— www.l.google.com ping statistics —
4 packets transmitted, 4 received, 0% packet loss, time 3001ms
rtt min/avg/max/mdev = 30.979/31.509/32.093/0.481 ms
Let’s see what we have from the output.
The first line shows that ping is sending ICMP "echo request" packets to the host www.l.google.com, with IP 64.233.169.103, carrying 56 bytes of data. This first line proves that our DNS resolution is working, so ping can be used as a simple name resolution tool.
The second line gives information about the echo reply packet: it received 64 bytes (why 64 bytes when we said 56 bytes above? I will explain that later), the name of the host (with its IP) the reply was received from, the ICMP sequence number, the time-to-live value, and the time between the packet being sent and the reply being received. The important things to look for in these lines are the sequence numbers, which should increment by 1 if there is no packet loss, and the time, where a higher value would indicate network latency.
At the end we have a summary of the pings performed. Here 4 packets were sent and 4 received, with 0% packet loss. The whole process, from the time I started ping to the point when I stopped it, took 3001 milliseconds.
Then we have the minimum, average, maximum and standard deviation of the round-trip times.
ICMP echo request and echo reply packets each carry 8 bytes of ICMP header. That's why we see 8 bytes more than the amount of data (default 56) we sent.
By default, all Linux distributions ping the target host continuously until stopped with Ctrl+C.
To send a limited number of pings, use -c (for count). The following will send 5 ICMP packets of type echo request
ping -c 5 www.google.com
By default, ping waits one second between sending packets. This can be changed with the -i (interval) option. The following will wait 2 seconds before sending the next packet.
ping -i 2 www.google.com
Interval can be made even smaller. For example, to wait half a second before sending a packet, use
ping -i .5 www.google.com
To change the default packet size of 56 bytes, use -s (for size) option. To send 168 bytes, use the following
ping -s 168 www.example.com
PING www.example.com (208.77.188.166) 168(196) bytes of data.
176 bytes from www.example.com (208.77.188.166): icmp_seq=1 ttl=128 time=93.6 ms
176 bytes from www.example.com (208.77.188.166): icmp_seq=2 ttl=128 time=94.3 ms
176 bytes from www.example.com (208.77.188.166): icmp_seq=3 ttl=128 time=95.1 ms

— www.example.com ping statistics —
3 packets transmitted, 3 received, 0% packet loss, time 2000ms
rtt min/avg/max/mdev = 93.667/94.391/95.167/0.708 ms
Notice the new size of 176 bytes, because of the addition of the 8 bytes of header data.
Maximum packet size is 65,535 bytes.
Be careful of sending very large packets to target host.
Different options can be combined as well. For example to send 3 packets of size 200 bytes with .5 sec interval, we would use
ping -i .5 -s 200 -c 3 www.example.com
Another option (which can be dangerous) is -f (flood). It sends a lot of packets very fast. If an interval is not given, it sets the interval to zero and outputs packets as fast as they come back, or one hundred times per second, whichever is more. Only the super-user may use this option with a zero interval.
ping -f www.host.com