Sunday, August 7, 2011

The FCC’s Plan to Bring Broadband to Rural America



According to a June 30 article by Tricia Pursell in The Daily Item, the Federal Communications Commission (FCC) is working on a new initiative to bring broadband Internet access to rural areas across the country. As with any new initiative, there are some positive and some not-so-positive effects. Here we will attempt to look at both sides of the situation.
  • This initiative will definitely make life easier for those who currently are unable to access the Internet from their homes. Take Susie Ewing for example. Susie works in Beaver Springs where she is able to access the Internet at a decent speed. This is a good thing, she says, because her job requires access to the Internet, but when she goes home at the end of the day, it’s a different story.  Living in rural McClure, she has no Internet access.
  • Currently there are families in rural Pennsylvania that do have Internet access, yet their neighbors just a quarter of a mile down the road do not. The FCC's plan will help even this out and make the Internet accessible, at a decent speed, to everyone: not just people living in rural Pennsylvania, but in rural areas across the country.
  • The FCC has a history of bringing communication technology to rural areas. In the past, it used monies in its fund to support telephone service in rural areas. Implementing this project and extending Internet access, another form of communication, to everyone will help move this country forward.
On the surface this all sounds great. But there are some underlying difficulties that should also be considered.
  • Historically, the FCC has not been able to distribute its funds evenly. This could leave some people and areas without access in spite of its efforts. The FCC states that it will not increase the size of the fund, which means that the whole process may take longer and leave many areas still waiting.
  • Its Connect America Fund plans include subsidizing Internet service providers if "their costs to bring service to rural locations is way high above what the norm is." This opens up a whole lot of questions: who determines whether the cost is "way high above what the norm is," and what prevents a service provider from taking advantage of this subsidy?
  • The Universal Service Fund, from which the Connect America Fund comes, has been accused of being wasteful and inefficient in the past. There is concern that this will continue, but the FCC has answered that concern with its plan to require accountability from the companies receiving subsidies and to distribute the funds more evenly.
As with any new plan or initiative, there are positive aspects and negative aspects; the Connect America plan is no different. But the people in rural areas who will be getting broadband Internet access for the first time because of this initiative will likely agree: to them, the aspects are all positive!


From http://www.broadbandserviceproviders.com/

Thursday, August 4, 2011

10 Reasons Your DSL Broadband Connection Cuts In and Out


Sometimes when accessing the Internet, you may notice your connection cuts in and out. This can be frustrating at best and devastating at worst. So, what could be causing it? If you find out and fix the problem, your next "surfing" expedition will be smooth sailing. Here are 10 reasons your DSL broadband connection may be cutting in and out:
  1. Distance: Speed of Internet access and clarity of connection depend on the distance between your home and the telephone exchange that is providing your service. The further away you are from the exchange the more likely there will be interruptions.
  2. Equipment at the exchange: The ISP/ telephone company that provides your Internet service must keep up with advancing technology. If it has any equipment that would be considered outdated in this fast-paced technological world, you could have breaks in your connection.
  3. The contention ratio: Find out how many other people in your immediate area share the DSL broadband signal with you. This is the contention ratio. The more people who share the signal, the more uploads, downloads, and general traffic, which can cause connection problems with your broadband DSL.
  4. Equipment at home: Make sure your modem and router can handle the speed your DSL broadband delivers. Some people have found improved performance of their DSL broadband by trying a different modem or router.
  5. Broadband contract: If your broadband connection seems slower than you expected it would be, take another look at the contract with your ISP. You might have signed up for a slower version of access. If that’s the case, you can contact your provider and change your contract.
  6. Phone line: DSL runs over your phone line, and old or degraded copper wiring that has not yet been replaced (for example, by fiber optic lines) may not carry higher speeds cleanly, causing outages in service.
  7. Extension sockets: Double-check your extension sockets and make sure they are properly installed. Noise and crackles on the line can result if they are not.
  8. Junction boxes: In the same vein, you’ll want to double check the condition of the junction boxes outside your house. If moisture gets in there, your modem can be upset and the connection reset.
  9. Trees: Yes, trees can be a nightmare. When it is windy, the branches can stress the line, which in turn causes crackling and static on the connection.
  10. Other wireless components: Is it possible that you're using a cordless telephone and a wireless router? Since they often use the same frequency band (commonly 2.4 GHz), they could be interfering with the connection, causing it to slow down, become noisy, or cut out.
Once you’ve looked at all the possible causes for the disruption in your DSL Broadband connection, you will be able to take the steps needed to get it fixed.
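To narrow down an intermittent connection, it helps to measure it first. Below is a minimal Python sketch of a drop counter; the probe function, sample count, and the simulated failure pattern are all illustrative assumptions, and in practice you would plug in a real ping or HTTP check:

```python
import time

def monitor_connection(probe, samples=10, interval=0.0):
    """Run `probe` repeatedly and tally successes vs. drops.

    `probe` is any zero-argument callable returning True when the
    connection responds (e.g. a ping or an HTTP HEAD request).
    """
    drops = 0
    for _ in range(samples):
        if not probe():
            drops += 1
        time.sleep(interval)
    return {"samples": samples, "drops": drops,
            "loss_pct": 100.0 * drops / samples}

# Example with a stand-in probe that fails every 4th attempt,
# simulating an intermittent DSL line:
flaky = iter(range(10))
report = monitor_connection(lambda: next(flaky) % 4 != 3, samples=10)
```

A loss percentage that rises at certain times of day points at contention; constant loss points more at wiring or equipment.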

Saturday, July 9, 2011

What do you think of accessing blocked sites & hiding IP address using personal VPN services?

If you are a network admin, you might be aware of the various techniques students and employees use to gain access to blocked sites: typing an IP address instead of a URL, using URL shorteners, using the various proxy servers available on the Internet, and so on. A paid personal VPN service is also being employed these days to access blocked sites and hide the IP address. Let us read more about personal VPN services in this article.

What is a Personal VPN Service?

A personal VPN service is similar to the organization-wide VPN (Virtual Private Network) employed by network administrators. But here, the user directly forms a tunnel between his laptop/desktop and a server hosted by the personal VPN service provider, and all the content that travels between his computer and that server is encrypted.
So, if a user wants to access a website that can be accessed only from a particular country (tracked based on the geo-location of IP addresses), he uses a personal VPN service: he reaches their server, picks up a new IP address based on the server's location, and from there is redirected to the website he wants to browse. Instead of visiting the website directly, he visits it through the server hosted by the personal VPN service provider.
For example, certain websites like Pandora and Hulu can be accessed only from the United States. So, people outside the United States can sign up with a personal VPN service provider that has a server within the United States, connect to that server, and from there browse those websites with a newly acquired US IP address! One more reason for using a personal VPN service is to browse anonymously without revealing your original IP address.
The price for such a service mostly ranges from 5 USD to 20 USD per month, and most providers offer multiple VPN access methods like PPTP, SSL, and SSTP. Some service providers have a monthly bandwidth cap, while most of them offer unlimited browsing. Many personal VPN service providers have servers in multiple countries and let the user choose which country's IP address to use for an Internet session.

Benefits of Personal VPN Service

In certain countries that block genuine websites/services, users might use a personal VPN to access those services. Data encryption also makes it more secure to browse websites from public Wi-Fi hot-spots or premises where the Internet connection is shared. Personal VPN services can also be used to browse anonymously (to a certain extent) without revealing your IP address, and hence it is possible to hide your personal details and browsing habits from search engines and other e-commerce websites.

Disadvantages of Personal VPN Service

Even though people use a VPN and change their IP address while browsing websites, their identity can still be traced back; it's just slightly more difficult. The VPN service provider might get their IP addresses blocked for offering such a service. Some ISPs (in certain countries) block all VPN connections going through them, but a few providers support browsing through Stealth SSL/ SSTP VPNs, which are difficult to block. The bandwidth consumed is still the same, if not slightly more, and the user's computer needs to encrypt and decrypt all sessions, which might put additional strain on the processor.
Besides, a Personal VPN service could be mis-used by users in the following ways:
1. Users accessing websites that genuinely need to be blocked in schools/ colleges/ offices, like social networking sites, video streaming sites, etc.
2. Users can use this service to download MP3s, videos, etc. anonymously.
3. Users might use this service for illegal/ disallowed activities.
4. Users might make cheap VoIP calls (as they can pick up another country's IP address and pretend to be in that country while making those calls).
Well, like it or not, these services are currently available, and as a network administrator you need to be aware of them. So, the next time there are too many VPN tunnels opened by employees (tunnels not controlled by the organization), you may well want to check what they are doing.
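One rough way to get that visibility is to flag outbound connections to well-known VPN ports in your firewall or NetFlow export. The Python sketch below is a simplified illustration; the port list and log format are assumptions, and SSL/SSTP tunnels riding on port 443 will evade it, which matches the point above about stealth VPNs being hard to block:

```python
# Well-known ports for common VPN protocols (illustrative, not exhaustive):
VPN_PORTS = {1723: "PPTP", 1194: "OpenVPN", 500: "IKE/IPsec", 4500: "IPsec NAT-T"}

def flag_vpn_candidates(connections):
    """connections: iterable of (src_ip, dst_ip, dst_port) tuples,
    e.g. exported from a firewall or NetFlow collector."""
    return [(src, dst, VPN_PORTS[port])
            for src, dst, port in connections if port in VPN_PORTS]

# A hypothetical connection log: HTTPS traffic plus two tunnel candidates.
log = [("10.0.0.5", "203.0.113.7", 443),
       ("10.0.0.8", "198.51.100.2", 1723),
       ("10.0.0.9", "192.0.2.10", 1194)]
flagged = flag_vpn_candidates(log)
```

Port-based detection is only a first pass; long-lived, high-volume flows to a single external host are another useful signal.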

Wednesday, June 29, 2011

A Conceptual Introduction to Static Routing, RIP & OSPF

In large networks, Layer-3 Switches/ Routers are important and inevitable. They help contain the broadcast domain by sub-dividing the network into various segments. But once a network is segmented, you need to route packets between the various sub-networks. Routing protocols/ methodologies like Static Routing, RIP (Routing Information Protocol) & OSPF (Open Shortest Path First) help you do just that.

Introduction:

Wouldn’t it be a simpler world if a whole campus could be put on a single network? It would, but it would be a very congested network too! So, when you are planning a network for an enterprise company (or) a huge campus it is a good practice to segment the network into multiple sub-networks.
Layer-2 Network Switches are enough to communicate within a network (sub-network) but they cannot pass on packets to other networks. That’s where you need Routers / Layer-3 Switches (L3 Switches with routing capabilities are used more these days).
If there are only two networks and one path between them, it is easy to specify the routing table to the L3 Switch/ Router – Just forward all packets with a certain destination IP address range to the other L3 Switch/ Router. That’s it! But practically, there are multiple sub-networks within a campus and multiple links to (and from) each sub-network. Multiple links are required for both reaching destination networks faster and also for redundancy in links (in case of primary link failure).
That's why we need Routing Methodologies & Routing Protocols. L3 Switches/ Routers build Routing Tables, where they store information on the various nodes in the network and the best path to reach each node. These Routing Tables can be formed manually (for small networks) using Static Routing (or) automatically (for larger networks) by using dynamic routing protocols like RIP, OSPF, BGP, etc.
Another important function of the Dynamic Routing Tables is to automatically adapt to the change in network topologies (like link/ device failures, addition/ deletion of nodes, etc) by first identifying that change quickly and using alternate routes (links) / devices to reach the destinations.

Static Routing:

The process of specifying the routing tables for every router manually by a network administrator (in a small network) is called Static Routing. Basically, if there are only a couple of Layer3 Switches in the network, it is easy to specify the routes for packets to be delivered to the other network manually.
Static Routing is simple to implement and is fast as it doesn’t require any extra processing capacity / additional bandwidth. But it does not route packets around failed links/ devices and hence does not account for redundancy. So, a small network without any need for redundant links might find Static Routing useful.
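A static routing table is essentially a list of (destination network, next hop) entries that is consulted with a longest-prefix match. A minimal Python sketch using the standard `ipaddress` module; the networks and next hops are hypothetical:

```python
import ipaddress

# A hypothetical static routing table: (destination network, next hop).
ROUTES = [
    (ipaddress.ip_network("10.0.0.0/8"),  "10.0.0.1"),
    (ipaddress.ip_network("10.1.0.0/16"), "10.1.0.1"),
    (ipaddress.ip_network("0.0.0.0/0"),   "192.168.1.254"),  # default route
]

def lookup(dst):
    """Return the next hop for `dst`, preferring the most specific
    (longest-prefix) matching route, as real routers do."""
    dst = ipaddress.ip_address(dst)
    matches = [(net, hop) for net, hop in ROUTES if dst in net]
    return max(matches, key=lambda m: m[0].prefixlen)[1]
```

With these entries, a packet to 10.1.2.3 follows the /16 route rather than the broader /8, and anything unmatched falls through to the default route.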

Distance Vector Routing (Vs) Link State Routing:

Dynamic Routing is divided into two major categories – Distance Vector Routing & Link State Routing.
In Distance Vector Routing, each L3 Switch/ Router maintains a table of distances/ hops to every node from its perspective of the network, and the least-cost route between any two nodes is (mostly) the route with the minimum distance or minimum hops. Each node shares its table with its immediate neighbors frequently (like every 30 seconds) and whenever there is a change in the network topology. Example: RIP
In Link State Routing, each L3 Switch/ Router maintains a complete network map of the local area that it is present in, with all the routers maintaining an identical database. The least cost route between any two nodes is calculated using many factors including maximum bandwidth, minimum delay, maximum throughput, etc. In Link State Routing, only the topology updates are exchanged between the routers when there is a change in network topology (or) every 30 minutes (less frequently). Example: OSPF
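The distance-vector table exchange described above can be sketched in a few lines. This is a simplified Bellman-Ford relaxation, not RIP itself (no split horizon, timers, or hop limits); the router names and costs are illustrative:

```python
INF = float("inf")

def dv_update(me, my_table, neighbor, neighbor_table, link_cost):
    """One distance-vector exchange (Bellman-Ford relaxation): fold a
    neighbor's advertised distances into our own table.
    Tables map destination -> (cost, next_hop)."""
    updated = dict(my_table)
    for dest, (cost, _) in neighbor_table.items():
        if dest == me:
            continue  # ignore routes back to ourselves
        via = link_cost + cost
        if via < updated.get(dest, (INF, None))[0]:
            updated[dest] = (via, neighbor)
    return updated

# Router A is directly connected to B (cost 1); B advertises a route to C.
a_table = {"B": (1, "B")}
b_table = {"A": (1, "A"), "C": (2, "C")}
a_table = dv_update("A", a_table, "B", b_table, link_cost=1)
```

After one exchange, A learns it can reach C at cost 3 via B, without ever seeing the full topology; that locality is both the simplicity and the weakness of distance-vector protocols.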

RIP (Routing Information Protocol):

* RIP is an open-standards-based distance vector routing protocol.
* RIP is an Intra-domain routing protocol used within an autonomous system – AS (where all routers are controlled by the same entity).
* In RIP, all the routers/ L3 switches create their own routing table with information like the lowest-cost links to each router in the network, next-hop router(s), etc.
* RIP uses hop count/ distance as its link cost metric.
* RIP allows for convergence around failed links/ network topology changes, but recovery is in the order of minutes.
* Total number of nodes (Routers/ L3 Switches) supported by RIP is limited due to finite hop count restrictions in the protocol.
* Periodic updates of Routing Tables (every 30 seconds for example) happens even when there are no changes in the network topology.

OSPF (Open Shortest Path First):

* OSPF is an open and standards based routing protocol.
* OSPF is an Intra-domain routing protocol based on link state routing.
* In OSPF, the entire network is called an Autonomous System (as it is maintained by one entity). The Autonomous System is divided into different areas (sub-networks).
* In OSPF, there are some special types of routers based on their function – Area border routers connect two or more areas, Autonomous System boundary routers connect two or more Autonomous Systems, etc.
* The Router/ Layer 3 switch maintains the complete network map of all the nodes in the area that it is present in. The routing table is the same for all the routers in a given area.
* Link State Advertisements are exchanged between all the routers in an area – every router receives the LSAs of every other router within the area.
* OSPF updates the routing tables of all the routers in an area immediately when there is a change in the network topology – which is faster than RIP, and also periodically (every 30 minutes for example) – which is less frequent than RIP.
* OSPF calculates the link cost in terms of minimum delay, maximum throughput, maximum bandwidth etc. So, it is not strictly based on the hop count and OSPF gives higher priority for faster links (for example).
* OSPF supports Variable Length Subnet Masks (VLSM), which gives it the ability to work with different subnets and hence conserve IP addresses.
* OSPF provides for authentication of messages between the Routers/ L3 Switches (through MD5).
* QoS (Quality of Service) metrics can be applied to OSPF based on bandwidth calculations (for example), to avoid high latency paths.
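The shortest-path computation that a link-state router runs over its database is Dijkstra's algorithm. A compact Python sketch over a hypothetical three-router area; the link costs are illustrative (in OSPF they would typically be derived from bandwidth):

```python
import heapq

def shortest_paths(graph, source):
    """Dijkstra's shortest-path-first algorithm, the computation a
    link-state router runs over its database. `graph` maps each node
    to a dict of {neighbor: link_cost}."""
    dist = {source: 0}
    heap = [(0, source)]
    while heap:
        d, node = heapq.heappop(heap)
        if d > dist.get(node, float("inf")):
            continue  # stale heap entry, already improved
        for nbr, cost in graph[node].items():
            nd = d + cost
            if nd < dist.get(nbr, float("inf")):
                dist[nbr] = nd
                heapq.heappush(heap, (nd, nbr))
    return dist

# Toy area with three routers; the slow R1-R2 link is bypassed via R3.
topology = {
    "R1": {"R2": 10, "R3": 1},
    "R2": {"R1": 10, "R3": 2},
    "R3": {"R1": 1, "R2": 2},
}
dist = shortest_paths(topology, "R1")
```

Note how R1 reaches R2 at cost 3 via R3 rather than cost 10 directly; this is exactly the "higher priority for faster links" behavior described above.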

Thursday, June 23, 2011

Create a PPPoE client connection

You can install the PPPoE client just like you install any other dial-up networking connection. To create a PPPoE client connection, follow these steps:
  1. Click Start, click Control Panel, and then double-click Network and Internet Connections.
  2. Click Network Connections, and then click Create a new connection in the Network Tasks pane.
  3. After the Network Connection Wizard starts, click Next.
  4. Click Connect to the Internet, and then click Next.
  5. Click Set up my connection manually, and then click Next.
  6. Click either Connect using a broadband connection that requires a user name and password or Connect using a broadband connection that is always on.
  7. Type the Internet service provider (ISP) name that your ISP provided, and then click Next.
  8. Type the user name that the ISP provided.
  9. Type the password that the ISP provided.
  10. Type the password one more time to confirm it, and then click Next.
  11. Click Add a shortcut to this connection to my desktop.
  12. Click Finish to complete the wizard.

Wednesday, June 8, 2011

Happy IPv6 Day, What Did You Get?

World IPv6 Day, when Web properties around the world test drive their sites using the IPv6 protocol, is today. It's not a holiday by any real stretch of the imagination, nor is it a full-on switch-over. But it will be a milestone in the ultimate, slow transition from current IPv4 addressing to IPv6 addressing.

According to the Internet Society's FAQ page, the event "is a global-scale test flight of IPv6 sponsored by the Internet Society. On World IPv6 Day, major Web companies and other industry players will come together to enable IPv6 on their main Web sites for 24 hours. The goal is to motivate organizations across the industry -- Internet service providers, hardware makers, operating system vendors and Web companies -- to prepare their services for IPv6 to ensure a successful transition as IPv4 address space runs out."
Basically this is one big 24-hour awareness event to prepare us all for what life will be like after all the current 32-bit IP addresses run out.
Oh, wait, that already happened.

There is no sign of any kind of catastrophic collapse now that there's no more unallocated IPv4 addresses, but there is a little bit more urgency in the efforts to get people to pay more attention to IPv6. But the fact that there's no Year 2000-type disaster looming in the headlines seems to have put a damper on IPv6 network deployments.
The trick to implementing IPv6 may be not treating it as an upgrade migration project, at least in traditional terms. When you upgrade software, for instance, the older version is typically removed, and your organization must deal with the consequences, good and bad, of the new version.
But when implementing IPv6, there's no reason to eliminate IPv4. In fact, you would not want to, because a vast majority of networked servers will still roll with IPv4 in a dual-stack or tunneling scenario for a long time. Instead, when you swap out existing infrastructure for any reason, all you need to do is just make sure the new equipment or software is IPv6-compliant. Then it's just a matter of taking a little extra time to make sure IPv6 is running alongside IPv4 for that new component.
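The coexistence of the two protocols even shows up in the address format: dual-stack software often represents an IPv4 peer as an IPv4-mapped IPv6 address. A quick illustration with Python's standard `ipaddress` module, using documentation example addresses:

```python
import ipaddress

# An IPv4-mapped IPv6 address embeds a full IPv4 address in its
# low-order 32 bits; dual-stack sockets often report IPv4 peers this way.
mapped = ipaddress.IPv6Address("::ffff:192.0.2.1")
plain = mapped.ipv4_mapped  # the embedded IPv4Address, or None

# An ordinary (native) IPv6 address has no embedded IPv4 address.
native = ipaddress.IPv6Address("2001:db8::1")
```

Tooling that inventories addresses during a migration should normalize these mapped forms, or the same host can be counted twice.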
That's a piecemeal way of solving the problem, of course. Even as you roll out IPv6-compatible components, you will also need to identify all of the components in your infrastructure that actually need IPv6. It could be a big list, depending on the size of your organization, because there's a lot of IT assets that are network enabled. Client computers, Web servers, file servers, printers, routers, WiFi, firewalls, switches... even any mobile device your organization for which has responsibility.
Without a complete inventory, there could come a day when your organization will switch over to IPv6 and discover gaping deadzones in your network topology because of one forgotten switch somewhere.

The new protocol is about more than just packets. You also need to make sure IPv6 can do the tasks you need it to do. IPv6 is a protocol that is still relatively immature as an application platform. There are still a lot of things that IPv4 can do that IPv6 users can't, because there may not be networking applications that are IPv6 ready.
In May, for instance, Cisco took a big step forward in solving this problem when it released a new version of its IOS platform with a reported 200 new IPv6 features.
"Our biggest goal in IOS now is to have parity between IPv4 and IPv6," Faraz Aladin, director, Marketing Cloud Switching and Services at Cisco told InternetNews.com. "Whatever you can do in IPv4, you should be able to do with IPv6."

Some of this is really basic stuff, too. The Cisco announcement in May highlighted the availability of the Network Time Protocol in IPv6. But in truth, any application that uses IP addresses will need to be updated.
Finally, you will need a plan for handling network security. Some experts suggest that you won't need network address translation (NAT) gateways at the edges of your network anymore. With 2^128 addresses available, you don't need an internal set of private IP addresses hidden behind firewalls; each machine or device can have its own globally routable IP address. But if your network is on the same topology level as the entire Internet, what does that mean for security?
Right now, security is probably the biggest question mark for IPv6 deployment, because network security relies on well-defined network edges, ideally guarded by a firewall. If NAT went away, the firewall would have to maintain security across a fuzzier boundary. It has been suggested that with the huge network segments available in IPv6, malicious hackers would have to scan your network segment for years to find a potentially vulnerable target. Great, but what if you have to run the same kind of scan for internal vulnerabilities?
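That scanning claim is easy to sanity-check with back-of-the-envelope arithmetic. In the sketch below, the probe rate is an assumed figure for illustration:

```python
# Back-of-the-envelope: time to exhaustively scan one standard /64
# IPv6 subnet at an (optimistic) one million probes per second.
addresses = 2 ** 64            # hosts in a single /64 subnet
probes_per_second = 1_000_000  # assumed scanner speed
seconds = addresses / probes_per_second
years = seconds / (365.25 * 24 * 3600)  # roughly 580,000 years
```

Even at a million probes per second, one /64 takes on the order of half a million years, which is why brute-force scanning stops being a practical discovery technique, for attackers and defenders alike.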
Another potential angle of attack: in IPv6, IP options such as destination or authentication information are not found in the main header but in extension headers, so an IPv6 packet is roughly laid out as main header, then extension headers, then the packet payload. Malicious packets could carry specific extension information, or just a ton of junk extension data, that might choke the target device.
Security, then, will be the space to watch as you start moving toward IPv6. Make sure you have efficient security tools in place that follow best practices, which are even now being put together. Until these issues are solved, start making that inventory and analyze which applications use IP addressing. Then you will be ready to swim into IPv6 when the lifeguards say the water is safe.

Thanks to the author: Brian Proffitt, InternetNews.com

Sunday, June 5, 2011

Why you need to implement a Storage Area Network using iSCSI over an IP Network

[Image 1: what iSCSI is and why it makes a good Storage Area Network (SAN)]
[Image 2: advantages of a Storage Area Network (SAN)]
The first image defines what iSCSI is and what exactly its role is in Storage Area Networks. Convergence is happening over IP everywhere (one reason why IP is exciting!), and even the Storage Area Network is expected to converge over IP networks through iSCSI, especially with the anticipation of lossless Converged Enhanced Ethernet. The other popular convergence option is FCoE (Fibre Channel over Ethernet).
The second image lists the advantages of Storage Area Networks in general. If you have been hesitating to implement or upgrade to a Storage Area Network (SAN) because of cost, complexity, or any other reason, an iSCSI SAN now offers you a lot of advantages in addition to the ones mentioned above. Let us look at some of them:
  • With iSCSI, a separate network for SAN is not required as it uses the existing IP Networks and components (NIC, Switches, Cables etc) to create the Storage Area Network.
  • The cost of creating an iSCSI SAN is much lower than the cost of creating an FC (Fibre Channel) SAN.
  • iSCSI based SAN can co-exist with the existing FC based SAN. Customers have the option of retaining their existing investments on FC SAN’s and still expand by adding additional storage capacity using iSCSI SAN.
  • iSCSI SAN does not have any distance limitation. You can have a Data Center anywhere in the world and still back up/ restore the data remotely (over WAN/Internet) to your Local Area Network/ vice versa.
  • Since iSCSI SAN can be located anywhere, they are very useful for disaster recovery objectives.
  • iSCSI SAN can use either a specialized HBA (Host Bus Adapter) to connect the servers to the SAN (or) just use the standard NIC Cards/ Ethernet ports for the same. This enables server I/O consolidation and reduces complexity/ cost.
  • Gigabit Server Adapters (NIC Cards) are normally available in the market, and iSCSI can use them to connect to the network at Gigabit speeds today. They are available in Single Port/ Dual Port/ Quad Port etc, and connect to the server using PCI Express slots. Very soon, even 10 Gigabit Ethernet NIC Cards are expected to become popular, which offers much more capacity/ speed for SAN performance.
  • Since iSCSI uses normal IP-based network components, it's easy to learn, implement, and maintain. Contrast this with Fibre Channel (FC) based SANs, which require a high level of expertise to create and maintain.
  • iSCSI is very suitable for implementing a SAN in virtual server environments, as it supports software initiators that make such integration easier. Also, since iSCSI supports whatever bandwidths IP networks support, it can run over 10 GE, which might be required for virtual server environments.
  • iSCSI allows direct backups to tape or disk, even from certain virtual servers.
Limitations / Disadvantages of iSCSI SAN:
  • The IP network is currently a best-effort network; it may drop packets or deliver them out of order during congestion. These attributes are not favorable for storage area networks, which need to be highly reliable. So, until lossless Converged Enhanced Ethernet technologies are standardized and implemented, an iSCSI SAN may not be as reliable as an FC SAN.
  • Server CPUs may be burdened with iSCSI and TCP/IP stack processing if the HBA/ NIC cannot offload those functions. This can leave fewer processor cycles for application processing, which is what servers are primarily supposed to do.
  • Since iSCSI is generally not run on a separate network (but is mixed with general IP traffic), there may be network congestion or bandwidth constraints during peak hours, especially if the network is not designed to support the bandwidth required for both kinds of traffic.
  • iSCSI normally operates in clear text, and hence there is a chance that it might be exposed to attackers, who are more familiar with widely used IP networks.
Salient points/ Good practices for iSCSI SAN that might be useful:
  • CHAP (Challenge Handshake Authentication Protocol) can be provisioned to authenticate iSCSI messages for security. Encryption, even if possible, may not currently be feasible.
  • Some specialized HBA’s/ NIC Cards can offload processor hungry processes like iSCSI and TCP/ IP Stack processing. This can make the server processor work more efficiently.
  • Major operating systems (including Windows and Linux) provide built-in software initiator drivers for iSCSI, which makes their integration with IP network components easy.
  • It's better to have a dedicated port for the iSCSI SAN connection from a server (instead of sharing one network port). Better still, redundant ports can be provided for High Availability/ Link Aggregation.
  • It's good practice to have iSCSI packets flow on a separate VLAN so that storage traffic is logically separated from general network traffic. Better still, iSCSI can be operated on its own physical network segment (with dedicated switches, cables, etc).
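The CHAP authentication mentioned above boils down to hashing an identifier, the shared secret, and a random challenge, so the secret itself never crosses the wire. A minimal Python sketch of the RFC 1994 response computation; the secret value is a placeholder:

```python
import hashlib
import os

def chap_response(identifier: int, secret: bytes, challenge: bytes) -> bytes:
    """CHAP (RFC 1994) response: MD5 over identifier + shared secret +
    challenge. Only the digest is sent, never the secret."""
    return hashlib.md5(bytes([identifier]) + secret + challenge).digest()

# Target issues a random challenge; initiator proves knowledge of the secret.
secret = b"shared-iscsi-secret"   # placeholder value
challenge = os.urandom(16)
resp = chap_response(0x01, secret, challenge)

# The target computes the same digest locally and compares:
authenticated = (resp == chap_response(0x01, secret, challenge))
```

Because each challenge is random, a captured response cannot be replayed later, though CHAP does nothing to protect the data traffic itself.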

Friday, May 20, 2011

What is Server Adapter Teaming?

It is the process of combining multiple adapter ports in the same server to function as a single virtual adapter (with a single IP address, in most cases) in order to achieve a higher (combined) connectivity throughput. For example, it's possible to team two adapters with a single 1 Gbps port each to make them function like a single adapter with a speed of up to 2 Gbps. The individual links are load balanced, so that traffic is distributed evenly between them and the maximum combined throughput (as far as possible) can be obtained. Server adapter teaming is mostly a software-enabled function. With server adapter teaming, one can also achieve High Availability (fault tolerance) for the server's network connectivity.

What are the benefits of Server Adapter Teaming?

  • Higher throughput – Combined bandwidth of two or more ports/ links.
  • Easier upgrades – If a 1 Gbps NIC's performance is not sufficient for a particular server, another NIC of similar capacity can be teamed with it to achieve higher throughput. This might be cheaper and more efficient than discarding the 1 Gbps adapter and buying a 10 Gbps adapter.
  • Load balancing – The traffic in each link that goes out of the teamed ports is load balanced with other links to get a better performance.
  • High Availability – Depending on the configuration, server adapter teaming gives fault tolerance (and hence high availability to the server) in the event of: Network Switch failures, Switch Mis-Configurations (resulting in hanging/ non availability of the switch), Network cable (connecting the server and the switch) failure/ accidental disconnection, PCI Slot failures & Server Adapter/ Port failure.
  • Virtual Servers – If virtualization is used to create many server images within a single server, adapter teaming gives the higher throughput required in such situations.
  • It's possible (but not recommended) to team adapters from multiple vendors or of different capacities (100 Mbps/ 1000 Mbps).
  • Multiple ports built in to the same adapter/ motherboard of a server can be teamed together.
  • If both adapters and network switches support IEEE 802.3ad Link Aggregation standard, both incoming and outgoing traffic (from the server adapter) can be load balanced. Otherwise, only the outgoing traffic is load balanced.
  • Up to 8 Ports can be teamed together with certain server adapters.
[Image: server adapter teaming and link aggregation configurations]
Some configurations for server adapter teaming are shown above.
When you consider the scenario in ‘C’, a single adapter with four ports (or) multiple adapters (2×2 ports; 4×1 port) can be teamed together and connected to the same switch. This achieves combined throughput of the four ports and high availability (against the failure of any single route) but does not account for switch failure. If the switch fails, all the links are down and cannot connect to the network.
Now consider the scenario in ‘A’. Among the four adapter ports available, two of them connect to one switch and two more connect to another switch. This way, even if one switch fails, the traffic continues to flow through the other.
In the scenario shown in ‘B’, each set of teamed ports connects to an individual switch and carries the traffic meant for a different subnet. This provides better segmentation of the network and dedicated links for each subnet. The multiple adapter ports connecting to each switch (and each subnet) are teamed together for higher performance.
In most cases, multiple adapters/adapter ports advertise a single IP address (and single MAC address) when they are teamed together – the team behaves like a single virtual adapter with a higher (combined) capacity. The scenarios shown above are just examples; adapter teaming configurations are implemented differently by different vendors.
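On a Linux server, the same teaming idea is exposed by the kernel bonding driver. The sketch below assumes a modern distribution with iproute2, two ports named eth0/eth1, and a switch configured for IEEE 802.3ad link aggregation – the interface names and address are illustrative, not taken from the article:

```shell
# Create a bond interface in 802.3ad (LACP) mode; miimon sets the
# link-monitoring interval in milliseconds.
ip link add bond0 type bond mode 802.3ad miimon 100

# Enslave two physical ports to the bond (names are examples).
ip link set eth0 down
ip link set eth1 down
ip link set eth0 master bond0
ip link set eth1 master bond0

# The team advertises a single IP (and MAC) address, as noted above.
ip addr add 192.0.2.10/24 dev bond0
ip link set bond0 up
```

Since 802.3ad mode is used here, both incoming and outgoing traffic can be load balanced, provided the switch side is configured for link aggregation on the connected ports.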

Thursday, May 5, 2011

Free DNS service adds IPv6 support

OpenDNS, one of the Internet's most popular free DNS services, is now offering production-grade support for IPv6, the successor to IPv4, the Internet's main communications protocol.
OpenDNS CEO David Ulevitch says he is launching the IPv6 service now to help website operators and networking firms prepare for World IPv6 Day, a 24-hour test of IPv6 that is scheduled for June 8. Sponsored by the Internet Society, World IPv6 Day has attracted more than 160 participants, including some of the Internet's leading content providers, such as Google, Yahoo and Facebook.
"World IPv6 Day ... is all about getting organizations to make their resources available over IPv6," Ulevitch says. "It's a flag day. It's a point that you can use to convince your boss that IPv6 is worthwhile. ... It's not really for the end users; it's really for the network administrators, the IT guys, to figure out that it's not that hard to do IPv6."
OpenDNS says it will participate in World IPv6 Day by having all of its Web sites support IPv6 by default on June 8.
OpenDNS said it was the first to offer a free DNS recursive service that supports IPv6. Recursive DNS services allow Internet users to find websites by typing in their domain names and pulling up the corresponding IP numbers. In contrast, authoritative DNS services allow website operators to publish their domain names and corresponding IP addresses to Internet users.
"We are not aware of anybody else doing" a free recursive DNS service that supports IPv6, Ulevitch says. "There's no financial angle; we just want to encourage people to use an IPv6 DNS service."
Until now, network engineers experimenting with IPv6 had to encapsulate their traffic to traverse IPv4-based DNS servers.
"People want to experiment with IPv6, and we have DNS set up to support them," Ulevitch says. "We are trying to help push traffic and resources over to IPv6. If you actually want to reach IPv6-only resources, you need to use DNS resources that you can talk to over IPv6."
The IPv6 addresses for the OpenDNS IPv6 DNS Sandbox are: 2620:0:ccc::2 and 2620:0:ccd::2.
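To try the sandbox, a client simply points its resolver at those addresses. The fragment below assumes a Linux host with working IPv6 connectivity; the queried domain name is just a placeholder:

```shell
# /etc/resolv.conf – use the OpenDNS IPv6 Sandbox resolvers
#   nameserver 2620:0:ccc::2
#   nameserver 2620:0:ccd::2

# Or query one of them directly:
dig @2620:0:ccc::2 www.opendns.com AAAA +short
```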
Internet companies like OpenDNS are promoting IPv6 because the Internet is running out of IPv4 addresses.
IPv4 uses 32-bit addresses and can support 4.3 billion devices connected directly to the Internet. Most IPv4 addresses have been handed out. The free pool of unassigned IPv4 addresses was depleted in February, and the Asia Pacific regional Internet registry said in April that it has doled out all but its last 16.7 million IPv4 addresses, which are being held in reserve for startup network operators.

IPv6, on the other hand, uses 128-bit addresses and supports a virtually unlimited number of devices -- 2 to the 128th power. But despite its promise of an endless supply of address space, IPv6 represents only a tiny fraction -- less than 0.03% -- of Internet traffic.
Ulevitch said the new OpenDNS IPv6 service would allow Internet users to access IPv6-only websites, which he admits are a "very, very small" portion of Internet resources.
OpenDNS says it has more than 20 million users globally, representing 1% of all Internet users. The company's free service is popular with U.S. public school systems, while its paid enterprise version has attracted corporations of all sizes.

Tuesday, May 3, 2011

India Enters Supercomputer Race, Builds 220 TeraFLOPS Machine

The 220 TeraFLOPS supercomputer will be used by space scientists for solving complex aerospace problems.
BANGALORE, INDIA: The Indian Space Research Organisation (ISRO) announced on Monday that it has built what is India's fastest supercomputer in terms of theoretical peak performance: 220 TeraFLOPS (220 trillion floating point operations per second).

The supercomputing facility, named the Satish Dhawan Supercomputing Facility, is located at the Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram. The new Graphics Processing Unit (GPU) based 220 TeraFLOPS supercomputer, named "SAGA-220" (Supercomputer for Aerospace with GPU Architecture – 220 TeraFLOPS), is being used by space scientists for solving complex aerospace problems. SAGA-220 was inaugurated today at VSSC by Dr K Radhakrishnan, Chairman, ISRO.
"SAGA-220" was fully designed and built by the Vikram Sarabhai Space Centre using commercially available hardware, open source software components and in-house developments. The system uses 400 NVIDIA Tesla 2070 GPUs and 400 Intel Quad Core Xeon CPUs supplied by Wipro, with a high-speed interconnect, and cost Rs 14 crore to build. With each GPU providing 500 GigaFLOPS and each CPU 50 GigaFLOPS, the theoretical peak performance of the system amounts to 220 TeraFLOPS.
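The quoted peak figure follows directly from those per-device numbers:

```shell
# 400 GPUs at 500 GigaFLOPS each, plus 400 CPUs at 50 GigaFLOPS each
echo $((400 * 500 + 400 * 50))
# prints 220000 GigaFLOPS, i.e. 220 TeraFLOPS
```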
The GPU-based system offers a significant advantage over a conventional CPU-based system in terms of cost, power and space requirements, said ISRO. It added that the system is environmentally green, consuming only 150 kW of power, and can be easily scaled to many PetaFLOPS (1 PetaFLOPS = 1000 TeraFLOPS).

Monday, May 2, 2011

100 Gig Ethernet. Where It's At. Where It's Going

The IEEE 802.3ba standard for 100 Gigabit Ethernet (100 GbE) was approved in June of 2010 and has been gaining adoption ever since. Multiple vendors including Juniper, Cisco, Brocade and Alcatel-Lucent have all announced 100 GbE solutions.
Major carriers -- including Verizon and AT&T -- are adopting 100 GbE as vendors ramp up their product offerings.
"An exact number is not immediately available, but Juniper has sold several dozen 100 GbE blades for the T1600, including some to Verizon, who recently announced plans for 100 GbE commercial service on select links in the U.S. and in Europe," Luc Ceuppens, VP of product marketing for Juniper Networks told InternetNews.com.
As to why carriers are choosing to deploy 100 GbE, it's often a question of the cost of managing link-aggregation complexity. Many carriers today have link-aggregation strategies for their 10 Gigabit Ethernet links.
"Most customers will need to make a decision of continuing to go down the path of 10 gig link aggregation to address bandwidth needs, versus jumping over to 100 gig," Ken Cheng, vice president, Service Provider Products at Brocade told InternetNews.com. "There are pros and cons to both approaches."
Cheng noted that fiber exhaustion is one reason why 100 GbE has an advantage: instead of putting a 10 Gig stream of traffic onto a fiber, a provider can put 10 times that amount on the same fiber, which makes it more cost effective.
"So even though customers tend to think of 100 GbE as being expensive, when you take into consideration the access to fiber, 100 GbE is quite reasonable and economical," Cheng said.
Cheng added that service providers are seeing larger network flows that live longer. He noted that networks used to be dominated by small packets, but with video and large file transfers there are larger flows.
"This is the type of traffic that is not easily served by a network based on 10x10 gig connectivity," Cheng said. "With larger flows, it's harder to fit into a LAG (Link Aggregation Group)."
All that said, Cheng noted that on a capital expenditure basis, the cost of 100 GbE is currently sometimes higher than a 10x10 gig deployment.
"As this is the first generation of 100 GbE, the equipment costs can be higher," Cheng said. "Over time, with innovation and economies of scale, the costs will come down."

100 GbE Standards

Cisco is also pushing its 100 GbE capabilities, which are a key part of its CRS-3 core routing platform. Cisco, however, doesn't see its solution as being the same as what rival Juniper is offering.
"Other vendors such as Juniper might be touting 100 GbE but if you really look at what they're doing it's not 100 GbE in its true form," Stephen Liu, senior manager of service provider marketing for routing and switching solutions at Cisco told InternetNews.com.
Liu said that the Juniper 100 GbE implementation is a 2x50 Gbps approach.
"There are real downsides to 2x50," Liu said. "It may sound the same but it's not necessarily the same."
Liu also alleged that the Juniper 100 GbE implementation is not interoperable with 100 GbE solutions from other vendors.
Juniper disagrees.
"Juniper’s 100 GbE is fully standards compliant," Luc Ceuppens, VP of product marketing for Juniper Networks told InternetNews.com. "We have also done full interoperability testing with solutions from all of the leading vendors."
Ceuppens also addressed the 2x50 GbE deployment question.
"Juniper’s 100 GbE for the T1600 is logically 2x50 GbE, but it is not an issue for any of our customers," Ceuppens said.

Tuesday, April 26, 2011

Why Wireless Interference is an important consideration in Wi-Fi networks

Unlike a wired network, where adding more network switches gives better performance, a wireless network cannot be optimized for performance simply by adding more access points or deploying them more densely – mainly due to wireless interference. In this article, we’ll try to understand frequency bands, interference in general, interference from 802.11 Wi-Fi devices, interference from non-Wi-Fi devices, and how to identify and mitigate wireless interference.

Understanding Wireless Frequency Bands:

You might be familiar with the concept of frequency tuning in radio. When you tune your receiver to a certain frequency, you hear the programs of a particular channel. With an analog rotary tuner, you might have noticed that as you rotate the dial, first a faint sound appears, then you get a strong signal, and then the signal weakens. So signals are received (with a varying range of amplitudes) over a range of frequencies, and when you consider many channels, the range of frequencies used becomes wider.
Similarly, Wi-Fi networks operate mainly in two major frequency bands (ranges) – 2.4 GHz and 5 GHz. Both are unlicensed ISM (Industrial, Scientific and Medical) band frequencies, which means any device or technology can use those bands for communications.
2.4 GHz and 5 GHz are frequency bands (ranges of frequencies). The actual communications happen in sub-frequencies called channels within each band. For example, in the 2.4 GHz band, the channel center frequencies are: channel 1 – 2.412 GHz; channel 2 – 2.417 GHz; … channel 13 – 2.472 GHz. A wireless radio (on a wireless access point) and a client radio (on a laptop) operate in one of these channels to transmit information between them.
Every channel (sub-frequency) overlaps with its adjacent channels. Channel 6, for example, might overlap strongly with channels 5 and 4, but only weakly with channels 3 and 2. In the 2.4 GHz band, channels 1, 6 and 11 are non-overlapping. That brings us to the next topic – interference.
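The channel layout above follows a simple rule: in the 2.4 GHz band, channel n is centered at 2407 + 5n MHz, and each channel is roughly 22 MHz wide. Channels therefore need to be at least five numbers apart to avoid overlap, which is exactly why 1, 6 and 11 form a non-overlapping set:

```shell
# Center frequency (MHz) of each 2.4 GHz channel, 1 through 13
for ch in $(seq 1 13); do
  echo "Channel $ch: $((2407 + 5 * ch)) MHz"
done
# Channel 1 -> 2412 MHz, Channel 6 -> 2437 MHz, Channel 11 -> 2462 MHz
```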

Wireless Interference:

Consider three operational access points situated, say, one meter from each other. If they operate in channels 1, 2 and 3 (respectively), or all in channel 1, there would be a lot of interference affecting all the clients connecting to these three access points. That’s because access points and clients generally receive all transmissions and reject those that are not in their channel of operation; if different access points operate in the same or adjacent channels, they cannot tell whether a received transmission was meant for them.
But if the three access points are operating in channels 1, 6 & 11 (respectively), even if they are placed very close to each other, there would not be much interference because, the sub-frequencies used by each channel are far apart. In other words, these three channels are non-overlapping channels.
Interference might prevent you from connecting to a wireless access point/network, disconnect you from an existing connection (requiring you to re-connect), or slow down/choke the wireless connectivity. It causes noticeable problems with real-time applications like voice and video transmitted over the wireless network. Interference is both a performance issue and a security concern (rogue access points, wireless DoS attacks, etc.).
There are two types of wireless interference – Interference from Wi-Fi (802.11) Sources & Interference from Non-Wi-Fi Sources.

Interference from Wi-Fi (802.11) Sources:

The Wi-Fi devices that interfere with a wireless network are: access points that are in range of each other and operating in overlapping channels; neighboring access points that might be operating in overlapping channels; and wireless jammers that intentionally operate in overlapping channels.
So, when two access points operate in the same or adjacent channels and are in range of each other, there would be interference. With 802.11 Wi-Fi based networks and devices, people might still be able to access and work on the wireless network even with considerable interference, but at reduced throughput levels. 802.11 networks are resilient enough to retransmit lost packets, but that reduces the total available bandwidth.
Similarly, access points across the street or in a neighboring office might be operating in the same channel, causing some interference. There are also wireless jammers, which cause interference with the intention of disrupting wireless services.
Since the latest 802.11n networks and devices use multiple antennas, they might be in a slightly better position to reduce interference, by comparing the signals received on multiple antennas and averaging out the interfering signals.

Interference from Non-Wi-Fi Sources:

Since 2.4 GHz and 5 GHz are unlicensed frequency bands, other technologies like Bluetooth and ZigBee, and many devices like microwave ovens, wireless cameras, cordless phones, wireless headsets and wireless device controllers, operate in these bands as well, causing interference to Wi-Fi network communications.
Microwave ovens operate across multiple frequencies (wideband) and interfere consistently with Wi-Fi devices. Wireless cameras operate in a narrow band and hence interfere on particular Wi-Fi frequencies. Bluetooth headsets keep hopping across the frequency band, still causing temporary interference.
Even if a complete site survey is done prior to implementing the Wi-Fi network, it is still difficult to find all the non-Wi-Fi sources of interference, because newer and smaller wireless devices keep appearing on the market and can be brought in by employees at any time, causing (unintentional) disturbance to the corporate Wi-Fi network.

Detecting and Mitigating Wireless Interference:

5 GHz is a relatively clean spectrum without much interference from non-Wi-Fi sources, but most commercially available Wi-Fi devices operate in the more popular 2.4 GHz band. It might be better to implement Wi-Fi networks in the 5 GHz band (for this, both the client adapters in the laptops and the access points should support 5 GHz operation), especially for high-performance 802.11n networks.
Some vendors fit sensors on access points that detect interference in their channel of operation (if any) and switch to other channels, but this may not be a solution for interference from non-Wi-Fi sources. It's possible to reduce the chances of interference by reducing the transmission power levels of access points. Using multiple or multi-sector antennas might also improve the SNR.
Wi-Fi sources: Interference from other Wi-Fi sources is relatively easy to detect, and in some cases even mitigate. The basic principle is to avoid any neighboring access points operating in the same (or adjacent) channels. As far as possible, neighboring access points should operate in non-overlapping channels (like 1, 6, 11).
It's quite difficult to monitor each access point and change its channel of operation manually (though it's possible). Even if channels are set manually, an access point that reboots (due to power loss, etc.) may choose an arbitrary channel, which may not be the manually set one – so the process of assigning channels manually needs to be repeated.
To automate this process, a wireless controller that provides centralized management can be used to continuously gauge the channel of operation of all neighboring access points and adjust their channel settings dynamically. Most wireless controllers can manage only their own make of access points, but there are wireless management software packages available to manage multi-vendor access points and controllers.
Non-Wi-Fi sources: Normal wireless management software/controllers may not detect interference from non-Wi-Fi sources (though some do); there are specialized spectrum analyzers that can be employed for this purpose. Unlike with Wi-Fi sources of interference, simply changing the channel of operation of the access points may not be a solution for non-Wi-Fi interference, so the best way to tackle such sources might be to physically remove them, or shield them so that their emissions are restricted to a certain area.
There are also open-source wireless scanning tools that can help detect Wi-Fi interference, such as NetStumbler, Kismet and inSSIDer (note that these detect Wi-Fi signals rather than the raw RF spectrum). Commercial spectrum analyzers are available for full spectrum analysis.
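On a Linux client, a rough picture of the Wi-Fi side of the problem (neighboring access points and the channels they occupy) can be surveyed with standard wireless tools; wlan0 is an assumed interface name here, and scanning usually requires root:

```shell
# List nearby access points with their SSID, channel and signal level
iw dev wlan0 scan | grep -E "SSID|signal|channel"
```

This won't reveal non-Wi-Fi emitters like microwave ovens; for those, a spectrum analyzer is needed.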

Monday, April 18, 2011

Tips for Planning a Wireless Network

Know Your Building’s Bones

Do you know what your building is made of?  Before you install your wireless network, you should.  Dense building materials like filled cinder blocks, brick, rock walls, and adobe or stucco construction can significantly reduce the strength of your wireless signal and increase the number of access points needed to ensure a fast, reliable connection. Anything that holds water, like pipes, bathrooms and elevator shafts, also tends to limit the range of wireless signals.

Count Heads and Balance the Load

Typically, small and medium-sized businesses (SMBs) require fewer than 24 access points, but businesses must consider bandwidth in the overall plan.  Without adequate bandwidth to handle traffic, you may not realize expected productivity gains. IT staff should also be able to manage multiple access points and balance the load accordingly; centrally-managed wireless controller appliances can do this dynamically to boost performance and save time.

Power Up

After deciding the number of WLAN access points you need, determine the power requirements necessary to support them – typically 15 watts or less per access point.  While power requirements differ for each business, power injectors are a great option for powering the access points.  The injectors can be placed anywhere along the line within 100 meters and provide greater flexibility by eliminating the need for an external AC adapter power supply at each access point.

Safe and Secure Networking

Who among us hasn’t searched for an unsecured wireless network to jump on when away from home or work?  Keeping the wireless network safe is a top priority, so avoid obsolete wireless security protocols like WEP (Wired Equivalent Privacy). Better alternatives are WPA (Wi-Fi Protected Access) and WPA2, which help safeguard against hackers.  For increased protection, IT departments should configure access points to use the strongest available AES encryption.

Common Wireless Networking Missteps

Are you ready to jump online?  Avoid some of the more common wireless network pitfalls:

This Access Point Worked at Home

Depending on the size of the business, wireless devices designed for home use may not be a fit for the business environment.  Although home access points are less expensive, they are not designed to achieve the results necessary beyond a small home office.  Businesses with multiple access points require devices designed to achieve a seamless connection.  Home access points are designed for single deployments and will interfere with other access points in multiple access point scenarios.

Just Add Access Points

The easiest locations for access points are not necessarily the best locations.  While a comprehensive wireless site survey is ideal, it may be cost-prohibitive for most small businesses, ranging from $2,000 to $3,000, so consider these approaches to the access point challenge:
  • Install multiple access points and err on the side of over-coverage.  The initial investment in multiple access points will save money in the long run, compared to commissioning a site survey
  • Perform a rudimentary site survey independently by setting up one access point, charting its coverage using one laptop and using its coverage range as a guideline for access points throughout the facility
  • Consider a wireless LAN controller.  The controller recognizes all of the connected access points and sets the appropriate channel and power setting.  Some controllers even let you load a diagram of the floor plan, providing a heat map that shows the signal strength of each access point

Do What You’ve Always Done

It’s easy to become complacent with wireless routines.  Network equipment is constantly improving, with networked devices becoming smarter and more complex -- just like the technologies that hackers use to attack networks.  Don’t put your small business at risk -- understand exactly where the wireless marketplace stands and where the technology is headed to avoid exposing the business to security risks that waste time and money.

Don’t Plan to Grow

When implementing a WLAN, think about current and future networking needs, and be prepared to grow with the technology. One benefit of a wireless infrastructure is that it is fairly simple to reconfigure an office space during times of growth or change.  The equipment and the configuration should be driven by business goals – be mindful of what potential needs will be six months to a year into the future.
A wireless network can be a great asset to your business, but be careful to consider the objectives, limitations and the potential future benefits.  Also, be aware of the possible pitfalls to avoid disappointment and lost productivity time.  When done right, a wireless implementation can translate into a successful business plan.

Friday, April 15, 2011

Protecting website using basic authentication

Apache uses its authentication modules (mod_auth in older versions, mod_auth_basic in Apache 2.2 and later) to protect the whole or part of a site.
Here we will see how to provide access to your website to only authenticated users. I will demonstrate and explain the use of basic authentication.
In Apache’s main configuration file located at /etc/httpd/conf/httpd.conf or inside <VirtualHost></VirtualHost> directives, put in the following:
<Directory />
AuthName "Authentication Needed"
AuthType Basic
AuthUserFile /etc/httpd/conf/security_users
require valid-user
</Directory>
Let me explain the above directives one by one:
<Directory /> means that the directives apply to the directory / and everything below it. To protect only your site, use the DocumentRoot path instead, e.g. <Directory /var/www/html>.
AuthName creates a label that is displayed by web browsers to users.
AuthUserFile sets the file that Apache will consult to check user names and passwords for authenticating users.
AuthType specifies what type of authentication scheme to use (Basic, in this case).
The require directive states that only valid users (those listed in the AuthUserFile) are allowed access to the site.
Now we have to create the file that will hold the users and their passwords, with the following command:
htpasswd -c /etc/httpd/conf/security_users testuser
New password:
Re-type new password:
Adding password for user testuser
The -c flag creates the file (/etc/httpd/conf/security_users), and testuser is the user to be created. The -c flag is not needed when adding further users to the same file.
Now restart Apache:
/etc/init.d/httpd restart   (or /etc/init.d/apache2 restart on Debian/Ubuntu)
Access the site, e.g. http://localhost or http://IP_of_apache, and the browser should prompt you for a username and password.
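The protection can also be verified from the command line with curl; the password below is whatever you entered at the htpasswd prompt (secret is a placeholder):

```shell
# Without credentials, Apache should answer 401 (authorization required)
curl -s -o /dev/null -w "%{http_code}\n" http://localhost/

# With valid credentials, the page should come back with a 200
curl -s -o /dev/null -w "%{http_code}\n" -u testuser:secret http://localhost/
```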

Monday, April 11, 2011

What is Server Virtualization?

Servers rarely run at full capacity. Server virtualization enables multiple applications, running on multiple operating systems, to run on the same server, utilizing that unused additional capacity. Let's find out more about server virtualization in this article.

What is Server Virtualization?

In the server virtualization architecture diagram above, the three servers on the left-hand side represent stand-alone servers: there is an operating system and an application on each of them. That’s the conventional set-up. Well, almost. There are some drawbacks with this set-up – some operating systems/applications do not use all the resources of an entire server, so the additional capacity goes under-used. Also, unless more physical servers are introduced, there is no backup for these stand-alone servers/applications should they fail.
The three servers on the right-hand side represent virtualized servers. In each server (for the sake of simplicity), let's consider that multiple applications are running on multiple operating systems. Each OS/application is isolated from the others. Further, server resources like processor capacity, RAM and hard-disk capacity are reserved (or allocated) separately for each OS/application.
The OS/Application pairs run over a software module called Hypervisor. The Hypervisor resides between the bare metal hardware and the virtual systems. It basically de-couples the Operating System/ Applications from the underlying physical hardware and provides a common management / operating platform for multiple operating systems/ applications.
So, in a nutshell, server virtualization can be defined as multiple instances of different operating systems/applications running on one physical server. This approach has a lot of advantages, as discussed below.
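As one concrete (and hypothetical) illustration, with a libvirt-based hypervisor stack the ideas above map onto commands like these; the guest name, resource sizes and destination hostname are all placeholders:

```shell
# List all virtual systems (running and stopped) on this host
virsh list --all

# Reserve resources per virtual system (example values: 2 vCPUs, 4 GiB)
virsh setvcpus myguest 2 --config
virsh setmaxmem myguest 4194304 --config   # size in KiB

# Live-migrate a guest to another server, avoiding application downtime
virsh migrate --live myguest qemu+ssh://server2/system
```

Other hypervisor stacks expose the same operations through their own tools; the commands differ, but the allocation and migration concepts are the same.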

Advantages of Server Virtualization:

  • Since multiple operating systems and applications share the same server, its resources (RAM, processor, etc.) are utilized more fully than with a stand-alone OS/application – server virtualization improves the resource utilization ratio.
  • Server virtualization enables server consolidation – resulting in fewer servers being used for the same OS/application(s).
  • If a server is down (due to either hardware or application failure) (or) due to maintenance activities, application downtime can be avoided by migrating the virtual systems (OS/application pairs) to other servers. This ensures high availability of the applications.
  • The applications can also be transferred from a primary data center to a secondary data center (by certain virtualization software, if up-to-date copies are kept at the secondary data center), enabling an effective disaster recovery strategy.
  • Server Virtualization avoids over-purchasing/ over-allocation of servers for certain applications.
  • On-demand resource allocation is possible along with the ability to scale up / scale down resources.
  • The time required for getting an application up and running is greatly reduced, especially for smaller applications that can be provisioned in one of the existing virtual servers.
  • Server Virtualization is an Operating System neutral technology – multiple operating systems can reside alongside in the same server.
  • Even though various operating systems/applications reside in the same server, they are logically isolated from each other, thereby enhancing security.
  • The operating systems/ applications (virtual systems) are hardware independent. They just need to communicate with the hypervisor and the hypervisor communicates with the hardware components.
  • Server Virtualization is useful for testing applications / using them in the production environment temporarily as there is no need to buy additional servers for doing that.

Limitations of Server Virtualization:

  • The resource allocation for each virtual system needs to be planned carefully. If too few resources are allocated, application performance might suffer; if too many are allocated, it results in under-utilization. The servers that are to be virtualized should have sufficient resources in the first place.
  • 32-bit processors/ operating systems/ applications can make use of only limited memory resources in the server (4 GB) and hence 64-bit computing is preferred for server virtualization. But not all the applications have been migrated to 64-bit computing yet.
  • Only processors that support virtualization can be used to virtualize servers. And for migrating virtual systems from one server to another, some vendors require a similar model/make of processor.
  • The hypervisor itself utilizes some processing power. This is in addition to the processing power required for the applications.
  • The cost of virtualization software, management applications, management expertise required etc, might limit the usage of server virtualization in smaller environments with very few servers.
  • Sometimes, a separate SAN/NAS network might be required for storage as there may not be sufficient storage capacity inside the server for multiple OS/ Applications.
  • The software switch running inside the hypervisor to connect the various virtual systems may not integrate with existing network settings like VLAN/QoS configurations. At the least, it cannot implement all the features of the specialized network switches that connect individual servers in a full-fledged way.

Monday, April 4, 2011

Squid Access Controls

Tag Name acl
Usage acl aclname acltype string1 ... | "file"
Description
This tag is used for defining an access list. When using "file", the file should contain one item per line. By default, regular expressions are case-sensitive; to make them case-insensitive, use the -i option.

Acl Type: src
Description
This matches the client's IP address.
Usage acl aclname src ip-address/netmask.
Example
1. This refers to the whole network 172.16.1.0/24 - acl aclname src 172.16.1.0/24
2. This refers to a specific single IP address - acl aclname src 172.16.1.25/32
3. This refers to the range of IP addresses from 172.16.1.25 to 172.16.1.35 - acl aclname src 172.16.1.25-172.16.1.35/32

Note
When specifying the netmask, take care that the correct value is given.
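On its own, an acl only defines a name; nothing is allowed or denied until the acl is referenced from an http_access rule. A minimal squid.conf fragment tying the two together might look like this (the network address is just an example):

```
# Define the ACL...
acl localnet src 172.16.1.0/24

# ...then use it. http_access rules are evaluated top to bottom,
# and the first matching rule wins.
http_access allow localnet
http_access deny all
```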

Acl Type: dst
Description
This is the same as src, with the only difference being that it refers to the server (destination) IP address. Squid will first do a DNS lookup of the domain name in the request header to obtain the IP address; then this acl is interpreted.

Usage acl aclname dst ip-address/netmask.

Acl Type: srcdomain
Description
Since squid needs to do a reverse DNS lookup (from the client IP address to the client domain name) before this acl is interpreted, it adds some delay to the request.

Usage acl aclname srcdomain domain-name
Example
acl aclname srcdomain .kovaiteam.com

Note
The leading "." is significant: it matches kovaiteam.com and all of its subdomains.

Acl Type: dstdomain
Description
This is an effective method for controlling access to specific destination domains

Usage acl aclname dstdomain domain-name
Example
acl aclname dstdomain .kovaiteam.com
Hence this looks for *.kovaiteam.com in the URL
Note
The leading "." is significant: it matches kovaiteam.com and all of its subdomains.

Acl Type: srcdom_regex
Description
Squid needs to perform a reverse DNS lookup (from the client IP address to the client domain name) before this acl is interpreted, which adds some delay to the request.

Usage acl aclname srcdom_regex pattern
Example
acl aclname srcdom_regex kovai
Hence this looks for the word kovai in the client domain name
Note
Avoid this acl type where possible, as the reverse DNS lookup adds latency.

Acl Type: dstdom_regex
Description
This is as effective a method as dstdomain, but matches by regular expression

Usage acl aclname dstdom_regex pattern
Example
acl aclname dstdom_regex kovai
Hence this looks for the word kovai in the destination (server) domain name

Acl Type: time
Description
Time of day, and day of week

Usage acl aclname time [day-abbreviations] [h1:m1-h2:m2]
day-abbreviations:
S - Sunday
M - Monday
T - Tuesday
W - Wednesday
H - Thursday
F - Friday
A - Saturday
h1:m1 must be less than h2:m2
Example
acl ACLTIME time M 9:00-17:00
ACLTIME matches Mondays from 9:00 to 17:00.

Acl Type: url_regex
Description
url_regex searches the entire URL for the regular expression you specify. Note that these regular expressions are case-sensitive. To make them case-insensitive, use the -i option.

Usage acl aclname url_regex pattern
Example
acl ACLREG url_regex cooking
ACLREG matches URLs containing "cooking" but not "Cooking"
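To match regardless of case, add the -i option mentioned in the description (the acl name here is illustrative):

```
acl ACLREGI url_regex -i cooking
```

ACLREGI matches URLs containing "cooking", "Cooking", "COOKING", and so on.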

Acl Type: urlpath_regex
Description
urlpath_regex matches a regular expression against the URL path, that is, the URL without the protocol and hostname. Note that these regular expressions are case-sensitive

Usage acl aclname urlpath_regex pattern
Example
acl ACLPATHREG urlpath_regex cooking
ACLPATHREG matches only paths containing "cooking", not "Cooking", ignoring the protocol and hostname.
For example, if the URL is http://www.visolve.com/folder/subdir/cooking/first.html, the regex is matched only against /folder/subdir/cooking/first.html (everything after http://www.visolve.com).

Acl Type: port
Description
Access can be controlled by the destination (server) port number

Usage acl aclname port port-no
Example
This example allows http_access only to the destination 172.16.1.115:80 from the network 172.16.1.0/24

acl acceleratedhost dst 172.16.1.115/255.255.255.255
acl acceleratedport port 80
acl mynet src 172.16.1.0/255.255.255.0
http_access allow acceleratedhost acceleratedport mynet
http_access deny all

Acl Type: proto
Description
This specifies the transfer protocol

Usage acl aclname proto protocol
Example
acl aclname proto HTTP FTP
This refers to the protocols HTTP and FTP

Acl Type: method
Description
This specifies the HTTP request method

Usage acl aclname method method-type
Example
acl aclname method GET POST
This refers to the GET and POST methods only

Acl Type: browser
Description
Regular expression pattern matching on the request's user-agent header

Usage acl aclname browser pattern
Example
acl aclname browser MOZILLA
This refers to requests coming from browsers whose user-agent header contains the keyword "MOZILLA".

Acl Type: ident
Description
String matching on the user's name

Usage acl aclname ident username ...
Example
You can use ident to allow specific users access to your cache. This requires that an ident server process runs on the user's machine(s). In your squid.conf configuration file you would write something like this:

ident_lookup on
acl friends ident kim lisa frank joe
http_access allow friends
http_access deny all

Acl Type: ident_regex
Description
Regular expression pattern matching on the user's name as returned by ident. Use REQUIRED to accept any non-null ident response

Usage acl aclname ident_regex pattern
Example
You can use ident to allow specific users access to your cache. This requires that an ident server process run on the user's machine(s). In your squid.conf configuration file you would write something like this:

ident_lookup on
acl friends ident_regex joe
This looks for the pattern "joe" in username


Acl Type: src_as
Description
source (client) Autonomous System number



Acl Type: dst_as
Description
destination (server) Autonomous System number
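No usage line is given above for the AS types; the general form, with a purely illustrative AS number and acl name, is:

```
acl myAS dst_as 1241
http_access allow myAS
http_access deny all
```

src_as takes the same form but matches the client's Autonomous System number.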



Acl Type: proxy_auth
Description
User authentication via external processes. proxy_auth requires an EXTERNAL authentication program to check username/password combinations (see authenticate_program ).

Usage acl aclname proxy_auth username...
use REQUIRED instead of username to accept any valid username
Example
acl ACLAUTH proxy_auth usha venkatesh balu deepa

This acl is for authenticating users usha, venkatesh, balu and deepa by external programs.
Warning
proxy_auth can't be used in a transparent proxy, because it collides with any authentication done by origin servers. It may seem to work at first, but it doesn't. Also note that when a Proxy-Authentication header is sent but is not needed during ACL checking, the username is NOT logged in access.log.


Acl Type: proxy_auth_regex
Description
This is the same as proxy_auth, with one difference: it matches a regular expression pattern against the usernames defined in the authenticate_program

Usage acl aclname proxy_auth_regex [-i] pattern...
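No example is given above; a sketch with a hypothetical acl name and pattern might be:

```
acl ACLAUTHREG proxy_auth_regex -i ^usha
```

This matches any authenticated username beginning with "usha", case-insensitively.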

Acl Type: snmp_community
Description
SNMP community string matching

Example
acl aclname snmp_community public
snmp_access aclname
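The example above grants SNMP access to any client presenting the "public" community string. A tighter pattern, assuming the default localhost acl is defined as shown elsewhere in this document, would be:

```
acl snmppublic snmp_community public
snmp_access allow snmppublic localhost
snmp_access deny all
```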


Acl Type: maxconn
Description
A limit on the maximum number of connections from a single client IP address. It is an ACL that will be true if the user has more than maxconn connections open. It is used in http_access to allow/deny the request just like all the other acl types.

Example
acl someuser src 1.2.3.4
acl twoconn maxconn 2
http_access deny someuser twoconn
http_access allow !twoconn

Note
The maxconn acl requires the client_db feature, so if you have disabled it (client_db off), maxconn will not work.


Acl Type: req_mime_type
Usage acl aclname req_mime_type pattern
Description
Regular expression pattern matching on the request content-type header

Example
acl aclname req_mime_type text

This acl looks for the pattern "text" in the request Content-Type header

Acl Type: arp
Usage acl aclname arp ARP-ADDRESS
Description
Ethernet (MAC) address matching. This acl is supported on Linux, Solaris, and probably BSD variants.

To use ARP (MAC) access controls, you first need to compile in the optional code.
Do this with the --enable-arp-acl configure option:
% ./configure --enable-arp-acl ...
% make clean
% make

If everything compiles, then you can add some ARP ACL lines to your squid.conf
Default acl all src 0.0.0.0/0.0.0.0
acl manager proto cache_object
acl localhost src 127.0.0.1/255.255.255.255
acl SSL_ports port 443 563
acl Safe_ports port 80 21 443 563 70 210 1025-65535
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl CONNECT method CONNECT
Example
acl ACLARP arp 11:12:13:14:15:16
ACLARP refers to the Ethernet MAC address 11:12:13:14:15:16
Note
Squid can only determine the MAC address for clients that are on the same subnet. If the client is on a different subnet, then Squid cannot find out its MAC address.

Tag Name http_access
Usage http_access allow|deny [!]aclname ...
Description
Allowing or denying http access based on defined access lists

If none of the "access" lines cause a match, the default is the opposite of the last line in the list. If the last line was deny, then the default is allow. Conversely, if the last line is allow, the default will be deny. For these reasons, it is a good idea to have a "deny all" or "allow all" entry at the end of your access lists to avoid potential confusion
Default http_access allow manager localhost
http_access deny manager
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access deny all
If there are no "access" lines present, the default is to allow the request


Caution
The "deny all" line is very important. Remember that when no rule matches, the default is the opposite of the last rule; so if you specify a long list of http_access allow rules and forget the "deny all" after them, a request that matches none of your rules will be ALLOWED. Don't forget the "deny all" rule AFTER all the other rules.

And, finally, don't forget rules are read from top to bottom. The first rule matched will be used. Other rules won't be applied. 

Tag Name icp_access
Usage icp_access allow|deny [!]aclname ...
Description
Allowing or denying ICP queries based on defined access lists

Default icp_access deny all
Example
icp_access allow all - Allow ICP queries from everyone

Tag Name miss_access
Usage miss_access allow|deny [!]aclname...
Description
Used to force your neighbors to use you as a sibling instead of a parent. For example:

acl localclients src 172.16.0.0/16
miss_access allow localclients
miss_access deny !localclients
This means that only your local clients are allowed to fetch MISSES and all other clients can only fetch HITS.
Default By default, allow all clients who passed the http_access rules to fetch MISSES from us.
miss_access allow all


Tag Name cache_peer_access
Usage cache_peer_access cache-host allow|deny [!]aclname ...
Description
Similar to 'cache_peer_domain ' but provides more flexibility by using ACL elements.

The syntax is identical to 'http_access' and the other lists of ACL elements. See 'http_access ' for further reference.
Default none
Example
The following example could be used, if we want all requests from a specific IP address range to go to a specific cache server (for accounting purposes, for example). Here, all the requests from the 10.0.1.* range are passed to proxy.visolve.com, but all other requests are handled directly.

Using acls to select peers,
acl myNet src 10.0.0.0/255.255.255.0
acl cusNet src 10.0.1.0/255.255.255.0
acl all src 0.0.0.0/0.0.0.0
cache_peer proxy.visolve.com parent 3128 3130
cache_peer_access proxy.visolve.com allow cusNet
cache_peer_access proxy.visolve.com deny all

Tag Name proxy_auth_realm
Usage proxy_auth_realm string
Description
Specifies the realm name, which is to be reported to the client for proxy authentication (part of the text the user will see when prompted for the username and password).

Default proxy_auth_realm Squid proxy-caching web server
Example
proxy_auth_realm My Caching Server

Tag Name ident_lookup_access
Usage ident_lookup_access allow|deny aclname...
Description
A list of ACL elements, which, if matched, cause an ident (RFC 931) lookup to be performed for this request. For example, you might choose to always perform ident lookups for your main multi-user Unix boxes, but not for your Macs and PCs

Default
By default, ident lookups are not performed for any requests
ident_lookup_access deny all
Example
To enable ident lookups for specific client addresses, you can follow this example:

acl ident_aware_hosts src 192.168.1.0/255.255.255.0
ident_lookup_access allow ident_aware_hosts
ident_lookup_access deny all
Caution
This option may be disabled by using --disable-ident with the configure script.


Examples:
(1) To allow http_access for only one machine with the MAC address 00:08:c7:9f:34:41
To use MAC addresses in ACL rules, configure Squid with the --enable-arp-acl option.
acl all src 0.0.0.0/0.0.0.0
acl pl800_arp arp 00:08:c7:9f:34:41
http_access allow pl800_arp
http_access deny all
(2) To restrict access to work hours (9am - 5pm, Monday to Friday) from the network 192.168.2.0/24
acl ip_acl src 192.168.2.0/24
acl time_acl time M T W H F 9:00-17:00
http_access allow ip_acl time_acl
http_access deny all
(3) Can I use multiple time-based access control lists to give different users different timings?
ACL Definitions
acl abc src 172.161.163.85
acl xyz src 172.161.163.86
acl asd src 172.161.163.87
acl morning time 06:00-11:00
acl lunch time 14:00-14:30
acl evening time 16:25-23:59

Access Controls
http_access allow abc morning
http_access allow xyz morning lunch
http_access allow asd lunch

This is wrong. The description follows:
Here the access line "http_access allow xyz morning lunch" will not work, because ACLs on an http_access line are interpreted like this:

http_access ACTION statement1 AND statement2 AND statement3
........

So the ACL line "http_access allow xyz morning lunch" will never match: at any given time, morning AND lunch is always false, because morning and lunch can never be true at the same moment. Since the statements on one line are ANDed, TRUE AND FALSE is always FALSE.
The fix is to split the line in two; separate http_access lines are ORed:

http_access allow xyz morning
http_access allow xyz lunch

If a request comes from xyz during one of the allowed times, one of these rules matches TRUE and the other FALSE. TRUE OR FALSE is TRUE, so access is permitted.
Finally, the access controls look like this:
http_access allow abc morning
http_access allow xyz morning
http_access allow xyz lunch
http_access allow asd lunch
http_access deny all
(4) Rules are read from top to bottom. The first rule matched will be used. Other rules won't be applied.
Example:
http_access allow xyz morning
http_access deny xyz
http_access allow xyz lunch

If xyz tries to access something in the morning, access will be granted. But if he tries to access something at lunchtime, access will be denied by the "deny xyz" rule, which is matched before the "xyz lunch" rule.
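To grant xyz access at both times, put the deny rule after both allow rules:

```
http_access allow xyz morning
http_access allow xyz lunch
http_access deny xyz
```

Now whichever allow rule matches first grants access, and the deny applies only outside both time windows.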