Monday, February 28, 2011

10 Basic Concepts That Every Windows Network Admin Must Know

Here is my list of 10 core networking concepts that every Windows Network Admin (or anyone interviewing for a job as one) must know:

1.     DNS Lookup

The Domain Name System (DNS) is a cornerstone of every network infrastructure. DNS maps names to IP addresses and IP addresses back to names (forward and reverse lookups, respectively). Thus, when you go to a web page like www.google.com, without DNS that name would not be resolved to an IP address and you would not see the page. In short, if DNS is not working, "nothing is working" for the end users.
DNS server IP addresses are either manually configured or received via DHCP. If you run IPCONFIG /ALL in Windows, you will see your PC's DNS server IP addresses:
Ethernet adapter Local Area Connection:

        Connection-specific DNS Suffix  . : rahulonline.edu
        Description . . . . . . . . . . . : Intel(R) PRO/100 VE Network Connection
        Physical Address. . . . . . . . . : 00-19-D1-20-32-DB
        Dhcp Enabled. . . . . . . . . . . : Yes
        Autoconfiguration Enabled . . . . : Yes
        IP Address. . . . . . . . . . . . : 192.168.1.7
        Subnet Mask . . . . . . . . . . . : 255.255.255.0
        Default Gateway . . . . . . . . . : 192.168.1.254
        DHCP Server . . . . . . . . . . . : 192.168.2.1
        DNS Servers . . . . . . . . . . . : 192.168.2.1             //DNS
        Primary WINS Server . . . . . . . : 192.168.2.1
        Lease Obtained. . . . . . . . . . : Monday, February 28, 2011 9:19:20 AM
        Lease Expires . . . . . . . . . . : Tuesday, March 01, 2011 9:19:20 AM
So, you should know what DNS is, how important it is, and that correctly configured, working DNS servers are required for "almost anything" to work.
When you perform a ping, you can easily see that the domain name is resolved to an IP (shown in Figure 2).
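The forward/reverse distinction can be sketched with a toy lookup table standing in for the real DNS hierarchy (the name and address below are the example.com values that also appear in the Linux `host` examples later on this blog):

```shell
#!/bin/sh
# Toy sketch of the two lookup directions: a forward lookup maps a name to
# an IP, a reverse lookup maps an IP back to a name. A one-entry table
# stands in for the DNS tree here.
table="www.example.com 208.77.188.166"
forward() { echo "$table" | awk -v n="$1" '$1 == n { print $2 }'; }
reverse() { echo "$table" | awk -v i="$1" '$2 == i { print $1 }'; }
forward www.example.com    # -> 208.77.188.166
reverse 208.77.188.166     # -> www.example.com
```

The real equivalents are `nslookup www.example.com` (forward) and `nslookup 208.77.188.166` (reverse) against your configured DNS server.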

2.     Ethernet & ARP

Ethernet is the protocol for your local area network (LAN). You have Ethernet network interface cards (NIC) connected to Ethernet cables, running to Ethernet switches which connect everything together. Without a "link light" on the NIC and the switch, nothing is going to work.
MAC addresses (or Physical addresses) are unique strings that identify Ethernet devices. ARP (Address Resolution Protocol) is the protocol that maps IP addresses to Ethernet MAC addresses. When you open a web page and get a successful DNS lookup, you know the IP address. Your computer will then send an ARP request on the network to find out which computer (identified by its Ethernet MAC address, shown in Figure 1 as the Physical Address) has that IP address.
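A MAC address is six hexadecimal octets; Windows tools print it dash-separated, as in the Physical Address line of the ipconfig output above. A quick sketch of a format check (a hypothetical helper, not a real system tool):

```shell
#!/bin/sh
# Sketch: check whether a string looks like an Ethernet MAC address in the
# dash-separated form Windows prints (six two-digit hex octets).
is_mac() {
    echo "$1" | grep -Eq '^([0-9A-Fa-f]{2}-){5}[0-9A-Fa-f]{2}$'
}
is_mac '00-19-D1-20-32-DB' && echo valid      # the address from Figure 1
is_mac 'not-a-mac'         || echo invalid
```

To see the live IP-to-MAC mappings your machine has learned, run `arp -a` (the command exists on both Windows and Linux).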

3.     IP Addressing and Subnetting

Every computer on a network must have a unique Layer 3 address called an IP address. An IPv4 address is four numbers separated by three periods, like 192.168.1.7.
Most computers receive their IP address, subnet mask, default gateway, and DNS servers from a DHCP server. Of course, to receive that information, your computer must first have network connectivity (a link light on the NIC and switch) and must be configured for DHCP.
You can see my computer's IP address in Figure 1 where it says IPv4 Address 192.168.1.7. You can also see that I received it via DHCP where it says DHCP Enabled YES.
Larger blocks of IP addresses are broken down into smaller blocks, and this is called IP subnetting. I am not going to go into how to do it, and you do not need to know how to do it from memory either (unless you are sitting for a certification exam), because you can use a free IP subnet calculator downloaded from the Internet.
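That said, the one piece of subnet math worth remembering is cheap enough to do in your head or in a shell: a /N prefix leaves 32 - N host bits, so a subnet holds 2^(32-N) addresses, two of which (the network and broadcast addresses) are not usable by hosts. A sketch:

```shell
#!/bin/sh
# Usable hosts in a subnet: 2^(32 - prefix_length) minus the network and
# broadcast addresses.
hosts_in_subnet() {
    echo $(( (1 << (32 - $1)) - 2 ))
}
hosts_in_subnet 24    # /24, i.e. 255.255.255.0   -> 254 usable hosts
hosts_in_subnet 26    # /26, i.e. 255.255.255.192 -> 62 usable hosts
```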

4.     Default Gateway

The default gateway, shown in Figure 3 as 192.168.1.254, is where your computer sends traffic destined for a computer that is not on your local LAN. That default gateway is your local router. A default gateway address is not strictly required, but without one you would not be able to talk to computers outside your network (unless you are using a proxy server).
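The decision of when to use the gateway is simple bit arithmetic: the host ANDs both its own address and the destination with the subnet mask, and if the results differ, the packet goes to the default gateway. A toy sketch using the addresses from the ipconfig output above:

```shell
#!/bin/sh
# Sketch of the decision a host makes before sending a packet: if
# (dest AND mask) == (my_ip AND mask), the destination is on the local
# subnet and is reached directly (via ARP); otherwise the packet is sent
# to the default gateway.
ip_to_int() {
    oldifs=$IFS; IFS=.
    set -- $1                  # split the dotted quad into four octets
    IFS=$oldifs
    echo $(( ($1 << 24) | ($2 << 16) | ($3 << 8) | $4 ))
}
same_subnet() {
    m=$(ip_to_int "$3")
    [ $(( $(ip_to_int "$1") & m )) -eq $(( $(ip_to_int "$2") & m )) ]
}
same_subnet 192.168.1.7 192.168.1.254 255.255.255.0 && echo "local: send directly"
same_subnet 192.168.1.7 8.8.8.8 255.255.255.0 || echo "remote: send to gateway"
```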

5.     NAT and Private IP Addressing

Today, almost every LAN uses private IP addressing (based on RFC 1918) and then translates those private IPs to public IPs with NAT (network address translation). Private IP addresses always start with 10.x.x.x, 172.16.x.x through 172.31.x.x, or 192.168.x.x (the blocks defined in RFC 1918).
In Figure 2, you can see that we are using private IP addressing because the IP starts with "10". It is my integrated router/wireless/firewall/switch device that performs NAT, translating my private IP to the public Internet IP that my router was assigned by my ISP.
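Recognizing the RFC 1918 blocks by their leading octets is a handy interview reflex. A sketch of that check (note the 172 block runs only from 172.16 through 172.31):

```shell
#!/bin/sh
# Sketch: classify an IPv4 address as RFC 1918 private or public by its
# leading octets (10/8, 172.16/12, 192.168/16).
is_private() {
    case $1 in
        10.*)                                    return 0 ;;
        192.168.*)                               return 0 ;;
        172.1[6-9].*|172.2[0-9].*|172.3[0-1].*)  return 0 ;;
        *)                                       return 1 ;;
    esac
}
is_private 192.168.1.7 && echo private    # the PC address from Figure 1
is_private 8.8.8.8     || echo public
```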

6.     Firewalls

Firewalls protect your network from malicious attackers. You have software firewalls on your Windows PC or server, and hardware firewalls inside your router or in dedicated appliances. You can think of firewalls as traffic cops that only let in the traffic that should be allowed in.
For more information on firewalls, check out our firewall articles.

7.     LAN vs WAN

Your local area network (LAN) is usually contained within your building. It may or may not be just one IP subnet. Your LAN is connected by Ethernet switches and you do not need a router for the LAN to function. So, remember, your LAN is "local".
Your wide area network (WAN) is a "big network" that your LAN is attached to. The Internet is a humongous global WAN. However, most large companies have their own private WAN. WANs span multiple cities, states, countries, and continents. WANs are connected by routers.

8.     Routers

Routers route traffic between different IP subnets. Routers work at Layer 3 of the OSI model. Typically, routers route traffic from the LAN to the WAN, but in larger enterprise or campus environments, routers route traffic between multiple IP subnets on the same large LAN.
On small home networks, you can have an integrated device that combines router, firewall, multi-port switch, and wireless access point.

9.     Switches

Switches work at layer 2 of the OSI model and connect all the devices on the LAN. Switches switch frames based on the destination MAC address for that frame. Switches come in all sizes from small home integrated router/switch/firewall/wireless devices, all the way to very large Cisco Catalyst 6500 series switches.
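The "switching frames by destination MAC" behavior boils down to a learning table: the switch records which port each source MAC was seen on, forwards frames out the matching port, and floods frames whose destination it has not yet learned. A toy sketch (the table file and helper names are illustrative, not a real switch CLI):

```shell
#!/bin/sh
# Toy sketch of transparent switching: learn source MACs per port, then
# forward each frame by destination MAC, flooding when unknown.
TABLE=$(mktemp)
learn() { echo "$1 $2" >> "$TABLE"; }    # record: MAC $1 seen on port $2
lookup() {
    port=$(awk -v m="$1" '$1 == m { print $2 }' "$TABLE" | tail -n 1)
    if [ -n "$port" ]; then
        echo "forward out port $port"
    else
        echo "flood all ports"
    fi
}
learn 00-19-D1-20-32-DB 3                 # a frame from this MAC arrived on port 3
lookup 00-19-D1-20-32-DB                  # -> forward out port 3
lookup AA-BB-CC-DD-EE-FF                  # -> flood all ports (never seen)
```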

10. OSI Model encapsulation

One of the core networking concepts is the OSI Model. This is a theoretical model that defines how the various networking protocols, which work at different layers of the model, work together to accomplish communication across a network (like the Internet).
Unlike most of the other concepts above, the OSI model isn't something that network admins use every day. It matters mainly for those seeking certifications like the Cisco CCNA, for some of the Microsoft networking certification tests, or when an over-zealous interviewer really wants to quiz you.
For those who want to quiz you, here is the OSI model:
  • Application - layer 7 - any application using the network, examples include FTP and your web browser
  • Presentation - layer 6 - how the data sent is presented, examples include JPG graphics, ASCII, and XML
  • Session - layer 5 - for applications that keep track of sessions, examples are applications that use Remote Procedure Calls (RPC) like SQL and Exchange
  • Transport - layer 4 - provides reliable communication over the network to make sure that your data actually "gets there", with TCP being the most common transport layer protocol
  • Network - layer 3 - takes care of addressing on the network and helps route the packets, with IP being the most common network layer protocol. Routers function at Layer 3.
  • Data Link - layer 2 - transfers frames over the network using protocols like Ethernet and PPP. Switches function at layer 2.
  • Physical - layer 1 - controls the actual electrical signals sent over the network and includes cables, hubs, and actual network links.
At this point, let me stop downplaying the value of the OSI model. Even though it is theoretical, it is critical that network admins understand, and can visualize, how every piece of data on the network travels down and then back up this model: at every layer, the data from the layer above is encapsulated by the layer below, which adds its own header, and, as the data travels back up on the receiving side, it is de-encapsulated.
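The encapsulation idea above can be made concrete with a toy illustration: each layer simply prepends its own header to whatever it receives from the layer above.

```shell
#!/bin/sh
# Toy illustration of OSI encapsulation: each layer prepends a header to
# the data from the layer above; the receiver strips them in reverse order.
payload='GET /index.html'        # application data (layer 7)
segment="TCP|$payload"           # transport adds a TCP header (layer 4)
packet="IP|$segment"             # network adds an IP header (layer 3)
frame="ETH|$packet"              # data link adds an Ethernet header (layer 2)
echo "$frame"                    # -> ETH|IP|TCP|GET /index.html
```

On the wire, of course, the headers are binary structures rather than text tags, but the nesting order is exactly this.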

Sunday, February 27, 2011

Just to know about 10 Gigabit Ethernet

As the name suggests, 10 Gigabit Ethernet, referred to as 10GbE, can provide data transmission rates of up to 10 gigabits per second. 10 Gigabit Ethernet is defined in the IEEE 802.3ae standard.
There are a number of 10GbE implementations. Of these, 10GBaseSR is designed for LAN or MAN implementations, with a maximum distance of 300 meters using 50 micron multimode fiber-optic cabling. 10GBaseSR can also be implemented with 62.5 micron multimode fiber, but is limited to 33 meters in that configuration.
10GBaseLR and 10GBaseER are designed for use in MAN and WAN implementations, and are implemented using single mode fiber-optic cabling. 10GBaseLR has a maximum distance of 10 km, whereas 10GBaseER has a maximum distance of 40 km. Table 6 summarizes these characteristics.
Table 6 Summary of IEEE 802.3ae 10 Gigabit Ethernet Characteristics

                      10GBaseSR             10GBaseLR          10GBaseER
Transmission Method   Baseband              Baseband           Baseband
Speed                 10000Mbps             10000Mbps          10000Mbps
Distance              33m/300m              10,000m            40,000m
Cable Type            62.5 or 50 micron     Single mode        Single mode
                      multimode fiber       fiber              fiber
Connector Type        Fiber connectors      Fiber connectors   Fiber connectors

Saturday, February 26, 2011

Monitor your changed files in real-time in Linux

Everybody knows top or htop. Ever wished there was something similar but to monitor your files instead of CPU usage and processes? Well, there is.
Run this:
watch -d -n 2 'df; ls -FlAt;'
and you’ll get to spy on which files are getting written on your system. Every time a file gets modified it will get highlighted for a second or so. The above command is useful when you grant someone SSH access to your box and wish to know exactly what they’re modifying.

Record your Linux desktop from the command line

If you do not wish to install a dedicated application for recording your desktop, you can do it with this one-liner:
ffmpeg -f x11grab -s wxga -r 25 -i :0.0 -sameq /home/user/out.mpg
All you need is to have ffmpeg installed (and most systems do). Navigate to /home/user and you'll find your MPEG video file there.

Wednesday, February 23, 2011

Why Blade Servers are popular in Large Server Deployments

Blade servers are chassis-based server modules. In this article, we will look at the various components that go into a blade server chassis as well as the advantages of blade servers (over rack-mount servers) for large server deployments.

First, what is a Chassis?

A chassis is a backplane module with empty slots into which front-end modules (like blade servers) and add-on modules can be inserted. All the individual modules/components are inserted into the empty chassis slots, or elsewhere in the chassis as add-on modules. The chassis comes with a fixed number of slots, and with many vendors the chassis as a whole can be mounted in a standard 19″ server rack. The major components of a blade server chassis are listed below:
  • Server modules (blades) with multi-core processors, RAM, hard disks, etc. These blades are inserted into the empty front-end slots of the chassis.
  • Common Power Supply Modules.
  • Common Fan/Cooling Modules.
  • Midspan connections (usually specialized cables) that connect specific add-on/integrated Chassis modules and Blade modules.
  • Common Input/Output expansion modules (Eg. PCI Express modules).
  • Specialized expansion modules (Like RAID expansion modules, etc).
  • Storage Module(s).
  • Networking Module(s) (Switch ports, Ethernet cables, etc).

Advantages of Blade Servers (Over Individual Rack Mount Servers):

Redundancy:
This is one of the most important advantages of chassis-based blade servers. Blade servers introduce redundancy in many components: the chassis contains (n+1) power supply modules (common to all the blades), which provide redundancy if an individual power supply module fails. This is better than providing (n+1) power supply modules within each server. These modules can be powered from different grids/different phases for additional power resiliency. The fan modules (used for cooling) also have an n+1 configuration for redundancy. Even the individual connections from the blade servers to the chassis backplane, which provide common I/O, switch ports, etc, can be made redundant by providing additional connections that run in parallel.
Consolidation:
Since blade servers are generally used for larger server installations, a number of components can be consolidated for the entire chassis instead of being provisioned (usually in excess) for each blade. For example, the I/O modules (e.g. PCI Express modules) can be consolidated in the chassis, and any of the blade servers can use them as and when required. This is better than provisioning multiple I/O modules in every server. With consolidated provisioning, fewer modules are required, reducing the overall cost. Even the power supply and cooling modules are consolidated across the entire chassis, which is better than having individual power and cooling modules for every server, as fewer modules are used.
Centralized Management:
Most of the components in blade servers can be centrally managed using a unified management application, and some vendors allow for remote management/ administration too. So, along with the blade servers, the storage modules, networking modules, I/O modules, power modules, cooling modules etc, can be administered as a single system. Sometimes, even multiple such Chassis (from a single vendor) can be managed from a single system. This makes the administration of blade servers easier.
Easy Servicing/ Replacement:
Chassis-based blade servers generally offer easily removable, and in most cases hot-pluggable, individual modules. For example, if one of the fan/cooling modules is not working, it can be removed and replaced without switching off the entire chassis. This provides for easy and uninterrupted replacement of individual chassis modules.
More Processing Power:
Blade Servers can accommodate more processing power, per rack unit within each server module. This is possible as the cooling and power supply modules are removed from the individual blade servers and consolidated on the chassis. This extra space can be used to accommodate more/ higher capacity processors. As you can imagine, this creates more heat inside the server modules,  and vendors offer additional fan modules/ liquid cooling systems to offset the higher levels of generated heat.
Bulk Deployment/ Scaling of Servers:
Blade servers with chassis are useful for bulk deployments and offer a way to quickly scale up the number of servers. When all the slots of a chassis are filled with blade modules, and a number of such chassis are deployed, the resulting configuration has a higher degree of efficiency, reliability, and ease of management and maintenance compared to a large number of individual rack server deployments. If additional servers need to be added, they can simply be inserted into empty slots in the chassis (or additional chassis modules can be added) and the whole system brought up and running quickly.
Preloaded Operating Systems/ Server Virtualization Software:
Certain blade server manufacturers offer their server modules preloaded with operating systems as well as server virtualization software. This brings down deployment time drastically and also gives tighter hardware-software integration. With some vendors, even individual processors/threads can be dedicated to certain virtual systems and enabled/disabled individually.
Wide range of deployment:
Blade servers can be deployed for small as well as large installations. A chassis can hold just a few servers (for example, six blade modules), or clusters of hundreds of interconnected blade chassis can be formed. Some blade servers occupy just half the width of a regular rack-mount server, so two of them can fit in a 1U slot, but such servers may come with reduced processing and other capabilities, depending on the vendor.
Integrated Storage Modules:
Some blade server vendors simplify the deployment of storage systems by offering separate storage modules that can be inserted into the same chassis slots (like the blade server modules). This enables integrated server/storage management. Some vendors offer SSDs (solid state drives) to store data with higher reliability, and implement RAID-based hard disk storage within each server module, out of the box.
Integrated Network Switch Modules:
Some blade server vendors offer integrated network switch ports within the chassis, either as add-on modules or as separate full-fledged switching modules. All the server modules within the chassis can connect to the switching module in the chassis itself, and multiple chassis can also be interconnected using these switch modules. Since the chassis itself comes with Ethernet ports, the number of ports and cables required to interconnect with separate external Ethernet switches is reduced to a good extent.
Specialized Functions:
Certain blade server manufacturers offer specialized functions like remote KVM modules (for operating the server’s keyboard/mouse/monitor from a remote location), remote power-on/ power-off functions and various other functionalities that make deployment and management of these chassis based blade servers easier.

Saturday, February 19, 2011

Explore domain name resolution tools in Linux

Often, we are faced with issues pertaining to DNS name resolution. In this series of articles, I will explore different tools available in Linux that can help with DNS name resolution. First we will look at the utility called host.
host is the most basic and simple utility for performing DNS lookups. In normal usage it resolves names to IPs. For example, in the following command we ask host to give us the IP of www.example.com:
host www.example.com
www.example.com has address 208.77.188.166
As you can see, we got the IP that www.example.com points to.
For more detailed (verbose) output, we have the -v or -d option
host -v www.example.com
Trying "www.example.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 46859
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 2, ADDITIONAL: 2

;; QUESTION SECTION:
;www.example.com.               IN      A

;; ANSWER SECTION:
www.example.com.        172764  IN      A       208.77.188.166

;; AUTHORITY SECTION:
example.com.            172709  IN      NS      b.iana-servers.net.
example.com.            172709  IN      NS      a.iana-servers.net.

;; ADDITIONAL SECTION:
a.iana-servers.net.     67316   IN      A       192.0.34.43
b.iana-servers.net.     172709  IN      A       193.0.0.236

Received 129 bytes from 192.168.23.1#53 in 5 ms
Trying "www.example.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 20897
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;www.example.com.               IN      AAAA

;; AUTHORITY SECTION:
example.com.            10764   IN      SOA     dns1.icann.org. hostmaster.icann.org. 2007051703 7200 3600 1209600 86400

Received 94 bytes from 192.168.23.1#53 in 2 ms
Trying "www.example.com"
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 13380
;; flags: qr rd ra; QUERY: 1, ANSWER: 0, AUTHORITY: 1, ADDITIONAL: 0

;; QUESTION SECTION:
;www.example.com.               IN      MX

;; AUTHORITY SECTION:
example.com.            10764   IN      SOA     dns1.icann.org. hostmaster.icann.org. 2007051703 7200 3600 1209600 86400

Received 94 bytes from 192.168.23.1#53 in 14 ms
If you are interested in a particular type of record, such as CNAME, MX, NS, SOA, SIG, KEY, or AXFR, use the -t option. By default host looks for an A record. The following examples search for name server (NS), mail exchanger (MX), and Start of Authority (SOA) records for the google.com domain, and the last example gives the domain name that the IP 64.233.169.99 points to (a pointer, or PTR, record):
host -t NS google.com
google.com name server ns2.google.com.
google.com name server ns3.google.com.
google.com name server ns4.google.com.
google.com name server ns1.google.com.

host -t MX google.com
google.com mail is handled by 100 smtp2.google.com.
google.com mail is handled by 10 google.com.s9a1.psmtp.com.
google.com mail is handled by 10 google.com.s9a2.psmtp.com.
google.com mail is handled by 10 google.com.s9b1.psmtp.com.
google.com mail is handled by 10 google.com.s9b2.psmtp.com.
google.com mail is handled by 100 smtp1.google.com.

host -t SOA google.com
google.com has SOA record ns1.google.com. dns-admin.google.com. 1393514 7200 1800 1209600 300

host -t PTR 64.233.169.99
99.169.233.64.in-addr.arpa domain name pointer yo-in-f99.google.com.
To display the SOA records for a zone from all of that zone's listed authoritative name servers, use the -C option. The list of name servers is defined by the NS records found for the zone.
host -C example.com
Nameserver b.iana-servers.net:
        example.com has SOA record dns1.icann.org. hostmaster.icann.org. 2007051703 7200 3600 1209600 86400
Nameserver a.iana-servers.net:
        example.com has SOA record dns1.icann.org. hostmaster.icann.org. 2007051703 7200 3600 1209600 86400
To have host retry a UDP query that goes unanswered, use -R with the number of tries. The following example will try up to three times to resolve www.example.com if a query goes unanswered:
host -R 3 www.example.com
www.example.com has address 208.77.188.166
By default host uses UDP when making queries. The -T option makes it use a TCP connection when querying the name server. TCP will be automatically selected for queries that require it, such as zone transfer (AXFR) requests.
host -T www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 64.233.169.104
www.l.google.com has address 64.233.169.147
www.l.google.com has address 64.233.169.99
www.l.google.com has address 64.233.169.103
If you want to wait longer than the default for an answer (maybe you are on a slow connection), use -W with the number of seconds to wait. If the wait is less than one, the interval is set to one second.
host -W 5 www.google.com
www.google.com is an alias for www.l.google.com.
www.l.google.com has address 64.233.169.147
www.l.google.com has address 64.233.169.99
www.l.google.com has address 64.233.169.103
www.l.google.com has address 64.233.169.104
host uses the name servers configured in /etc/resolv.conf. If you want it to query another name server, specify it at the end of the command, as either the name or the IP address of the name server host should query.
host www.google.ca ns4.google.com
Using domain server:
Name: ns4.google.com
Address: 216.239.38.10#53
Aliases:

www.google.ca is an alias for www.google.com.
www.google.com is an alias for www.l.google.com.

Why are Proxy Servers required in a network?

Proxy Server: A proxy server generally sits at the gateway of a corporate network. When a client (computer) requests an object (web page, image, etc) from an origin server (a server hosted publicly on the Internet), the proxy server intercepts the communication and checks whether that object is already present in its cache (when caching is enabled). If it is present, the proxy provides it to the client itself. If it is not present, the request is forwarded over the Internet to the origin server. Generally, while forwarding such requests, the client's IP address is replaced with a common IP address so that the IP addresses of individual clients are not exposed outside the corporate network. The origin server receives the request (which it believes came directly from the client), processes it, and sends back the response. The proxy server receives the response, changes the destination IP address back to the client's original IP address, and sends it to the client.
As you can see, the proxy server acts as a forwarder of communications between the client(s) and the origin server (generally over the Internet). The following are some of the main functions that can be performed by the proxy server:
Caching: A proxy server caches (stores) frequently requested content (web objects) and serves it directly to clients when requested. This saves a lot of bandwidth and helps reduce latency (the time between an object's request and its arrival).
Security: Since all the communications from the corporate network go through the proxy server, it can perform certain security-related operations like URL filtering (blocking certain websites based on IP/category), content filtering (blocking certain content from leaving the network), etc.
Anonymity: Since the proxy server presents a common IP address to the Internet, the individual IP addresses of clients on the corporate network are not exposed. Some proxy servers can forward the request over multiple proxy servers to make it difficult for anyone on the Internet to guess the corporate IP address.
SSL Offloading: Some proxy servers can handle encryption/ decryption functions on behalf of the clients and hence can reduce client performance bottlenecks due to those processes, as well as see through the encrypted traffic.
Logs: Proxy servers can maintain logs (URL information, time at which it was requested, longevity of the web sessions, etc) and these logs can be retrieved when required.
Proxy servers are available as open source as well as commercial software. These functions can also be performed in part by devices like URL filtering appliances, UTM (Unified Threat Management) appliances, application delivery controllers, routers, etc.
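The caching behavior described above reduces to a simple check-then-fetch loop. A toy sketch, where `fetch_origin` is a hypothetical stand-in for the real request to the origin server:

```shell
#!/bin/sh
# Toy sketch of a caching proxy's core loop: serve an object from the
# cache when present, otherwise fetch it from the origin and store it.
CACHE_DIR=$(mktemp -d)
fetch_origin() { echo "content-of-$1"; }        # stand-in for the real fetch
proxy_get() {
    key=$(echo "$1" | tr -c 'A-Za-z0-9' '_')    # filename-safe cache key
    if [ -f "$CACHE_DIR/$key" ]; then
        echo HIT
    else
        fetch_origin "$1" > "$CACHE_DIR/$key"
        echo MISS
    fi
    cat "$CACHE_DIR/$key"
}
proxy_get http://example.com/page    # first request: MISS, fetched from origin
proxy_get http://example.com/page    # repeat request: HIT, served from cache
```

A production proxy such as Squid adds expiry, validation, and access control on top of exactly this hit/miss decision.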

Friday, February 18, 2011

What is a Link Load Balancer in Wide Area Networking

What is a Link Load Balancer?
As consolidated data centres become more popular, the WAN links from the head office and branch offices/remote sites have become crucial for business. So companies buy bandwidth from different service providers, often at double the required capacity (or less), just to have a backup in case one provider's service suffers an outage. Such backup links are often under-used, and failover when the primary link fails is generally not instantaneous.
So, instead of terminating and routing each incoming service provider link into the organisation separately, a link load balancer terminates all these incoming lines at the gateway level and provides a uniform interface for the internal systems to connect to.
Advantages of Link Load Balancers:
If there is an outage with a particular service provider, all users and active sessions are seamlessly transferred to the other active lines. Lines provisioned for backup are also used by regular users, since the link load balancer provides a uniform interface through which any system can reach any line; that gives users some additional bandwidth capacity. Bandwidth allocation and bandwidth shaping can be done at the WAN gateway level using these link load balancers, so that critical applications always get their share of bandwidth and chatty non-critical applications do not take up all of it. This approach also lets an organization route certain users over certain lines, based on the best-performing link to a destination (or least-cost routing, application type, application priority, etc).
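At its simplest, the failover behavior described above is just "send new sessions out the first healthy link." A toy sketch (link names and states are illustrative; real devices also weigh load, cost, and per-application policy):

```shell
#!/bin/sh
# Toy failover sketch: choose the first WAN link whose state is "up".
# Each argument is a hypothetical "name:state" pair.
pick_link() {
    for link in "$@"; do
        name=${link%%:*}
        state=${link##*:}
        if [ "$state" = up ]; then
            echo "$name"
            return 0
        fi
    done
    echo "no link available"
    return 1
}
pick_link isp1:down isp2:up    # -> isp2 (primary has an outage)
pick_link isp1:up isp2:up      # -> isp1 (primary healthy again)
```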
Additional Features of a Link Load Balancer:
Certain link load balancers also provide VPN and data compression services from the branch to the data centre/head office. They also allow multiple types of WAN links to be connected (MPLS VPN, leased lines, Internet leased lines, broadband, 3G mobile via USB, etc), depending on the vendor and model. Two such appliances can work in high-availability mode in case one of them fails. Some also allow each link to be used only up to a certain maximum bandwidth (as some carriers charge more when usage exceeds a particular limit). QoS mechanisms help set priority for certain types of traffic (like video and voice) that are critical and delay-sensitive.
When a link is down, users are accommodated on the other links and a notification is sent to the administrator via email or cellphone, as most link load balancers can integrate with syslog servers or SNMP network management software.

How Stacking Multiple Network Switches helps to build a more resilient LAN

Summary:

There are a number of ways to connect network switches in a LAN. Stacking a group of switches into a single stack unit, managed through one IP address for all the switches in the stack group, not only makes managing many switches easier, it also introduces switch-level and link-level redundancy. In this article, we analyze a few salient points about stacking multiple switches together and how various vendors have implemented it, in order to get a better idea of stacking and its advantages.

Salient Points you need to know about Stacking Switches:

  • There is a single IP address for managing all the switches in the stack. This reduces the total number of IP addresses required in the network. Any configuration change can be made once (usually on the master switch) and is applied to all the other switches automatically. With many vendors, routing, multicasting, spanning tree, QoS, link aggregation, etc can be applied globally to the entire stack. With some vendors, the stack group can be managed from their own utilities as well as from any SNMP-based management utility.
  • Usually there is a master switch (either nominated by the user or selected automatically by the stack) which controls the operation of the entire stack. All configuration is generally done on the master switch and automatically propagated to the other switches. If the master switch fails, another member of the stack group usually takes over as master.
  • One of the main advantages of stacking multiple switches together is to achieve switch-level and link-level redundancy. In the above diagram, a stacking (ring-based) architecture is shown. Suppose a cable in that stack configuration goes down; the switch where the cable is connected can still be reached via the other cable connected to it. Even if a switch goes down, all the other switches remain accessible to each other.
  • Based on the vendor, there is generally a limit to the maximum number of switches that can be stacked together, such as 6, 8, 12, or 16. Also, there are no standards for stacking, so switches from different vendors (and sometimes even different switch families from the same vendor) cannot be stacked together. Some vendors, however, offer stacking capabilities across their own families of switches.
  • Generally, there are separate stacking ports and separate stacking cables. Most of these ports/cables are proprietary and offer more throughput than the standard Ethernet ports (for example, 1 GE) they carry. The maximum distance supported by these cables is limited (for example, 3 meters). Some vendors support stacking over normal standards-based Ethernet ports (1 GE/10 GE) and the usual Cat-x cables, which can extend the stack up to 100 meters, sometimes even through other network switches. This technique is also referred to as clustering.
  • connecting multiple=With some vendors, its possible to connect two separate stack groups (of switches) using two or more links with link aggregation enabled between them so that even if one link fails, the other link(s) keep operating and all the links operate with the combined throughput of the individual links (If each link supports 1 GE, then the link operates at n x 1GE where n is the number of Link Aggregated links).  With some vendors, these links can be from different switches in each stack group in order to provide link level as well as switch level redundancy for interconnection of multiple stack groups.
  • Its possible (with some vendors) to apply certain functionalities, to any port of any switch in the stack group as if all the ports belong to one big switch. For example, port mirroring can be applied to the traffic passing through multiple switch ports in various switches of the stack group, and still the traffic from all these ports can be observed from one port – in the master switch (for example).
  • The switches that form the stack group are flexible – its possible to operate them as a part of a stack group today, and operate them as individual switches tomorrow.
  • Stacking switches might be more cost effective to deploy, especially in case of deploying low/medium end switches & in case of small deployments which gradually grow to become bigger over a period of time. This is in comparison to chassis based higher end switches.
  • With some vendors its possible to remove and replace entire switches from a stack group without affecting the working of other switches in the stack. When new switches are introduced in the stack, it quickly takes the configuration from the other switches and becomes a member of the stack without much user intervention.
  • When a link in a stack group fails, the fail-over time taken for the stack to reconfigure and send traffic through other links is very minimum (usually within a second). This is much better when compared to other similar processes like STP/VRRP, etc.
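Link aggregation of the kind mentioned above also exists as a host-side feature. Purely as an illustration (this is Linux NIC bonding, not vendor switch stacking; the interface names and addresses are made up), a Debian-style /etc/network/interfaces sketch:

```
# bond eth0 and eth1 into one logical interface with LACP (802.3ad);
# traffic uses both links and survives the failure of either one
auto bond0
iface bond0 inet static
    address 192.168.1.10
    netmask 255.255.255.0
    bond-slaves eth0 eth1
    bond-mode 802.3ad
    bond-miimon 100
```

The switch ports the two NICs plug into must also be configured for link aggregation for this to work.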

Sending mails from command line

Sending mails using mail:
mail (mailx is the newer version) is a handy program that can be used for sending email from the command line or from within scripts.
The following example will send an email to admin@example.com, with subject "Apache is down" and the hostname of the server in the body text:
echo "Please check Apache at `hostname`" | mail -s "Apache is down" admin@example.com
We can also cat the contents of any text file, for example a log file, and it will be sent to the specified recipient:
cat /var/log/apache2/error.log | mail -s "Apache is down" admin@example.com
To attach a file other than a text one, we need to uuencode (unix-to-unix encode) it before sending:
uuencode banner.jpg banner_out.jpg | mail webmaster@example.com
Here banner.jpg is the name of the input file and banner_out.jpg is the name the uuencoded attachment will carry in the mail.
To have text sent along with the attachment, we can cat or echo that text too:
(cat /var/log/apache2/error.log; uuencode banner.jpg banner.jpg) | mail -s pic webmaster@example.com

Sending mails using mutt:
With mutt, it's much the same as using mail:
echo "Please check Apache at `hostname`" | mutt -s "Apache is down" admin@example.com
or we can cat the contents of a text file to use as the body text:
cat /var/log/apache2/error.log | mutt -s "Apache is down" admin@example.com
or, equivalently, redirect the file into mutt:
mutt -s "Apache is down" admin@example.com < /var/log/apache2/error.log
To send a mail with an empty body, pipe in an empty line:
echo | mutt -s "Software upgrades for `hostname`" admin@example.com
To attach a binary file, it's even easier with mutt; just use the -a option:
echo | mutt -s "New logo for the company" -a logo.gif webmaster@example.com
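The one-liners above fit naturally into a small script run from cron. A sketch (the service name and recipient are illustrative, and the compose_alert helper is mine, not a standard tool):

```shell
#!/bin/sh
# compose_alert prints the body text used by the mail/mutt examples above
compose_alert() {
    service="$1"
    printf 'Please check %s at %s\n' "$service" "$(hostname)"
}

# in a real cron job, pipe the body into mail (or mutt), e.g.:
#   compose_alert Apache | mail -s "Apache is down" admin@example.com
compose_alert Apache
```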
Hope this tutorial added to your knowledge.

Wednesday, February 16, 2011

Sharing a directory using nfs

First install the NFS server (I am on Debian 5.0; other distributions will have a similarly named nfs package):
aptitude install nfs-kernel-server
Once the NFS server is installed, you need to export the directory via the /etc/exports file. The format of the file is:
dir_to_be_exported allowed_hosts(options)
I am about to export my home dir, allowing only 192.168.2.10 to mount it, so in /etc/exports I would add the following:
/home/linuxgravity 192.168.2.10(rw,sync,no_subtree_check)
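For reference, /etc/exports can hold several entries and allow more than one client. The extra line below (the subnet export with the ro option) is my own illustration, not part of this setup:

```
# single host, read-write (as above)
/home/linuxgravity   192.168.2.10(rw,sync,no_subtree_check)
# whole subnet, read-only
/srv/public          192.168.2.0/24(ro,sync,no_subtree_check)
```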
That's it. That was easy, wasn't it?
Restart nfs server:
/etc/init.d/nfs-kernel-server restart
Now it is time to mount the exported directory on an empty directory on the client. So, while on 192.168.2.10, first create a directory:
mkdir /home/remote_home
Now just mount it with the following command:
mount 192.168.2.2:/home/linuxgravity /home/remote_home
As you have already figured out, the format is mount nfs_server_ip:exported_dir mount_point
And now you can just read from/write to /home/remote_home easily.
The whole process takes less than two minutes.
Tip: running the showmount command on the NFS server will show you which clients have mounted the exported directory.
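To make the mount survive reboots, the client can carry an /etc/fstab entry instead of a manual mount. A sketch using the paths from this example (the mount options are my suggestion):

```
# /etc/fstab on the client (192.168.2.10)
192.168.2.2:/home/linuxgravity  /home/remote_home  nfs  rw,hard,intr  0  0
```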

Tuesday, February 15, 2011

VPN client with windows XP

We have already set up a VPN server with XP; now let's configure a client.

1.Go to Start / Settings / Network Connections
2.Start the New Connection Wizard
3.Select Connect to the network at my workplace
4.Click on the Next button.
5.Click on Virtual Private Network connection
6.Click on the Next button
7.Give the Connection a Name
8.Click on the Next button
9.If prompted, select whether or not you need to dial to the Internet before establishing a VPN connection.
Enter the IP address of the server you want to connect to. This needs to be the external WAN IP address used by the VPN server.
10.Check whether you want to have an icon placed on the desktop and click on the Finish button.


Thanks







Connect to Windows shares from Linux ( Samba Guide)

This guide shows how to access Windows shares from a Linux PC.
First we need to install smbclient
On Debian
sudo aptitude install smbclient
It will ask
1. for workgroup, type in the same workgroup you have configured on Windows
2. whether or not to use encrypted passwords, select 'Yes'
3. if it should get WINS server’s IP address from DHCP or not, select ‘Yes’ or ‘No’ as appropriate.
On Red Hat/CentOS/SUSE
yum install samba-client
Let's first find which machines are advertising their shares:
findsmb
                                *=DMB
                                +=LMB
IP ADDR          NETBIOS NAME     WORKGROUP/OS/VERSION
---------------------------------------------------------------------
192.168.1.2   Rahul-PC      +[WORKGROUP] [Windows 5.1] [Windows 2000 LAN Manager]
192.168.1.3   Harish-PC     +[WORKGROUP] [Windows 5.1] [Windows 2000 LAN Manager]
Next, find what is shared on the Windows box:
smbclient -L windows_machine_name -U username
For me it would be like this
smbclient -L Rahul-PC -U testuser
Password:
Domain=[Rahul-PC] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]

        Sharename       Type      Comment
        ---------                ----           -------
        IPC$                 IPC           Remote IPC
        REPORTS             Disk
        ADMIN$              Disk           Remote Admin
        C$                  Disk           Default share
Domain=[Rahul-PC] OS=[Windows 5.1] Server=[Windows 2000 LAN Manager]

        Server                   Comment
        ---------                    -------

        Workgroup                Master
        ---------                    -------
Provide the password for the testuser account that exists on the Windows PC.
Now we want to mount a shared directory
First create a directory to mount the share on
mkdir /mnt/mount_point_name
In my case, it is
mkdir /mnt/reports
Next mount the share on the above created mount point directory
mount -t smbfs -o username=windows_username,password=windows_pc_password //windows_pc_name_or_IP/share_name /mnt/mount_point_name
For me, it looks like below
mount -t smbfs -o username=testuser,password=secret //192.168.1.2/C$ /mnt/reports
Change directory to the mount point:
cd /mnt/reports
and list the files/folders in the Windows share:
ls
Now you can copy to/from it as if it were a local directory.
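On recent kernels the smbfs filesystem type has been replaced by cifs. A hedged sketch of a persistent-mount equivalent (the share name and the credentials-file path are my own illustration):

```
# /etc/fstab -- mount the Windows share at boot using cifs
//192.168.1.2/REPORTS  /mnt/reports  cifs  credentials=/root/.smbcredentials  0  0
```

with the credentials kept out of the command line in /root/.smbcredentials (chmod 600):

```
username=testuser
password=secret
```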

Sunday, February 13, 2011

Securing your Wireless Network ( For Home Users)

These days wireless networking products are so ubiquitous and inexpensive that just about anyone can set up a WLAN in a matter of minutes with less than $100 worth of equipment. This widespread use of wireless networks means that there may be dozens of potential network intruders lurking within range of your home or office WLAN. 
 
Most WLAN hardware has gotten easy enough to set up that many users simply plug it in and start using the network without giving much thought to security. Nevertheless, taking a few extra minutes to configure the security features of your wireless router or access point is time well spent. Here are some of the things you can do to protect your wireless network:
1) Secure your wireless router or access point administration interface
Almost all routers and access points have an administrator password that's needed to log into the device and modify any configuration settings. Most devices use a weak default password like "password" or the manufacturer's name, and some don't have a default password at all. As soon as you set up a new WLAN router or access point, your first step should be to change the default password to something else. You may not use this password very often, so be sure to write it down in a safe place so you can refer to it if needed. Without it, the only way to access the router or access point may be to reset it to factory default settings which will wipe away any configuration changes you've made.
2) Don't broadcast your SSID
Most WLAN access points and routers automatically (and continually) broadcast the network's name, or SSID (Service Set IDentifier). This makes setting up wireless clients extremely convenient since you can locate a WLAN without having to know what it's called, but it will also make your WLAN visible to any wireless systems within range of it. Turning off SSID broadcast for your network makes it invisible to your neighbors and passers-by (though it will still be detectable by WLAN "sniffers").
3) Enable WPA encryption instead of WEP
802.11's WEP (Wired Equivalency Privacy) encryption has well-known weaknesses that make it relatively easy for a determined user with the right equipment to crack the encryption and access the wireless network. A better way to protect your WLAN is with WPA (Wi-Fi Protected Access). WPA provides much better protection and is also easier to use, since your password characters aren't limited to 0-9 and A-F as they are with WEP. WPA support is built into Windows XP (with the latest Service Pack) and virtually all modern wireless hardware and operating systems. A more recent version, WPA2, is found in newer hardware and provides even stronger encryption, but you'll probably need to download an XP patch in order to use it.  
4) Remember that WEP is better than nothing 
If you find that some of your wireless devices only support WEP encryption (this is often the case with non-PC devices like media players, PDAs, and DVRs), avoid the temptation to skip encryption entirely, because in spite of its flaws, using WEP is still far superior to having no encryption at all. If you do use WEP, don't use an encryption key that's easy to guess like a string of the same or consecutive numbers. Also, although it can be a pain, WEP users should change encryption keys often -- preferably every week.
5) Use MAC filtering for access control
Unlike IP addresses, MAC addresses are unique to specific network adapters, so by turning on MAC filtering you can limit network access to only your systems (or those you know about). In order to use MAC filtering you need to find (and enter into the router or AP) the 12-character MAC address of every system that will connect to the network, so it can be inconvenient to set up, especially if you have a lot of wireless clients or if your clients change a lot. MAC addresses can be "spoofed" (imitated) by a knowledgeable person, so while it's not a guarantee of security, it does add another hurdle for potential intruders to jump.
6) Reduce your WLAN transmitter power
You won't find this feature on all wireless routers and access points, but some allow you to lower the power of your WLAN transmitter and thus reduce the range of the signal. Although it's usually impossible to fine-tune a signal so precisely that it won't leak outside your home or business, with some trial-and-error you can often limit how far outside your premises the signal reaches, minimizing the opportunity for outsiders to access your WLAN.
7) Disable remote administration
Most WLAN routers have the ability to be remotely administered via the Internet. Ideally, you should use this feature only if it lets you define a specific IP address or limited range of addresses that will be able to access the router. Otherwise, almost anyone anywhere could potentially find and access your router. As a rule, unless you absolutely need this capability, it's best to keep remote administration turned off. (It's usually turned off by default, but it's always a good idea to check.)

 Thanks...

Friday, February 11, 2011

Resetting root password on Centos/Red Hat


Boot the system and when you see the GRUB screen (grub1.png), press any key.
At the following screen (grub2.png), press e.
It will take you to the following screen (grub3.png).
Highlight the line with vmlinuz in it by using the arrow keys and press e. The next screen will look like the one below (grub4.png).
Now type single (or 1) at the very end of the line (grub5.png).
Then press Enter, and then b, to boot the system with the new argument.
The system will boot into single-user mode and you will see a bash prompt like the one below (grub6.png).
Now change the password:
passwd
Enter new UNIX password:
Retype new UNIX password:
passwd: password updated successfully
Type in the new password twice, then run
reboot
to restart the system.


You are Done.

Thursday, February 10, 2011

Configure windows XP as VPN server

1. Click Start --> Controlpanel.
2. Open the Network Connections.
3. Open the New Connection Wizard.
4. Click Next.
5. Select the 'Set Up An Advanced Connection' option.
6. Then select the Accept Incoming Connections option and click Next.
7. We can select the optional devices on which to accept incoming connections.
8. On the Incoming Virtual Private Network Connection page, select the Allow Virtual Private Connections option and click Next.
9. On the User Permissions page, select the users that are allowed to make incoming VPN connections. Click Next.
10. On the Networking Software page, click on the Internet Protocol (TCP/IP) entry and click the Properties button.
11. In the Incoming TCP/IP Properties dialog box, place a check mark in the Allow Callers To Access My Local Area Network check box. This will allow VPN callers to connect to other computers on the LAN. Click OK and then click Next.
12. Click Finish to create the connection.

You are Done......Please drop a comment if you have any queries.



Tuesday, February 8, 2011

Install and Configure DHCP server in ubuntu


Install DHCP server in ubuntu
sudo apt-get install dhcp3-server    # for installation
Configuring DHCP server
You should select which interface you want the DHCP server to listen on. By default it is eth0.
Do this by editing  /etc/default/dhcp3-server file
sudo vi /etc/default/dhcp3-server
Find this line
INTERFACES="eth0"
Replace with the following line
INTERFACES="eth1"
Save and exit.
Using Address Pool
You need to change the following sections in /etc/dhcp3/dhcpd.conf file
default-lease-time 600;
max-lease-time 7200;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.1.255;
option routers 192.168.1.254;
option domain-name-servers 192.168.1.1, 192.168.1.2;
option domain-name "yourdomainname.com";
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.10 192.168.1.200;
}
save and exit the file
This will result in the DHCP server giving a client an IP address from the range 192.168.1.10-192.168.1.200 . It will lease an IP address for 600 seconds if the client doesn’t ask for a specific time frame. Otherwise the maximum (allowed) lease will be 7200 seconds. The server will also “advise” the client that it should use 255.255.255.0 as its subnet mask, 192.168.1.255 as its broadcast address, 192.168.1.254 as the router/gateway and 192.168.1.1 and 192.168.1.2 as its DNS servers.
Using MAC addresses (fixed IPs for machines)
default-lease-time 600;
max-lease-time 7200;
option subnet-mask 255.255.255.0;
option broadcast-address 192.168.1.255;
option routers 192.168.1.254;
option domain-name-servers 192.168.1.1, 192.168.1.2;
option domain-name "yourdomainname.com";
subnet 192.168.1.0 netmask 255.255.255.0 {
range 192.168.1.10 192.168.1.200;
}
host serverA {
hardware ethernet <mac>;
fixed-address 192.168.1.1;
}
host serverB {
hardware ethernet <mac>;
fixed-address 192.168.1.2;
}
host ClientA {
hardware ethernet <mac>;
fixed-address 192.168.1.3;
}
host PrinterA {
hardware ethernet <mac>;
fixed-address 192.168.1.4;
}
Now you need to restart dhcp server
sudo /etc/init.d/dhcp3-server restart
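As a quick sanity check on the host blocks above, the host-to-address mappings can be pulled out of a dhcpd.conf-style file. The awk one-liner below is my own sketch (it assumes the simple one-directive-per-line layout shown in this post), demonstrated here on an inline sample; point it at /etc/dhcp3/dhcpd.conf in practice:

```shell
# print each host name together with its fixed-address
awk '/^[[:space:]]*host /  { name = $2 }
     /fixed-address/       { gsub(/;/, "", $2); print name, $2 }' <<'EOF'
host serverA {
hardware ethernet 00:16:3e:11:22:33;
fixed-address 192.168.1.1;
}
host serverB {
hardware ethernet 00:16:3e:44:55:66;
fixed-address 192.168.1.2;
}
EOF
```

On the sample input this prints one "name address" pair per host block.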


Configure Subnet

nano -w /etc/dhcp3/dhcpd.conf
ddns-update-style none;
log-facility local7;

subnet 192.168.1.0 netmask 255.255.255.0 {

        option routers                  192.168.1.1;
        option subnet-mask              255.255.255.0;
        option broadcast-address        192.168.1.255;
        option domain-name-servers      194.168.4.100;
        option ntp-servers              192.168.1.1;
        option netbios-name-servers     192.168.1.1;
        option netbios-node-type 2;
        default-lease-time 86400;
        max-lease-time 86400;

        host bla1 {
                hardware ethernet 00:1A:2B:3C:4D:5E;
                fixed-address 192.168.1.2;
        }
        host bla2 {
                hardware ethernet 00:1A:2B:3C:4D:5F;
                fixed-address 192.168.1.20;
        }
}

subnet  10.152.187.0 netmask 255.255.255.0 {

        option routers                  10.152.187.1;
        option subnet-mask              255.255.255.0;
        option broadcast-address        10.152.187.255;
        option domain-name-servers      194.168.4.100;
        option ntp-servers              10.152.187.1;
        option netbios-name-servers     10.152.187.1;
        option netbios-node-type 2;

        default-lease-time 86400;
        max-lease-time 86400;

        host bla3 {
                hardware ethernet 00:1A:2B:66:55:9B;
                fixed-address 10.152.187.2;
        }
}

Configure Ubuntu DHCP Client
If you want to configure your ubuntu desktop as a DHCP client,
you need to open the /etc/network/interfaces file:
sudo vi /etc/network/interfaces
make sure you have the following lines (eth0 is an example)
auto lo eth0
iface eth0 inet dhcp
iface lo inet loopback
Save and exit the file
You need to restart networking services
sudo /etc/init.d/networking restart
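For comparison, the same interface configured statically instead of via DHCP would look like this (the addresses are illustrative, matching the server-side examples above):

```
# /etc/network/interfaces -- static alternative to "iface eth0 inet dhcp"
auto eth0
iface eth0 inet static
    address 192.168.1.50
    netmask 255.255.255.0
    gateway 192.168.1.254
```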
How to find DHCP server IP address
You need to use the following commands
sudo dhclient
or
tail -n 15 /var/lib/dhcp3/dhclient.*.leases
You are done !!!

Sunday, February 6, 2011

The End Is Near: Last Blocks of IPv4 Addresses Assigned

The end is near -- for IPv4 addresses, that is. In a public ceremony Thursday, the last blocks of addresses based on the current Internet Protocol were assigned to regional Internet registries (RIR). Those addresses are projected to be given out by the RIRs by September, at which point the future expansion of the Internet will be dependent on a successful transition to the next generation.
Each block contains 16 million addresses, and one block went to each of the five regional organizations, covering Africa, the Asia Pacific region, North America, Europe and the Middle East, and the Latin American and Caribbean region. The hand-off was conducted at a public ceremony in Miami by four international nonprofit groups that collaboratively administer the Internet addressing system.
'Only a Matter of Time'
Raúl Echeberría, chairman of the RIR umbrella organization, the Number Resource Organization, said "it's only a matter of time before the RIRs and Internet service providers must start denying requests for IPv4 address space." He added that "deploying IPv6 is now a requirement, not an option."
Laura DiDio, an analyst with Information Technology Intelligence Corp., said handing out of the final batch is "definitely a wake-up call" for businesses and consumers to get with the transition.
Businesses need to make sure they have a transition plan, she said, including an examination of whether they have applications that are dependent on IPv4.
Three main factors are behind the now-in-sight depletion of IPv4 addresses. One is the explosion in web access from multiple devices for each user, primarily in developed countries. Each of those smartphones, laptops, tablets, desktops and other devices that access the web require a different IP, or Internet Protocol, address. And the demand for device addresses is increasing rapidly, with TVs, game consoles, even automobiles beginning to offer web-browsing capabilities.
'50 Thousand Trillion Trillion Addresses'
A second factor is a rapidly growing user base in developing countries such as Brazil, India and China. Many users in those countries access the web through mobile devices, which means the device-per-user ratio is also likely to rapidly increase.
And third, the Internet is becoming the communications network for non-user-based equipment, such as smart electricity grids, sensors, RFIDs and smart houses.
IPv4 dates back to 1980 and a time when its roughly 4.3 billion addresses seemed like a lot. The new IPv6 utilizes 128-bit addresses, instead of IPv4's 32-bit, and the new IP could offer -- if needed -- a vast number of addresses that should keep humanity happy until the sun burns out.
Some experts say IPv6 could provide four billion addresses for each person on Planet Earth. But Dave Evans, Cisco's chief technologist in its Internet business solutions group, has said the actual number is closer to "50 thousand trillion trillion addresses per person."
In addition to zillions of new addresses, IPv6 brings other improvements, including in routing, network autoconfiguration, better handling of 3G mobile networks, and other advantages.