Friday, May 20, 2011

What is Server Adapter Teaming?

It is the process of combining multiple adapter ports in the same server so that they function as a single virtual adapter (with a single IP address, in most cases) in order to achieve higher (combined) connectivity throughput. For example, it's possible to team two adapters with a single 1 Gbps port each so that they function like a single adapter with a speed of up to 2 Gbps. The two individual links are load balanced so that traffic is distributed evenly between them, and hence the combined maximum throughput (as far as possible) can be obtained. Server adapter teaming is mostly a software-enabled function, and it also provides High Availability (Fault Tolerance) for the server's network connectivity.

What are the benefits of Server Adapter Teaming?

  • Higher throughput – Combined bandwidth of two or more ports/links.
  • Easier upgrades – If a 1 Gbps NIC is not sufficient for a particular server, another NIC of similar capacity can be teamed with it to achieve higher throughput. This might be cheaper and more efficient than buying a 10 Gbps adapter and discarding the 1 Gbps adapter altogether.
  • Load balancing – Traffic going out of the teamed ports is load balanced across the member links for better performance.
  • High Availability – Depending on the configuration, server adapter teaming gives fault tolerance (and hence high availability for the server) in the event of: network switch failures, switch mis-configurations (resulting in a hung or unavailable switch), failure or accidental disconnection of the network cable connecting the server and the switch, PCI slot failures, and server adapter/port failures.
  • Virtual servers – If virtualization is used to create many server images within a single server, adapter teaming provides the higher throughput required in such situations.
  • It's possible (but not recommended) to team adapters from multiple vendors or of different capacities (100 Mbps/1000 Mbps).
  • Multiple ports built into the same adapter/motherboard of a server can be teamed together.
  • If both the adapters and the network switches support the IEEE 802.3ad Link Aggregation standard, both incoming and outgoing traffic (from the server adapter's perspective) can be load balanced. Otherwise, only the outgoing traffic is load balanced.
  • Up to 8 ports can be teamed together with certain server adapters.
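A minimal sketch of the core idea (hypothetical code, not any vendor's actual teaming driver): outgoing traffic is spread round-robin over the member ports that are up, and when a member fails, its traffic moves to the surviving ports:

```python
class AdapterTeam:
    """Toy model of a teamed virtual adapter: load balancing + failover."""

    def __init__(self, ports):
        # Each member port name maps to an 'up' flag.
        self.ports = {p: True for p in ports}
        self._next = 0

    def fail(self, port):
        """Simulate a cable pull, switch fault, or NIC failure."""
        self.ports[port] = False

    def select_port(self):
        """Round-robin over the ports that are still up."""
        active = [p for p, up in self.ports.items() if up]
        if not active:
            raise RuntimeError("all team members are down")
        port = active[self._next % len(active)]
        self._next += 1
        return port

team = AdapterTeam(["eth0", "eth1"])
print([team.select_port() for _ in range(4)])  # ['eth0', 'eth1', 'eth0', 'eth1']
team.fail("eth0")
print(team.select_port())  # 'eth1' -- traffic fails over to the surviving port
```

Real implementations make the same two decisions (which member carries a frame; what to do when a member dies), just at the driver level and per frame or per flow.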
[Figure: server adapter teaming and link aggregation configurations] Some configurations for server adapter teaming are shown above.
When you consider the scenario in ‘C’, a single adapter with four ports (or) multiple adapters (2×2 ports; 4×1 port) can be teamed together and connected to the same switch. This achieves combined throughput of the four ports and high availability (against the failure of any single route) but does not account for switch failure. If the switch fails, all the links are down and cannot connect to the network.
Now consider the scenario in ‘A’. Among the four adapter ports available, two of them connect to one switch and two more connect to another switch. This way, even if one switch fails, the traffic continues to flow through the other.
In the scenario shown in ‘B’, each set of teamed ports connects to an individual switch and carries the traffic meant for a different subnet. This provides better network segmentation and dedicated links for each subnet. The multiple adapter ports connecting to each switch (and each subnet) are teamed together for higher performance.
In most cases, multiple adapters/adapter ports advertise a single IP address (and a single MAC address) when they are teamed together, so the team behaves like a single virtual adapter with a higher (combined) port capacity. The scenarios shown above are just examples; adapter teaming configurations are implemented differently by different vendors.

Thursday, May 5, 2011

Free DNS service adds IPv6 support

OpenDNS, one of the Internet's most popular free DNS services, is now offering production-grade support for IPv6, an upgrade to the Internet's main communications protocol known as IPv4.
OpenDNS CEO David Ulevitch says he is launching the IPv6 service now to help website operators and networking firms prepare for World IPv6 Day, a 24-hour test of IPv6 that is scheduled for June 8. Sponsored by the Internet Society, World IPv6 Day has attracted more than 160 participants, including some of the Internet's leading content providers, such as Google, Yahoo and Facebook.
"World IPv6 Day ... is all about getting organizations to make their resources available over IPv6," Ulevitch says. "It's a flag day. It's a point that you can use to convince your boss that IPv6 is worthwhile. ... It's not really for the end users; it's really for the network administrators, the IT guys, to figure out that it's not that hard to do IPv6."
OpenDNS says it will participate in World IPv6 Day by having all of its Web sites support IPv6 by default on June 8.
BACKGROUND: Is free DNS a good deal for business?
OpenDNS said it was the first to offer a free DNS recursive service that supports IPv6. Recursive DNS services allow Internet users to find websites by typing in their domain names and pulling up the corresponding IP numbers. In contrast, authoritative DNS services allow website operators to publish their domain names and corresponding IP addresses to Internet users.
"We are not aware of anybody else doing" a free recursive DNS service that supports IPv6, Ulevitch says. "There's no financial angle; we just want to encourage people to use an IPv6 DNS service."
Until now, network engineers experimenting with IPv6 had to encapsulate their traffic to traverse IPv4-based DNS servers.
"People want to experiment with IPv6, and we have DNS set up to support them," Ulevitch says. "We are trying to help push traffic and resources over to IPv6. If you actually want to reach IPv6-only resources, you need to use DNS resources that you can talk to over IPv6."
The IPv6 addresses for the OpenDNS IPv6 DNS Sandbox are: 2620:0:ccc::2 and 2620:0:ccd::2.
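For readers who want to poke at these resolvers, the sketch below hand-builds a minimal DNS query and sends it over IPv6 UDP. The packet layout follows the standard DNS wire format; the `query_over_ipv6` helper name is illustrative, and actually sending the query of course requires working IPv6 connectivity:

```python
import socket
import struct

def build_dns_query(name, qtype=28, qid=0x1234):
    """Build a minimal DNS query packet (qtype 28 = AAAA, an IPv6 address record)."""
    # Header: ID, flags (0x0100 = RD, ask the server to recurse), QDCOUNT=1, rest 0.
    header = struct.pack(">HHHHHH", qid, 0x0100, 1, 0, 0, 0)
    # QNAME: length-prefixed labels terminated by a zero byte.
    qname = b"".join(
        bytes([len(label)]) + label.encode("ascii")
        for label in name.split(".")
    ) + b"\x00"
    # QTYPE and QCLASS (1 = IN).
    return header + qname + struct.pack(">HH", qtype, 1)

def query_over_ipv6(name, server="2620:0:ccc::2"):
    """Send the query to one of the sandbox resolvers over IPv6 UDP."""
    sock = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    sock.settimeout(3)
    sock.sendto(build_dns_query(name), (server, 53))
    return sock.recvfrom(512)[0]  # raw DNS response bytes
```

In practice you would use `dig AAAA example.com @2620:0:ccc::2` or a DNS library rather than raw sockets; the point is that the transaction itself travels over IPv6.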
Internet companies like OpenDNS are promoting IPv6 because the Internet is running out of IPv4 addresses.
IPv4 uses 32-bit addresses and can support 4.3 billion devices connected directly to the Internet. Most IPv4 addresses have been handed out. The free pool of unassigned IPv4 addresses was depleted in February, and the Asia Pacific regional Internet registry said in April that it has doled out all but its last 16.7 million IPv4 addresses, which are being held in reserve for startup network operators.

IPv6, on the other hand, uses 128-bit addresses and supports a virtually unlimited number of devices -- 2 to the 128th power. But despite its promise of an endless supply of address space, IPv6 represents only a tiny fraction -- less than 0.03% -- of Internet traffic.
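The arithmetic behind those figures is straightforward:

```python
# Address-space arithmetic behind the IPv4 exhaustion story.
ipv4_total = 2 ** 32    # 32-bit addresses
ipv6_total = 2 ** 128   # 128-bit addresses

print(ipv4_total)            # 4294967296 -- the ~4.3 billion in the article
print(f"{ipv6_total:.1e}")   # 3.4e+38 -- "virtually unlimited"

# APNIC's "last 16.7 million" is one /8 block: 2^(32 - 8) addresses.
last_slash8 = 2 ** 24
print(last_slash8)           # 16777216
```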
Ulevitch said the new OpenDNS IPv6 service would allow Internet users to access IPv6-only websites, which he admits are a "very, very small" portion of Internet resources.
OpenDNS says it has more than 20 million users globally, representing 1% of all Internet users. The company's free service is popular with U.S. public school systems, while its paid enterprise version has attracted corporations of all sizes.


Tuesday, May 3, 2011

India Enters Supercomputer Race, Builds 220 TeraFLOPS Machine

The 220 TeraFLOPS supercomputer will be used by space scientists for solving complex aerospace problems.
BANGALORE, INDIA: The Indian Space Research Organisation (ISRO) announced on Monday that it has built what is India's fastest supercomputer in terms of theoretical peak performance: 220 TeraFLOPS (220 trillion floating-point operations per second).

The supercomputing facility, named the Satish Dhawan Supercomputing Facility, is located at the Vikram Sarabhai Space Centre (VSSC), Thiruvananthapuram. The new Graphics Processing Unit (GPU) based 220 TeraFLOPS supercomputer, named "SAGA-220" (Supercomputer for Aerospace with GPU Architecture-220 TeraFLOPS), is being used by space scientists for solving complex aerospace problems. SAGA-220 was inaugurated by Dr K Radhakrishnan, Chairman, ISRO, today at VSSC.
SAGA-220 was fully designed and built by the Vikram Sarabhai Space Centre using commercially available hardware, open source software components and in-house developments. The system uses 400 NVIDIA Tesla 2070 GPUs and 400 Intel Quad Core Xeon CPUs supplied by Wipro, connected by a high-speed interconnect. It cost Rs 14 crore to build. With each GPU providing 500 GigaFLOPS and each CPU 50 GigaFLOPS, the theoretical peak performance of the system amounts to 220 TeraFLOPS.
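The quoted peak figure checks out from the component counts given above:

```python
# Theoretical peak: 400 GPUs x 500 GFLOPS + 400 CPUs x 50 GFLOPS.
gpu_gflops = 400 * 500   # NVIDIA Tesla GPU contribution
cpu_gflops = 400 * 50    # quad-core Xeon CPU contribution
total_teraflops = (gpu_gflops + cpu_gflops) / 1000

print(total_teraflops)   # 220.0
```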
The GPU-based system offers a significant advantage over a conventional CPU-based system in terms of cost, power and space requirements, said ISRO. It added that the system is environmentally green, consuming only 150 kW of power, and can easily be scaled to many PetaFLOPS (1 PetaFLOPS = 1000 TeraFLOPS).

Monday, May 2, 2011

100 Gig Ethernet. Where It's At. Where It's Going

The IEEE 802.3ba standard for 100 Gigabit Ethernet (100 GbE) was approved in June of 2010 and has been gaining adoption ever since. Multiple vendors including Juniper, Cisco, Brocade and Alcatel-Lucent have all announced 100 GbE solutions.
Major carriers -- including Verizon and AT&T -- are adopting 100 GbE as vendors ramp up their product offerings.
"An exact number is not immediately available, but Juniper has sold several dozen 100 GbE blades for the T1600, including some to Verizon, which recently announced plans for 100 GbE commercial service on select links in the U.S. and in Europe," Luc Ceuppens, VP of product marketing for Juniper Networks, told InternetNews.com.
As to why carriers are choosing to deploy 100 GbE, it's often a question of the cost and complexity of managing link aggregation. Many carriers today rely on link aggregation strategies for their 10 Gigabit Ethernet links.
"Most customers will need to make a decision of continuing to go down the path of 10 gig link aggregation to address bandwidth needs, versus jumping over to 100 gig," Ken Cheng, vice president, Service Provider Products at Brocade told InternetNews.com. "There are pros and cons to both approaches."
Cheng noted that fiber spectrum exhaustion is one reason why 100 GbE has an advantage: instead of putting a 10 gig stream of traffic onto a fiber, a provider can put 10 times that amount on the same fiber, which makes it more cost effective.
"So even though customers tend to think of 100 GbE as being expensive, when you take into consideration the access to fiber, 100 GbE is quite reasonable and economical," Cheng said.
Cheng added that service providers are seeing larger network flows that live longer. Networks used to be dominated by small packets, he noted, but with video and large file transfers there are now larger, longer-lived flows.
"This is the type of traffic that is not easily served by a network based on 10x10 gig connectivity," Cheng said. "With larger flows, it's harder to fit into a LAG (Link Aggregation Group)."
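The LAG limitation Cheng describes comes from how aggregation groups assign traffic: a hash of a flow's addresses and ports picks one member link, so that every packet of the flow stays in order on a single 10G link. A toy illustration of that pinning effect (the hash shown is illustrative, not any vendor's actual algorithm):

```python
import hashlib

def lag_member(src_ip, dst_ip, src_port, dst_port, proto, members=10):
    """Hash a flow's 5-tuple to pick one member link of a 10x10G LAG."""
    key = f"{src_ip}|{dst_ip}|{src_port}|{dst_port}|{proto}".encode()
    return int.from_bytes(hashlib.sha256(key).digest()[:4], "big") % members

# Every packet of this single large video flow hashes to the same member,
# so the flow can never use more than 10G of the LAG's 100G aggregate.
flow = ("198.51.100.7", "203.0.113.9", 40000, 443, "tcp")
links_used = {lag_member(*flow) for _ in range(1000)}
print(len(links_used))  # 1 -- the flow is pinned to one 10G link
```

A single 100 GbE link has no such per-flow ceiling, which is why large flows favor it over 10x10 gig aggregation.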
All that said, Cheng noted that on a capital expenditure basis, the cost of 100 GbE is currently sometimes higher than a 10x10 gig deployment.
"As this is the first generation of 100 GbE, the equipment costs can be higher," Cheng said. "Over time, with innovation and economies of scale, the costs will come down."

100 GbE Standards

Cisco is also pushing its 100 GbE capabilities, which are a key part of its CRS-3 core routing platform. Cisco, however, doesn't see its solution as being the same as what rival Juniper is offering.
"Other vendors such as Juniper might be touting 100 GbE, but if you really look at what they're doing, it's not 100 GbE in its true form," Stephen Liu, senior manager of service provider marketing for routing and switching solutions at Cisco, told InternetNews.com.
Liu said that the Juniper 100 GbE implementation is a 2x50 Gbps approach.
"There are real downsides to 2x50," Liu said. "It may sound the same, but it's not necessarily the same."
Liu also alleged that the Juniper 100 GbE implementation is not interoperable with 100 GbE solutions from other vendors.
Juniper disagrees.
"Juniper’s 100 GbE is fully standards compliant," Luc Ceuppens, VP of product marketing for Juniper Networks told InternetNews.com. "We have also done full interoperability testing with solutions from all of the leading vendors."
Ceuppens also addressed the 2x50 GbE deployment question.
"Juniper’s 100 GbE for the T1600 is logically 2x50 GbE, but it is not an issue for any of our customers," Ceuppens said.