Friday 18 July 2014

Mobile Phones


There are three things most people have on them at all times: keys, wallet, and phone. Considering how integral phones are to our lives, it’s strange to think how quickly they’ve gone from being only landlines, to the size of backpacks, to thinner than our wallets (and smarter than our old computers). Here’s a tour through the history of phones – a revealing look at the past and future of one of our most celebrated technological companions:
Mobile phones have proved themselves to be one of the greatest gifts to mankind. They have become an indispensable part of our lives. Going back in time, owning a mobile phone was confined to members of the affluent class; thanks to cost-cutting techniques and innovations over the years, mobile phones are now affordable for everyone. With a plethora of mobile phone models flooding the market, they are not just catering to needs but acting as status symbols for many. The origin of this gadget is quite interesting. From bulky handsets as long and heavy as one’s forearm to ultra-thin, tech-savvy devices, mobile phones have come a long way.

It all started with basic telephony. Alexander Graham Bell was the first to patent the telephone, in 1876, using technology developed from equipment designed for the telegraph. Calls were connected with the help of operators. A pillar of mobile telephony came into existence when Charles Stevenson developed radio communication in the early 1890s to keep in contact with offshore lighthouses. Marconi transmitted signals over a distance of 2 km in 1894, and Fessenden successfully broadcast music by radio in 1906. The next development was the merger of radio and telephone technology. In 1926, first-class passengers on trains running from Berlin to Hamburg used the technology. Radio telephones were also used for air traffic safety as well as in passenger airplanes, and during the Second World War, German tanks made great use of them too.

Two-way radios were an ancestor of the mobile phone. These radios, also known as mobile rigs, were fixed in police cruisers, ambulances, and taxicabs before the advent of handheld mobile phones. Since the mobile radios were not connected to the telephone network, one could not dial them from a home phone. Slowly, this technology gained popularity among mobile radio users.
Later versions of these radio phones incorporated cigarette lighter plugs and were called bag phones. Fixed in vehicles, these gadgets were used either as portable two-way radios or as mobile phones. Then, in the 1940s, Motorola brought new developments to mobile radio, and the Walkie-Talkie was born. Large, bulky, and battery operated, the handheld Handie-Talkie soon found its way to the US military. Another turning point in the history of mobile phones came when the base station came into being: engineers from Bell Labs developed base stations in 1947. The same year, Douglas H. Ring and W. Rae Young proposed hexagonal cells for these mobile phones. Another Bell Labs engineer, Porter, suggested positioning the cell towers at the corners of the hexagons instead of at their centers. He also argued for directional antennas, transmitting and receiving signals in three directions into the adjacent hexagonal cells.

In 1956, Ericsson released the earliest fully automatic cellular phone system, called MTA, in Sweden. Though it operated automatically, its bulkiness meant it could not really hold users’ interest for long: the phone weighed around 40 kg. An improved and lighter version, known as MTB and using DTMF signaling, was introduced in 1965. Meanwhile, in 1957, Leonid Kupriyanovich developed an experimental model of a wearable mobile phone in Moscow, operating with the help of a base station. This young engineer had earlier developed the radio phone known as LK-1. The battery of his wearable mobile phone lasted around 20-30 hours; weighing 3 kg, it worked within a distance of 20 to 30 km from the station. He later patented the mobile phone and also came up with a pocket version weighing just 0.5 kg.

An automatic pocket mobile phone was developed again in 1966, in Bulgaria. Called the RAT-0.5, the phone worked with a base station known as the RATZ-10. Further developments in cellular phones were witnessed in 1967, when it was decided that every mobile phone would be served by a single base station throughout its life. Though this was not a novel concept, tying a phone to one base station broke the continuity of automatic service as users moved. Three years later, in 1970, the engineer Amos E. Joel invented automatic call handoff technology. This system allowed mobile phones to pass through cell areas during a phone call without any loss of conversation, and from then on mobile users could make calls without interruption.

In 1971, AT&T proposed a mobile phone service, which the FCC later approved. Another development in the history of mobile phones was the successful launch of the ARP network in Finland. It was one of the earliest commercial cellular phone networks and is known as a zero-generation mobile network.

The invention of the mobile phone that most closely resembles today’s handsets is credited to Martin Cooper, an employee and researcher at Motorola. He developed the first such cellular phone, the Motorola DynaTAC, in 1973. At 5 inches wide and 9 inches long, this 2.5-pound phone carried around 30 circuit boards. With a recharge time of around 10 hours and a talk time of 35 minutes, it gave users a comfortable talking experience. One could listen, dial, and talk on this mobile phone; what was missing was a display screen. With passing time, refinements were made and mobile phones improved by leaps and bounds.

With the introduction of the Global System for Mobile Communications (GSM), the radio spectrum could be used effectively. The technology offered great voice quality and international roaming, along with compatibility with ISDN systems. To provide coverage in remote areas that GSM and other cellular networks could not reach, satellite phones came into being, with base stations carried on geostationary satellites. Now there is no place on the planet that is untouched by the mobile phone.

The Internet


The Internet has revolutionized the computer and communications world like nothing before. The invention of the telegraph, telephone, radio, and computer set the stage for this unprecedented integration of capabilities. The Internet is at once a world-wide broadcasting capability, a mechanism for information dissemination, and a medium for collaboration and interaction between individuals and their computers without regard for geographic location. The Internet represents one of the most successful examples of the benefits of sustained investment and commitment to research and development of information infrastructure. Beginning with the early research in packet switching, the government, industry and academia have been partners in evolving and deploying this exciting new technology. Today, terms like "bleiner@computer.org" and "http://www.acm.org" trip lightly off the tongue of the random person on the street.

This is intended to be a brief, necessarily cursory and incomplete history. Much material currently exists about the Internet, covering history, technology, and usage. A trip to almost any bookstore will find shelves of material written about the Internet. In this paper, several of us involved in the development and evolution of the Internet share our views of its origins and history. This history revolves around four distinct aspects. There is the technological evolution that began with early research on packet switching and the ARPANET (and related technologies), and where current research continues to expand the horizons of the infrastructure along several dimensions, such as scale, performance, and higher-level functionality. There is the operations and management aspect of a global and complex operational infrastructure. There is the social aspect, which resulted in a broad community of Internauts working together to create and evolve the technology. And there is the commercialization aspect, resulting in an extremely effective transition of research results into a broadly deployed and available information infrastructure.

The Internet today is a widespread information infrastructure, the initial prototype of what is often called the National (or Global or Galactic) Information Infrastructure. Its history is complex and involves many aspects - technological, organizational, and community. And its influence reaches not only to the technical fields of computer communications but throughout society as we move toward increasing use of online tools to accomplish electronic commerce, information acquisition, and community operations.
Origins of the Internet
The first recorded description of the social interactions that could be enabled through networking was a series of memos written by J.C.R. Licklider of MIT in August 1962 discussing his "Galactic Network" concept. He envisioned a globally interconnected set of computers through which everyone could quickly access data and programs from any site. In spirit, the concept was very much like the Internet of today. Licklider was the first head of the computer research program at DARPA, starting in October 1962. While at DARPA he convinced his successors at DARPA, Ivan Sutherland, Bob Taylor, and MIT researcher Lawrence G. Roberts, of the importance of this networking concept.

Leonard Kleinrock at MIT published the first paper on packet switching theory in July 1961 and the first book on the subject in 1964. Kleinrock convinced Roberts of the theoretical feasibility of communications using packets rather than circuits, which was a major step along the path towards computer networking. The other key step was to make the computers talk together. To explore this, in 1965 working with Thomas Merrill, Roberts connected the TX-2 computer in Mass. to the Q-32 in California with a low speed dial-up telephone line creating the first (however small) wide-area computer network ever built. The result of this experiment was the realization that the time-shared computers could work well together, running programs and retrieving data as necessary on the remote machine, but that the circuit switched telephone system was totally inadequate for the job. Kleinrock's conviction of the need for packet switching was confirmed.

In late 1966 Roberts went to DARPA to develop the computer network concept and quickly put together his plan for the "ARPANET", publishing it in 1967. At the conference where he presented the paper, there was also a paper on a packet network concept from the UK by Donald Davies and Roger Scantlebury of NPL. Scantlebury told Roberts about the NPL work as well as that of Paul Baran and others at RAND. The RAND group had written a paper on packet switching networks for secure voice in the military in 1964. It happened that the work at MIT (1961-1967), at RAND (1962-1965), and at NPL (1964-1967) had all proceeded in parallel without any of the researchers knowing about the other work. The word "packet" was adopted from the work at NPL and the proposed line speed to be used in the ARPANET design was upgraded from 2.4 kbps to 50 kbps.

In August 1968, after Roberts and the DARPA funded community had refined the overall structure and specifications for the ARPANET, an RFQ was released by DARPA for the development of one of the key components, the packet switches called Interface Message Processors (IMP's). The RFQ was won in December 1968 by a group headed by Frank Heart at Bolt Beranek and Newman (BBN). As the BBN team worked on the IMP's with Bob Kahn playing a major role in the overall ARPANET architectural design, the network topology and economics were designed and optimized by Roberts working with Howard Frank and his team at Network Analysis Corporation, and the network measurement system was prepared by Kleinrock's team at UCLA.

Due to Kleinrock's early development of packet switching theory and his focus on analysis, design and measurement, his Network Measurement Center at UCLA was selected to be the first node on the ARPANET. All this came together in September 1969 when BBN installed the first IMP at UCLA and the first host computer was connected. Doug Engelbart's project on "Augmentation of Human Intellect" (which included NLS, an early hypertext system) at Stanford Research Institute (SRI) provided a second node. SRI supported the Network Information Center, led by Elizabeth (Jake) Feinler and including functions such as maintaining tables of host name to address mapping as well as a directory of the RFC's.

One month later, when SRI was connected to the ARPANET, the first host-to-host message was sent from Kleinrock's laboratory to SRI. Two more nodes were added at UC Santa Barbara and University of Utah. These last two nodes incorporated application visualization projects, with Glen Culler and Burton Fried at UCSB investigating methods for display of mathematical functions using storage displays to deal with the problem of refresh over the net, and Robert Taylor and Ivan Sutherland at Utah investigating methods of 3-D representations over the net. Thus, by the end of 1969, four host computers were connected together into the initial ARPANET, and the budding Internet was off the ground. Even at this early stage, it should be noted that the networking research incorporated both work on the underlying network and work on how to utilize the network. This tradition continues to this day.

Computers were added quickly to the ARPANET during the following years, and work proceeded on completing a functionally complete Host-to-Host protocol and other network software. In December 1970 the Network Working Group (NWG) working under S. Crocker finished the initial ARPANET Host-to-Host protocol, called the Network Control Protocol (NCP). As the ARPANET sites completed implementing NCP during the period 1971-1972, the network users finally could begin to develop applications.

In October 1972, Kahn organized a large, very successful demonstration of the ARPANET at the International Computer Communication Conference (ICCC). This was the first public demonstration of this new network technology. It was also in 1972 that the initial "hot" application, electronic mail, was introduced. In March Ray Tomlinson at BBN wrote the basic email message send and read software, motivated by the need of the ARPANET developers for an easy coordination mechanism. In July, Roberts expanded its utility by writing the first email utility program to list, selectively read, file, forward, and respond to messages. From there email took off as the largest network application for over a decade. This was a harbinger of the kind of activity we see on the World Wide Web today, namely, the enormous growth of all kinds of "people-to-people" traffic.
The Initial Internetting Concepts

The original ARPANET grew into the Internet. The Internet was based on the idea that there would be multiple independent networks of rather arbitrary design, beginning with the ARPANET as the pioneering packet switching network, but soon to include packet satellite networks, ground-based packet radio networks and other networks. The Internet as we now know it embodies a key underlying technical idea, namely that of open architecture networking. In this approach, the choice of any individual network technology was not dictated by a particular network architecture but rather could be selected freely by a provider and made to interwork with the other networks through a meta-level "Internetworking Architecture". Up until that time there was only one general method for federating networks. This was the traditional circuit switching method where networks would interconnect at the circuit level, passing individual bits on a synchronous basis along a portion of an end-to-end circuit between a pair of end locations. Recall that Kleinrock had shown in 1961 that packet switching was a more efficient switching method. Along with packet switching, special purpose interconnection arrangements between networks were another possibility. While there were other limited ways to interconnect different networks, they required that one be used as a component of the other, rather than acting as a peer of the other in offering end-to-end service.

In an open-architecture network, the individual networks may be separately designed and developed and each may have its own unique interface which it may offer to users and/or other providers, including other Internet providers. Each network can be designed in accordance with the specific environment and user requirements of that network. There are generally no constraints on the types of network that can be included or on their geographic scope, although certain pragmatic considerations will dictate what makes sense to offer.

The idea of open-architecture networking was first introduced by Kahn shortly after having arrived at DARPA in 1972. This work was originally part of the packet radio program, but subsequently became a separate program in its own right. At the time, the program was called "Internetting". Key to making the packet radio system work was a reliable end-end protocol that could maintain effective communication in the face of jamming and other radio interference, or withstand intermittent blackout such as caused by being in a tunnel or blocked by the local terrain. Kahn first contemplated developing a protocol local only to the packet radio network, since that would avoid having to deal with the multitude of different operating systems, and continuing to use NCP.

However, NCP did not have the ability to address networks (and machines) further downstream than a destination IMP on the ARPANET and thus some change to NCP would also be required. (The assumption was that the ARPANET was not changeable in this regard). NCP relied on ARPANET to provide end-to-end reliability. If any packets were lost, the protocol (and presumably any applications it supported) would come to a grinding halt. In this model NCP had no end-end host error control, since the ARPANET was to be the only network in existence and it would be so reliable that no error control would be required on the part of the hosts. Thus, Kahn decided to develop a new version of the protocol which could meet the needs of an open-architecture network environment. This protocol would eventually be called the Transmission Control Protocol/Internet Protocol (TCP/IP). While NCP tended to act like a device driver, the new protocol would be more like a communications protocol.

Four ground rules were critical to Kahn's early thinking:

    Each distinct network would have to stand on its own and no internal changes could be required to any such network to connect it to the Internet.
    Communications would be on a best effort basis. If a packet didn't make it to the final destination, it would shortly be retransmitted from the source.
    Black boxes would be used to connect the networks; these would later be called gateways and routers. There would be no information retained by the gateways about the individual flows of packets passing through them, thereby keeping them simple and avoiding complicated adaptation and recovery from various failure modes.
    There would be no global control at the operations level.

Other key issues that needed to be addressed were:

    1. Algorithms to prevent lost packets from permanently disabling communications and to enable them to be successfully retransmitted from the source.
    2. Providing for host-to-host "pipelining" so that multiple packets could be en route from source to destination at the discretion of the participating hosts, if the intermediate networks allowed it.
    3. Gateway functions to allow gateways to forward packets appropriately. This included interpreting IP headers for routing, handling interfaces, breaking packets into smaller pieces if necessary, etc.
    4. The need for end-end checksums, reassembly of packets from fragments, and detection of duplicates, if any (a sketch of such a checksum follows this list).
    5. The need for global addressing.
    6. Techniques for host-to-host flow control.
    7. Interfacing with the various operating systems.

    There were also other concerns, such as implementation efficiency and internetwork performance, but these were secondary considerations at first.
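The end-to-end check mentioned in point 4 is easiest to see in code. Below is a minimal Python sketch of the ones'-complement Internet checksum that was later standardized in RFC 1071; it is offered only as an illustration of the idea, not as the exact algorithm of the early TCP design.

    def internet_checksum(data: bytes) -> int:
        """16-bit ones'-complement checksum in the style of RFC 1071."""
        if len(data) % 2:                # pad odd-length data with a zero byte
            data += b"\x00"
        total = 0
        for i in range(0, len(data), 2):
            total += (data[i] << 8) | data[i + 1]      # add each 16-bit word
            total = (total & 0xFFFF) + (total >> 16)   # fold any carry back in
        return ~total & 0xFFFF           # ones' complement of the running sum

    # The sender transmits the checksum along with the packet; the receiver
    # recomputes it over the received bytes and discards the packet on mismatch.
    print(hex(internet_checksum(b"example payload")))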

Kahn began work on a communications-oriented set of operating system principles while at BBN and documented some of his early thoughts in an internal BBN memorandum entitled "Communications Principles for Operating Systems". At this point he realized it would be necessary to learn the implementation details of each operating system to have a chance to embed any new protocols in an efficient way. Thus, in the spring of 1973, after starting the internetting effort, he asked Vint Cerf (then at Stanford) to work with him on the detailed design of the protocol. Cerf had been intimately involved in the original NCP design and development and already had the knowledge about interfacing to existing operating systems. So armed with Kahn's architectural approach to the communications side and with Cerf's NCP experience, they teamed up to spell out the details of what became TCP/IP.

The give and take was highly productive and the first written version of the resulting approach was distributed at a special meeting of the International Network Working Group (INWG) which had been set up at a conference at Sussex University in September 1973. Cerf had been invited to chair this group and used the occasion to hold a meeting of INWG members who were heavily represented at the Sussex Conference.

Some basic approaches emerged from this collaboration between Kahn and Cerf:

    Communication between two processes would logically consist of a very long stream of bytes (they called them octets). The position of any octet in the stream would be used to identify it.
    Flow control would be done by using sliding windows and acknowledgments (acks). The destination could select when to acknowledge and each ack returned would be cumulative for all packets received to that point.
    It was left open as to exactly how the source and destination would agree on the parameters of the windowing to be used. Defaults were used initially.
    Although Ethernet was under development at Xerox PARC at that time, the proliferation of LANs was not envisioned at the time, much less PCs and workstations. The original model was national level networks like the ARPANET, of which only a relatively small number were expected to exist. Thus a 32 bit IP address was used, of which the first 8 bits signified the network and the remaining 24 bits designated the host on that network (see the sketch below). This assumption, that 256 networks would be sufficient for the foreseeable future, was clearly in need of reconsideration when LANs began to appear in the late 1970s.
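To make the 8-bit network / 24-bit host split concrete, here is a small illustrative sketch of my own (not taken from the original design documents), using Python's standard ipaddress module:

    import ipaddress

    def split_original_format(addr: str) -> tuple:
        """Split a 32-bit IPv4 address into the original 8-bit network
        and 24-bit host fields assumed by the early TCP design."""
        value = int(ipaddress.IPv4Address(addr))
        network = value >> 24       # top 8 bits: at most 256 networks
        host = value & 0xFFFFFF     # bottom 24 bits: about 16.7 million hosts
        return network, host

    print(split_original_format("10.1.2.3"))   # -> (10, 66051)

With only 8 bits for the network field, 256 networks is indeed the hard limit the text describes.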

The original Cerf/Kahn paper on the Internet described one protocol, called TCP, which provided all the transport and forwarding services in the Internet. Kahn had intended that the TCP protocol support a range of transport services, from the totally reliable sequenced delivery of data (virtual circuit model) to a datagram service in which the application made direct use of the underlying network service, which might imply occasional lost, corrupted or reordered packets. However, the initial effort to implement TCP resulted in a version that only allowed for virtual circuits. This model worked fine for file transfer and remote login applications, but some of the early work on advanced network applications, in particular packet voice in the 1970s, made clear that in some cases packet losses should not be corrected by TCP, but should be left to the application to deal with. This led to a reorganization of the original TCP into two protocols, the simple IP which provided only for addressing and forwarding of individual packets, and the separate TCP, which was concerned with service features such as flow control and recovery from lost packets. For those applications that did not want the services of TCP, an alternative called the User Datagram Protocol (UDP) was added in order to provide direct access to the basic service of IP.
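That division of labor survives in today's socket APIs, where an application still chooses between TCP's reliable byte stream and UDP's bare datagram service. A minimal Python sketch using modern standard-library calls, shown only to illustrate the distinction:

    import socket

    # Reliable, connection-oriented byte stream (TCP): the transport layer
    # handles retransmission, ordering, and flow control for the application.
    tcp_sock = socket.socket(socket.AF_INET, socket.SOCK_STREAM)

    # Bare datagrams (UDP): the application sends individual packets and must
    # cope with loss, duplication, or reordering itself.
    udp_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)

    print(tcp_sock.type, udp_sock.type)
    tcp_sock.close()
    udp_sock.close()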

A major initial motivation for both the ARPANET and the Internet was resource sharing - for example allowing users on the packet radio networks to access the time sharing systems attached to the ARPANET. Connecting the two together was far more economical than duplicating these very expensive computers. However, while file transfer and remote login (Telnet) were very important applications, electronic mail has probably had the most significant impact of the innovations from that era. Email provided a new model of how people could communicate with each other, and changed the nature of collaboration, first in the building of the Internet itself (as is discussed below) and later for much of society.

There were other applications proposed in the early days of the Internet, including packet based voice communication (the precursor of Internet telephony), various models of file and disk sharing, and early "worm" programs that showed the concept of agents (and, of course, viruses). A key concept of the Internet is that it was not designed for just one application, but as a general infrastructure on which new applications could be conceived, as illustrated later by the emergence of the World Wide Web. It is the general purpose nature of the service provided by TCP and IP that makes this possible.
Proving the Ideas

DARPA let three contracts to Stanford (Cerf), BBN (Ray Tomlinson) and UCL (Peter Kirstein) to implement TCP/IP (it was simply called TCP in the Cerf/Kahn paper but contained both components). The Stanford team, led by Cerf, produced the detailed specification and within about a year there were three independent implementations of TCP that could interoperate.

This was the beginning of long term experimentation and development to evolve and mature the Internet concepts and technology. Beginning with the first three networks (ARPANET, Packet Radio, and Packet Satellite) and their initial research communities, the experimental environment has grown to incorporate essentially every form of network and a very broad-based research and development community. [REK78] With each expansion has come new challenges.

The early implementations of TCP were done for large time sharing systems such as Tenex and TOPS 20. When desktop computers first appeared, it was thought by some that TCP was too big and complex to run on a personal computer. David Clark and his research group at MIT set out to show that a compact and simple implementation of TCP was possible. They produced an implementation, first for the Xerox Alto (the early personal workstation developed at Xerox PARC) and then for the IBM PC. That implementation was fully interoperable with other TCPs, but was tailored to the application suite and performance objectives of the personal computer, and showed that workstations, as well as large time-sharing systems, could be a part of the Internet. In 1976, Kleinrock published the first book on the ARPANET. It included an emphasis on the complexity of protocols and the pitfalls they often introduce. This book was influential in spreading the lore of packet switching networks to a very wide community.

Widespread development of LANs, PCs and workstations in the 1980s allowed the nascent Internet to flourish. Ethernet technology, developed by Bob Metcalfe at Xerox PARC in 1973, is now probably the dominant network technology in the Internet and PCs and workstations the dominant computers. This change from having a few networks with a modest number of time-shared hosts (the original ARPANET model) to having many networks has resulted in a number of new concepts and changes to the underlying technology. First, it resulted in the definition of three network classes (A, B, and C) to accommodate the range of networks. Class A represented large national scale networks (small number of networks with large numbers of hosts); Class B represented regional scale networks; and Class C represented local area networks (large number of networks with relatively few hosts).
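For illustration, the class of an address can be read off its first octet. The ranges below are the standard classful boundaries (general networking knowledge, not something stated in this article), sketched in Python:

    def address_class(addr: str) -> str:
        """Return the classful category of an IPv4 address from its first octet."""
        first = int(addr.split(".")[0])
        if first < 128:
            return "A"   # 8-bit network, 24-bit host: a few very large networks
        if first < 192:
            return "B"   # 16-bit network, 16-bit host: regional-scale networks
        if first < 224:
            return "C"   # 24-bit network, 8-bit host: many small local networks
        return "other (multicast or reserved)"

    for a in ("18.0.0.1", "172.16.5.4", "192.168.1.10"):
        print(a, "->", address_class(a))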

A major shift occurred as a result of the increase in scale of the Internet and its associated management issues. To make it easy for people to use the network, hosts were assigned names, so that it was not necessary to remember the numeric addresses. Originally, there were a fairly limited number of hosts, so it was feasible to maintain a single table of all the hosts and their associated names and addresses. The shift to having a large number of independently managed networks (e.g., LANs) meant that having a single table of hosts was no longer feasible, and the Domain Name System (DNS) was invented by Paul Mockapetris of USC/ISI. The DNS permitted a scalable distributed mechanism for resolving hierarchical host names (e.g. www.acm.org) into an Internet address.
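In modern code that resolution step is a single library call; the resolver walks the hierarchical DNS namespace on the program's behalf. A minimal Python sketch using the standard library and the example name mentioned above:

    import socket

    # Ask the system resolver to translate a hierarchical host name into an
    # IP address via DNS.
    hostname = "www.acm.org"
    address = socket.gethostbyname(hostname)
    print(f"{hostname} resolves to {address}")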

The increase in the size of the Internet also challenged the capabilities of the routers. Originally, there was a single distributed algorithm for routing that was implemented uniformly by all the routers in the Internet. As the number of networks in the Internet exploded, this initial design could not expand as necessary, so it was replaced by a hierarchical model of routing, with an Interior Gateway Protocol (IGP) used inside each region of the Internet, and an Exterior Gateway Protocol (EGP) used to tie the regions together. This design permitted different regions to use a different IGP, so that different requirements for cost, rapid reconfiguration, robustness and scale could be accommodated. Not only the routing algorithm, but the size of the addressing tables, stressed the capacity of the routers. New approaches for address aggregation, in particular classless inter-domain routing (CIDR), have recently been introduced to control the size of router tables.
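The aggregation idea behind CIDR can be sketched with Python's standard ipaddress module; the prefixes below are hypothetical examples, not drawn from the article:

    import ipaddress

    # Eight contiguous former class-C sized networks...
    nets = [ipaddress.ip_network(f"192.168.{i}.0/24") for i in range(8)]

    # ...collapse into a single routing-table entry under CIDR.
    aggregated = list(ipaddress.collapse_addresses(nets))
    print(aggregated)   # [IPv4Network('192.168.0.0/21')]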

Websites: The World Wide Communication


Websites have changed quite a bit since their inception and now come in many different flavors and varieties. In this article, I will describe a brief history of websites, show you how we arrived where we are today, and provide some suggestions on which website technology may be right for you.
Back in the 1990s, when websites were becoming increasingly popular, most websites were static HTML. Static HTML meant that each page was planned out and hand coded to match the plan. Many of these sites were created by specialized website development firms who understood the complexities of this new technology. Creation by an outside firm meant that once the site was built, it was not updated often unless the site owner knew HTML. Many of the sites created during this time were simply extensions of an organization’s existing marketing materials. The focus at this time was to get a presence on the web quickly. Another reason organizations jumped onto the web was the ability to have organization-specific email addresses like joe@example.com.

The blogging trend of the early 2000s ushered in a new era for websites. A weblog, or blog, is a collection of blog entries, which are really just web pages. Blogging tools like Movable Type, Blogger, and WordPress offered a mechanism for organization owners to make their websites more dynamic, and in most cases the tools were freely available. These tools used PHP code and database functionality underneath a website to dynamically serve content. The owners weren’t exposed to the complexity of the underlying system, which in turn freed them up to focus on their website content. A key feature of these tools was the ability to add a page on the fly, with the tools handling the previous complexity of creating an HTML page. Essentially, in this setup, a blog entry was a piece of content, or web page. Following on the coattails of the blogging revolution was the What You See Is What You Get (WYSIWYG, pronounced WIZ-ee-wig) editor. This allowed the site owner to create a new page with the blogging tools and add feature-rich content like text and images without knowing any HTML whatsoever. By making it super easy to add rich content, these tools led more organizations to adopt blogging platforms.

Around this time, websites created with Adobe Flash (formerly Macromedia Flash) technology began to gain steam. These websites allowed a richer user experience than standard HTML websites by including native support for animation, video, and sound, and they were usually very appealing to the end user. Most Flash websites were similar to the initial static websites in that their content was static and usually built by a specialized web development firm. It didn’t take long before some websites were coded 100% in Flash. Although this provided a unique experience for the end user, the robots that scan web pages often had a tough time deciphering the page content. This meant that much of the content in the website was never indexed by the likes of Google, which in turn made it difficult for search users to find the sites. This drawback of Flash led to what we see today: HTML websites with small amounts of Flash inside of them.

The blogging platforms were taken one step further by the concept of content management systems (CMS). Content management systems extended the functionality of blogging tools by allowing users to expand the content types from a blog post to anything they desired (instead of just a blog post, imagine a content type of service offered, menu item, or event). For example, a restaurant could define a type of content in their website as a menu item, and then define what fields a menu item should contain (title, description, price).

Once defined, the restaurant owner simply has to click "Add new menu item" to update the menu. Another example would be a fitness club that offers a set schedule of classes; in such a site, a content type of class could be defined with a description and date/time. Some of the more popular CMS frameworks are WordPress, Drupal, and Joomla. Drupal and Joomla are both open source content management systems. There are two important points about open source software: it is free to use, and it is supported by a worldwide community of developers who continuously improve the tools. These frameworks have brought advancements to website development. Development firms now focus on customizing the framework to exactly what the client wants instead of building a custom solution from the ground up. This web development trend has shifted the focus from creation to customization.
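As a rough sketch of the idea, independent of any particular CMS, a content type is simply a named structure with declared fields. A hypothetical Python model of the menu-item and fitness-class examples above:

    from dataclasses import dataclass
    from datetime import datetime

    @dataclass
    class MenuItem:
        """A hypothetical 'menu item' content type with the fields mentioned above."""
        title: str
        description: str
        price: float

    @dataclass
    class FitnessClass:
        """A hypothetical 'class' content type for the fitness-club example."""
        description: str
        starts_at: datetime

    # Clicking "Add new menu item" in a CMS amounts to creating another record
    # of the declared type; the CMS handles storage and page rendering.
    print(MenuItem("Margherita", "Tomato, mozzarella, basil", 9.50))
    print(FitnessClass("Morning yoga", datetime(2014, 7, 21, 7, 0)))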

The future of the web is trending towards social media tools like Facebook and Twitter. These new tools are supplementing the previous generation of tools by offering aggregated feeds of user information. Instead of a customer coming to your site directly, many of them will now follow your organization on Facebook or Twitter so they can see your updates without ever having to go to your organization’s website.

So where does this leave your organization? All of the previous website development tools are still available in one form or another. You should look at your requirements for a website and determine which solution is correct for you. If you just need a simple website with a few pages, a static website may be best for you. If you want to be able to dynamically add a single type of content to your site, one of the blogging platforms will suit your needs. If you want to be able to dynamically add content to your website and you will have multiple content types (menu item, event, testimonials), look into content management systems.

The Seven Wonders of the World


The Seven Wonders of the World were hardly an objective, agreed-upon list of the greatest structures of the day but were, rather, very like a modern-day tourist pamphlet informing travelers on what to see on their trip. Herodotus disagreed with Philo’s original list and felt the Egyptian Labyrinth was greater than them all. Antipater replaced the Lighthouse with Babylon's walls and Callimachus, among others, listed the Ishtar Gate of Babylon. Philo’s list, however, has long been accepted as the 'official' definition of the Seven Wonders of the Ancient World.

1. The Great Pyramid at Giza was constructed between 2584 and 2561 BCE for the Egyptian Pharaoh Khufu (known in Greek as 'Cheops') and was the tallest man-made structure in the world for almost 4,000 years. Excavations of the interior of the pyramid were only initiated in earnest in the late 18th and early 19th centuries CE, and so the intricacies of the interior which so intrigue modern people were unknown to the ancient writers. It was the structure itself, with its perfect symmetry and imposing height, which impressed ancient visitors.

2. The Hanging Gardens of Babylon, if they existed as described, were built by Nebuchadnezzar between 605-562 BCE as a gift to his wife. They are described by the ancient writer Diodorus Siculus as being self-watering planes of exotic flora and fauna reaching a height of over 75 feet (23 metres) through a series of climbing terraces. Diodorus wrote that Nebuchadnezzar's wife, Amytis of Media, missed the mountains and flowers of her homeland and so the king commanded that a mountain be created for her in Babylon. The controversy over whether the gardens existed comes from the fact that they are nowhere mentioned in Babylonian history and that Herodotus, 'the Father of History', makes no mention of them in his descriptions of Babylon. There are many other ancient facts, figures, and places Herodotus fails to mention, however, or has been shown to be wrong about. Diodorus, Philo, and the historian Strabo all claim the gardens existed. They were destroyed by an earthquake sometime after the 1st century CE.

3. The Statue of Zeus at Olympia was 40 feet (12 metres) high and presented the great god seated on a throne with skin of ivory and robes of hammered gold. The statue was created by the sculptor Phidias, who also worked on the Parthenon of Athens. Visitors to the site were dwarfed by the immense statue, which was situated, and probably lighted, to produce great feelings of awe, wonder, and humility. After the rise of Christianity, the Temple at Olympia was increasingly neglected and fell into ruin, and the Olympic Games, then considered 'pagan rites', were banned by the church. The statue was carried off to Constantinople, where it was destroyed in an earthquake in the 5th or 6th century CE.

4. The Temple of Artemis at Ephesos was completed in 550 BCE and was 425 feet (129 metres) high, 225 feet (69 metres) wide, and supported by 127 columns, each 60 feet (18 metres) tall. The temple is described by every ancient writer who mentions it with awe and reverence for its beauty. It was destroyed on 21 July 356 BCE by a man named Herostratus, who set fire to the temple in order that his name be remembered. Because of this, the Ephesians executed him and prohibited his name from being spoken or written down. The historian Theopompus, however, wishing to write a complete history of the temple, recorded his name for posterity. The temple was rebuilt twice, on a more modest scale; the first rebuilding was later destroyed by the Goths while the second was completely laid to waste by a Christian mob led by Saint John Chrysostom in 401 CE.

5. The Mausoleum of Halicarnassus was built in 351 BCE as the tomb for the Persian satrap Mausolus. It was 135 feet (41 metres) tall and ornamented with intricate sculpture. Mausolus and his wife, Artemisia, chose Halicarnassus as their capital and devoted themselves to making it the most beautiful and impressive city in the world. When Mausolus died in 353 BCE, Artemisia commissioned the tomb to be built to match the splendor of the city the two of them had created. She died two years after him, and her ashes were entombed with his in the building. It was destroyed by a series of earthquakes and lay in ruin until it was completely dismantled by the Knights of St. John of Malta in 1494 CE, who used the stones in building their castle at Bodrum. It is from the Mausoleum of Halicarnassus that the English word 'mausoleum' is derived.

6. The Colossus at Rhodes is frequently imagined by those in the modern day as an enormous figure straddling the harbor of the island city of Rhodes. This is due to 19th and early 20th century CE depictions of the statue; actually, it was much closer in form to the Statue of Liberty in New York Harbor in the United States of America. It was built between 292 and 280 BCE and stood over 110 feet (33 metres) high. The statue was commissioned to commemorate the defeat of the invading army of Demetrius in 304 BCE and stood for 56 years until it was brought down by an earthquake. According to the historian Strabo, it remained a popular tourist attraction even in ruin. Theophanes, another historian, recounts how these ruins were carted away in 654 CE to be melted down.

7. The Lighthouse at Alexandria was completed c. 280 BCE and stood 440 feet (134 metres) high. It was the tallest man-made structure after the pyramids of Giza, and its light could be seen 35 miles out to sea. Ancient writers agree that the lighthouse was so beautiful they could not find words adequate to describe it. It was severely damaged in an earthquake in 956 CE and, by 1480 CE, after further damage by earthquakes, it was gone.

UFO Claims


    UFO claims include alien bases on the Moon and Mars. It is widely (but falsely) reported that Buzz Aldrin saw a UFO on the Apollo 11 flight and that NASA spacecraft discovered a humanoid face and other artifacts on Mars. Much of the public believes that UFOs are alien spacecraft. This represents a conceptual leap from unidentified lights in the sky or radar bogies that were the UFO stories when I was growing up.

Today, “believers” are talking about actual alien contact, with alien bases on the Moon and Mars, and their concerns receive reinforcement from radio, TV, and Internet blogs. On one level UFOs are real, of course; many people occasionally see objects in the sky that are not immediately identifiable as planes, balloons, planets, stars, or unusual atmospheric phenomena. But the questions I receive from the public (submitted to a NASA Web site) suggest a belief system linking UFOs with alien visitations and abductions spiced up by “conspiracy theories” to hide this information from the public. If UFOs are alien spacecraft visiting Earth, then it seems reasonable that evidence of alien civilizations might be seen by astronomers or the radio signals from alien spacecraft might be picked up by the sensitive receivers we use to communicate with our own spacecraft. Perhaps astronauts who venture into space would be among the first to make reliable observations of alien spacecraft or artifacts. Perhaps we should look for alien bases on other worlds. Indeed, the Internet carries many stories of such encounters. I will examine some of the evidence cited for alien presence in the solar system.


Astronaut Encounters with Aliens

One allegedly well-documented report stems from an interview in which astronaut Aldrin describes seeing a UFO during the Apollo 11 mission. In an interview on the Science Channel, Aldrin stated that he, Neil Armstrong, and Mike Collins saw unidentified objects that appeared to follow their Apollo spacecraft. To get the story straight, I called Aldrin, who was happy to explain what happened. He said that his remarks were taken out of context to reverse his meaning. It is true that the Apollo 11 crew spotted an unidentified object moving with the spacecraft as they approached the Moon. After they verified that this mystery object was not Apollo 11’s large rocket upper stage, which was about 6,000 miles away by then, they concluded that they were seeing one of the small panels that had linked the spacecraft to the upper stage (any part of the spacecraft’s rocket upper stage will continue to move alongside the spacecraft, as both are floating in free-fall). These panels were too small to track from Earth and were relatively close to the Apollo spacecraft. Aldrin told me that they chose not to discuss this on the open communications channel since they were concerned that their comments might be misinterpreted. His entire explanation about identifying the panels was cut from the broadcast interview, giving the impression that the Apollo 11 crew had seen a UFO. Aldrin told me that he was angry about the deceptive editing and asked the Science Channel to correct the intentional twisting of his remarks, but they refused. Later, Aldrin explained what happened on CNN’s Larry King Live but was nearly cut off by the host before he could finish.

The Bermuda Triangle


The Bermuda Triangle is a mythical section of the Atlantic Ocean roughly bounded by Miami, Bermuda and Puerto Rico where dozens of ships and airplanes have disappeared. Unexplained circumstances surround some of these accidents, including one in which the pilots of a squadron of U.S. Navy bombers became disoriented while flying over the area; the planes were never found.
Other boats and planes have seemingly vanished from the area in good weather without even radioing distress messages. But although myriad fanciful theories have been proposed regarding the Bermuda Triangle, none of them prove that mysterious disappearances occur more frequently there than in other well-traveled sections of the ocean. In fact, people navigate the area every day without incident.


    The area referred to as the Bermuda Triangle, or Devil’s Triangle, covers about 500,000 square miles of ocean off the southeastern tip of Florida. When Christopher Columbus sailed through the area on his first voyage to the New World, he reported that a great flame of fire (probably a meteor) crashed into the sea one night and that a strange light appeared in the distance a few weeks later. He also wrote about erratic compass readings, perhaps because at that time a sliver of the Bermuda Triangle was one of the few places on Earth where true north and magnetic north lined up.


Did You Know? After gaining widespread fame as the first person to sail solo around the globe, Joshua Slocum disappeared on a 1909 voyage from Martha’s Vineyard to South America. Though it’s unclear exactly what happened, many sources later attributed his death to the Bermuda Triangle.

William Shakespeare’s play “The Tempest,” which some scholars claim was based on a real-life Bermuda shipwreck, may have enhanced the area’s aura of mystery. Nonetheless, reports of unexplained disappearances did not really capture the public’s attention until the 20th century. An especially infamous tragedy occurred in March 1918 when the USS Cyclops, a 542-foot-long Navy cargo ship with over 300 men and 10,000 tons of manganese ore onboard, sank somewhere between Barbados and the Chesapeake Bay. The Cyclops never sent out an SOS distress call despite being equipped to do so, and an extensive search found no wreckage. “Only God and the sea know what happened to the great ship,” U.S. President Woodrow Wilson later said. In 1941 two of the Cyclops’ sister ships similarly vanished without a trace along nearly the same route.

A pattern allegedly began forming in which vessels traversing the Bermuda Triangle would either disappear or be found abandoned. Then, in December 1945, five Navy bombers carrying 14 men took off from a Fort Lauderdale, Florida, airfield in order to conduct practice bombing runs over some nearby shoals. But with his compasses apparently malfunctioning, the leader of the mission, known as Flight 19, got severely lost. All five planes flew aimlessly until they ran low on fuel and were forced to ditch at sea. That same day, a rescue plane and its 13-man crew also disappeared. After a massive weeks-long search failed to turn up any evidence, the official Navy report declared that it was “as if they had flown to Mars.”

Bermuda Triangle Theories and Counter-Theories

By the time author Vincent Gaddis coined the phrase “Bermuda Triangle” in a 1964 magazine article, additional mysterious accidents had occurred in the area, including three passenger planes that went down despite having just sent “all’s well” messages. Charles Berlitz, whose grandfather founded the Berlitz language schools, stoked the legend even further in 1974 with a sensational bestseller about the phenomenon. Since then, scores of fellow paranormal writers have blamed the triangle’s supposed lethalness on everything from aliens, Atlantis and sea monsters to time warps and reverse gravity fields, whereas more scientifically minded theorists have pointed to magnetic anomalies, waterspouts or huge eruptions of methane gas from the ocean floor. In all probability, however, there is no single theory that solves the mystery. As one skeptic put it, trying to find a common cause for every Bermuda Triangle disappearance is no more logical than trying to find a common cause for every automobile accident in Arizona. Moreover, although storms, reefs and the Gulf Stream can cause navigational challenges there, maritime insurance leader Lloyd’s of London does not recognize the Bermuda Triangle as an especially hazardous place. Neither does the U.S. Coast Guard, which says: “In a review of many aircraft and vessel losses in the area over the years, there has been nothing discovered that would indicate that casualties were the result of anything other than physical causes. No extraordinary factors have ever been identified.”

Dinosaur fossils


If the diverse and numerous dinosaurs (except birds) are extinct, how can we better understand how they lived? Even though the great dinosaurs of the Mesozoic are gone, they have left us many clues. Dinosaur fossils are not limited to bones, but include skin, eggs, nests, footprints, and other special kinds of fossils that give us clues about their lifestyles.
Dinosaur nests

The news has recently carried several stories of wonderful finds of nesting dinosaurs, and it is true that an explosion of data on dinosaurian nesting and social behavior has been uncovered in the past 20 years. Some of the best-known and most compelling evidence comes from Jack Horner's (Museum of the Rockies) ongoing work at the "Egg Mountain" site in Montana, where he has documented evidence of a large nesting area used by hadrosaurian (duckbilled) dinosaurs. These dinosaurs were named Maiasaura, "good mother reptile," referring to the closely packed nests that contain fossilized eggs, embryos, and juveniles. This is one case where we can be fairly confident that parental care was involved in these dinosaurs' lifestyle. Actually, this is not a surprising assertion, because crocodilians (their closest living relatives) and birds (their living descendants) both show some degree of parental care and extensive nest building.

Other dramatic finds of dinosaur nests include theropod dinosaurs (Oviraptor and Troodon) that apparently died while brooding their nests, and abundant nests of the early ceratopsian dinosaur Protoceratops. An interesting story about Oviraptor: the so-called "egg stealer" was so named because it was found atop a clutch of eggs that were assumed to belong to Protoceratops. This idea held for some 70 years until a find in the 1990s showed an Oviraptor embryo inside one of those eggs..."egg stealer" exonerated!
Dinosaur footprints

We know of literally thousands of non-avian dinosaur footprints scattered around the globe, from Late Triassic to Late Cretaceous age. You might not think that a footprint or a sequence of footprints (called a trackway) could tell us much, but actually it can tell us some general things about the biology of dinosaurs.

    From trackway data, we can tell that:
    Some non-avian dinosaurs travelled in large groups;
    Non-avian dinosaurs moved with their feet held underneath their body (as birds and mammals do); and
    Some non-avian dinosaurs moved rather quickly, but some plodded along at a more leisurely pace — see our section on dinosaur speeds for more info.

Dinosaur diet
Dinosaurs, living and extinct, have varied diets. We have some strong evidence of exactly what the diets of some of the extinct dinosaurs were, and we can observe birds directly to learn about their diets. Dentition (tooth structure) is one of the most abundant lines of evidence useful for determining dinosaur diets. Most ornithischian and sauropodomorph dinosaurs had rather simple, short, stubby, crenellated teeth, which are similar to those of living herbivores, and clearly not too good for eating much meat.

Theropod teeth, on the other hand, retain the primitive archosaurian characteristic of being recurved, serrated, laterally compressed, and knife-like. There is some variation in tooth structure among extinct theropods, but most are fairly similar and obviously related to a carnivorous diet.

Stomach contents are another line of evidence, somewhat more direct but also a bit trickier to interpret accurately. Well-preserved dinosaur skeletons sometimes have traces of apparent food items preserved in their abdominal cavity, where it's safe to assume that they had a stomach. This includes pine cones and/or needles in some herbivores' guts, and traces of some vertebrates in some theropods' guts. So this independent line of inquiry substantiates the data from tooth morphology. Also, some sauropodomorph stomachs contain well-rounded stones, called gastroliths, that were probably used to grind food in a muscular crop or gizzard, like some birds (and crocodilians) do.

The general hypothesis that most ornithischians and sauropodomorphs were largely, if not completely, herbivorous, and that theropods (at least before the origin of birds) were mostly carnivorous, thus holds. More specific hypotheses have been proposed and supported by data, while others have fallen by the wayside. It is likely that new discoveries will illuminate more about dinosaur diets as the global "dinosaur renaissance" continues.

Purpose of Education in Society


Education, looked at beyond its conventional boundaries, forms the very essence of all our actions. What we do is what we know and have learned, either through instruction or through observation and assimilation. Even when we are not making an effort to learn, our mind is processing new information or trying to analyze the similarities as well as the tiny nuances within the context which make a topic stand out or seem different. If that is the case, then the mind definitely holds the potential to learn more; however, it is we who stop ourselves from expanding the horizons of our knowledge through self-doubt or other social, emotional, or economic constraints. While most feel that education is a necessity, they tend to use it as a tool for reaching a specific target or personal mark, after which there is no further need to seek greater education. Nonetheless, the importance of education in society is indispensable and cohering, which is why society and knowledge can never be separated into two distinct entities. Let us find out more about the role of education in society and how it affects our lives.


                Education is Self Empowerment

Receiving a good education helps empower you, thus making you strong enough to look after yourself in any given situation. It keeps you aware of your surroundings as well as the rules and regulations of the society you're living in. It's only through knowledge that you can question authority for its negligence or discrepancies. It is only then that you can avail yourself of your rights as a citizen and seek improvement in the structural functioning of governance and economy. It's only when citizens are aware of the policies of their government that they can support or protest change. As a whole, people can bring about development only when they know where improvement is necessary for the greater good of mankind. Education helps you understand yourself better; it helps you realize your potential and qualities as a human being. It helps you to tap into latent talent, so that you may be able to sharpen your skills.

                Financial Stability and Dignity of Life

Another importance of education is that it helps you gain sufficient academic qualification so that you are able to get suitable employment at a later stage. A decent employment would be combined with hard-earned remuneration or salary through which you can look after your personal expenses. While you earn for yourself, you gradually begin to realize the true worth of money and how hard it is to earn it. You realize the significance of saving for a rainy day and for unforeseeable contingencies. You feel empowered because there is a new sense of worth that develops within you, and you feel the need to be independent and free from any further financial support. You take pride in the fact that you are earning for yourself, and are not obligated to anyone.

                Growth in Personal Aspiration

There also comes a phase when the amount you are earning presently will seem inadequate, because your aspirations and expectations of yourself will have grown considerably. After this, you will want to change jobs so as to have a higher profile. However, this is when you need to be prepared. A promotion of this kind can occur in two situations: either you have the necessary higher academic qualification or college degree which allows you a safe passage, or you have amassed enough practical experience to make you a suitable candidate for the employment you seek. This is why college education is very important after high school and must not be taken for granted.

                On the Job Efficiency

 When faced with the option of choosing between a highly qualified candidate and a less educated one, employers will most probably go for the qualified person, the reason being that a qualified candidate will not require much investment of the employer's time and money. The organization need not teach him or her the tricks of the trade, or the various ways of functioning and performing the tasks of the workplace.

On the contrary, a novice or amateur applicant would need to be taught everything from scratch, which many employers are usually not willing to do. The same applies to people who seek higher education and earn advanced diplomas while working. These people are continuously improving their profile and their knowledge base so as to climb higher up the competitive ladder. Those who have amassed enough education steer the path of development and progress for their country. It is these individuals who go on to become the teachers, scientists, inventors, welfare activists, soldiers, and politicians who work together to form the very backbone of society. Without this pool of intellect, the economic and social framework would crumble and fall, paving the way for anarchy, degradation, and violence. While this intricate balance of growth is maintained, there will be a continuous rise in progress in all quarters of life, whether that be personal growth or the development of the nation as an entity. This progress has a very important role to play for the coming generations, which will reap the benefits of our hard work as they develop it further.

 At the same time, the negative impact of our actions will have its collateral damage on the coming generations as well. This is why we must be exceptionally prudent about the decisions we make and the actions we take in the present.

                Job Seeker vs. Job Provider

There will come a time when you will no longer feel the need to work as someone's mere employee. You will want to take charge and control of your own life and income. This is when you will decide to become a self-employed individual who would like to watch his or her own ideas take realistic form. You would prefer being the one offering job opportunities to others and helping provide income to them. At this stage of entrepreneurship, you may use your own expertise as well as that of other trained and skilled associates. As a team, you will find your business or venture expanding and yielding good results. You may even gain the confidence and insight that will help you diversify and spread your expertise into other business arenas which were previously unknown to you, or which you were unsure about. This ability comes with experience and knowledge amassed over the years.

                An Idle Mind is the Devil's Workshop

    Education and studying regularly give people of all age groups something substantial and challenging to do. It helps them think and use their idle hours doing something productive and worthwhile. Education need not be purely academic and may include reading for leisure or out of a passion for literature, philosophy, art, politics, economics, or even scientific research. There is no limit to all that you can teach yourself, if only you take the interest to learn and grow as an individual. However, those who treat knowledge as trash eventually find themselves absorbed in thoughts of violence and jealousy toward those who are better off than themselves. It is people such as these who turn towards drug addiction, unnecessary rebellion, crime, and plain inactivity. Such people lack the self-esteem that a good education often provides.

UFO FLYING SAUCER

On June 24, 1947, an amateur pilot named Kenneth Arnold was flying a small plane near Mount Rainier in Washington state when he saw something extraordinarily strange. Directly to his left, about 20 to 25 miles north of him and at the same altitude, a chain of nine objects shot across the sky, glinting in the sun as they traveled. By comparing their size to that of a distant airplane, Arnold gauged the objects to be about 45 to 50 feet wide. They flew between two mountains spaced 50 miles apart in just 1 minute, 42 seconds, he observed, implying an astonishing speed of 1,700 miles per hour, or three times faster than any manned aircraft of the era. However, as if controlled, the flying objects seemed to dip and swerve around obstacles in the terrain.
When the objects faded into the distance, Arnold flew to Yakima, Wash., landed and immediately told the airport staff of the unidentified flying objects he had spotted. The next day, he was interviewed by reporters, and the story spread like wildfire across the nation.
    "At that time there was still some thought that Mars or perhaps Venus might have a habitable surface ," Robert Sheaffer, an author of UFO books (and a skeptic), told Life's Little Mysteries. "People thought these UFOs were Martians who had come to keep an eye on us now that we had nuclear weapons." As time would prove, this was but the first of many outlandish theories behind visits of an extraterrestrial nature. The era of UFO sightings had begun.

Reporting error
Arnold's sighting was "such a sensation that it made front page news across the nation," UFO-logist and author Martin Kottmeyer wrote in an article ("The Saucer Error," REALL News, 1993). "Soon everyone was looking for these new aircraft which according to the papers were saucer-like in shape," Kottmeyer continued. "Within weeks hundreds of reports of these flying saucers were made across the nation. While people presumably thought they were seeing the same things that Kenneth Arnold saw, there was a major irony that nobody at the time realized. Kenneth Arnold hadn't reported seeing flying saucers."
    In fact, Arnold had told the press that the objects had flown erratically, "like a saucer if you skip it across the water." They were thin and flat when viewed on edge, he said, but crescent-shaped when viewed from the top down as they turned. Nonetheless, a reporter named Bill Bequette of the United Press interpreted Arnold's statement to mean that the objects he saw were round discs. According to Benjamin Radford, UFO expert and deputy editor of the Skeptical Inquirer, "It was one of the most significant reporter misquotes in history."

"The phrase 'flying saucers' provided the mold which shaped the UFO myth at its beginning," Kottmeyer wrote. UFOs took the form of flying saucers, he noted, in artist's renderings, hoax photos, sci-fi films, TV shows and even the vast majority of alien abduction and sighting reports for the rest of modern history, up until the present day.

"Bequette's error may not prove to be the ultimate refutation of the extraterrestrial theory for everyone. But it does leave their advocates in one helluva paradox: Why would extraterrestrials redesign their craft to conform to Bequette's mistake?" Kottmeyer wrote. [Read: Could Extraterrestrials Really Invade Earth, and How? ]

For the birds
Though he didn't see flying saucers, most of Arnold's contemporaries believed that he really had seen something that day. The Army report on the sighting states: "[If] Mr. Arnold could write a report of such a character and did not see the objects he was in the wrong business and should be engaged in writing Buck Rogers fiction." His account was very convincing.

So if he did see something, what was it exactly?
One theory holds that it was a fireball — a meteor breaking up upon entry into the atmosphere. If a meteor hit the atmosphere at a shallow angle to the Earth, its pieces would approach the surface traveling almost horizontally. Furthermore, the pieces of meteor would travel in a chain like the one Arnold saw, would shine very brightly, and would travel at thousands of miles per hour.

But most historians think the objects weren't from outer space at all: "It was probably pelicans flying in formation," Sheaffer said. "Probably Arnold misjudged the distance and thought they were huge objects at a great distance but they were actually much closer."

After all, the boomerang shape that Arnold drew in a picture of the objects he had seen looks very much like a bird with its wings outstretched.

Adolf Hitler life story


Adolf Hitler was born on April 20, 1889 in the small village of Braunau in Austria. His father, Alois Schicklgruber Hitler, was a 52-year-old Austrian customs official. His mother was Klara Poelzl Hitler, a peasant girl who was still in her twenties when she gave birth to Adolf.

In 1895, Adolf entered the Volksschule (public school) in the village of Fischlham. In 1897-8, his devout mother sent him to the monastery school at Lambach. Klara hoped that her son would become a monk. The young Adolf was caught smoking by the monks and was expelled. After Hitler's expulsion, the family moved to Leonding, a small suburb of Linz.

From 1900 to 1904 he attended the Realschule (a secondary school for science), and from 1904 to 1905 the one at Steyr. Adolf quit school at the age of sixteen without graduating.

For the next two years the young Hitler spent his time reading German history and mythology. His only desire at this time was to become an artist.

In October 1907, he went to Vienna to make a start on his hoped-for career as an artist, even though his beloved mother was extremely ill with cancer.

In Vienna, he tried to gain entry to the Fine Arts Academy but failed the admission examination. Hitler's pride took a serious knock, from which he would never recover.

In December 1907, Hitler lost his mother to cancer, and for the next five years he had to rely on charity to survive.
It is believed that Hitler learned his hatred of Jews while he lived in Vienna. It was here that he rejected the teachings of Karl Marx and started drifting towards the influence of the right-wing politician Karl Lueger. His hatred of democracy also increased as he found it extremely difficult to make ends meet. Hitler quit Vienna in May 1913 and headed for Munich, Germany. By now Hitler was feeling more German than Austrian. But in Munich he found no solace; poverty and despair followed him. In February 1914, Hitler was recalled to Austria to sit a medical examination for compulsory military service. It has been claimed that he was regarded as too weak and unfit to bear arms, but when war broke out in August 1914, he wrote to the King of Bavaria and asked to serve in his army.

Hitler was assigned to the 16th Bavarian Infantry (List Regiment) and soon found himself serving on the Western Front. Until 1916, he served as an orderly and later as a dispatch bearer. Hitler was wounded twice whilst serving at the front. He was awarded the Iron Cross (Second Class) in 1914 and was given the Iron Cross (First Class) on August 4, 1918. Hitler received the Iron Cross (First Class) for the capture of an enemy officer and fifteen men. Even though Hitler did well at the front, he was never promoted above the rank of lance corporal. Rumours circulated that he was a homosexual whilst he served at the front.


    After Germany's humiliating defeat in World War One, Hitler returned to Munich, bitter and disillusioned. He blamed the Jews and Marxists at home for Germany's defeat. Hitler was kept on the regimental roster and was assigned to spy on post-war political parties. In 1919, he was assigned to investigate a small radical group calling themselves the Deutsche Arbeiterpartei (German Workers' Party). The party had no programme and no plan of action, but the rhetoric they spun caught Hitler's attention, and he soon resigned his role within the regiment and enlisted in the Deutsche Arbeiterpartei with membership number 55. He soon found himself on the party's Executive Committee, and within two years he advanced to the leadership of the party. With this he changed the party's name to the Nationalsozialistische Deutsche Arbeiterpartei (the Nazi Party). Hitler transformed the Nazi movement with superb speeches and political rallies. By 1923 he believed that the Weimar Republic was ripe to be overthrown, and he helped organise a putsch, which became known as the Beer Hall Putsch, on November 8, 1923. The Putsch was a complete failure in terms of overthrowing the German government, but it did succeed in giving him and his party a platform of high media attention.


    Hitler was sentenced to five years' imprisonment for high treason, of which he served only nine months in Landsberg Prison. It was while serving his sentence that he dictated his book, 'Mein Kampf' (My Struggle). After his release from prison in December 1924, Hitler set about rebuilding his party, assisted by two close followers: Dr Paul Joseph Goebbels, a master of propaganda, and Captain Hermann Goering, a World War One fighter ace. It was at this stage that Hitler pledged to destroy the Weimar Republic from within the democratic framework. His dream became reality on 30 January 1933, when he was offered the title of Chancellor.

Only one thing stood in Hitler's way as total leader of the German nation: President von Hindenburg. President Hindenburg was known to despise the little corporal. Hitler had tried and failed to beat Hindenburg in the presidential elections. In the elections of April 1932, Hitler polled 13,418,011 votes to Hindenburg's 19,359,650. Hitler would have to wait until Hindenburg's death on August 2, 1934 before claiming the office of President and the title of Fuehrer of the Reich. Hitler was now master of Germany, and with it came the road to war and genocide.

HOW DID LIFE ON EARTH GET STARTED?


    In an arid outcropping of basalt in Australia, some of the oldest rocks on Earth lie exposed to the fierce sun. Formed at the bottom of an ancient ocean, this volcanic material shelters what one scientist calls the "oldest robust evidence" of life. At a scientific meeting at Rockefeller University in May, Roger Buick of the University of Washington said that the 3.5 billion-year-old rocks hold traces of carbon that once made up living organisms. Even before Buick's discovery, ample evidence indicated that life on Earth began while our 4.5 billion-year-old planet was very young. Simple organisms certainly flourished between 2 billion and 3 billion years ago, and claims of older evidence of life have periodically surfaced. But none has been universally embraced, and Buick's claim is so new that other scientists haven't fully reviewed it.


    Yet even if the geologist is right about his rocks, his discovery would leave unanswered one of life's biggest mysteries: how life actually arose. While creationists attribute that spark of life to the hand of God, scientists are convinced there's a natural explanation. Yet as close as they've come to pinning it down, some admit the particulars may never be fully resolved. Others are convinced that we're edging closer to an answer—and to settling one of the oldest and most contentious questions in science and religion.

To solve the riddle of genesis, biologists, astronomers, geologists, and chemists are attacking the problem from all angles—even trying to re-create life from scratch. In recent years, institutions, including Harvard University, the Georgia Institute of Technology, and McMaster University in Canada, have formed "origins" institutes to probe the deepest history of life on Earth—and to search for life in the heavens. "The field is going through a minirenaissance," says chemical biologist Gerald Joyce of the Scripps Research Institute in La Jolla, Calif.

According to scientists, life began when chemistry begat biology—that is, when simple molecules assembled into more complex molecules that then began to self-replicate. But rocks that might harbor traces of such genesis events simply don't exist, says Buick. During Earth's opening act, space debris and cataclysmic volcanic upheavals destroyed the evidence, like an arsonist torching his tracks. The oldest known rocks are about 4 billion years old, yet even they formed roughly half a million millennia after our planet's surface cooled and water first pooled into shallow seas. Scientists widely suspect that life began during that long, undocumented interval.

Theories about where and how life began range from the sublime to the bizarre. One camp says that deep-sea vents known as black smokers nurtured the first life. In the late 1970s, a team of researchers from Oregon State University unexpectedly discovered whole ecosystems thriving around a hot vent on the Pacific seafloor. Such vents, where molten rock from inside the Earth's mantle heats seawater to as much as 660 degrees Fahrenheit, could have provided the energy and basic organic molecules needed to spark life. Another camp believes that ice—not boiling water—served as the cradle of life. Even the coldest ice contains seams of liquid. These watery pockets could have acted as test tubes for the earliest organic reactions. Experiments show that units of RNA—the genetic material that was probably the forerunner to better-known DNA—spontaneously string themselves together in ice, supporting this theory.


    Still other scientists point to the skies. They argue that meteorites carrying amino acids and other important molecules seeded Earth with the necessary ingredient for life. Supporting the idea: high concentrations of amino acids inside meteorites found on Earth and in gas clouds in space. A wilder offshoot of this theory, called panspermia, suggests that whole bacteria—life itself—first evolved on Mars and then hitched a ride to Earth via small pieces of the Red Planet blasted here by asteroid or comet impacts. But no life has been found on Mars, and the one claim of fossil bacteria in a Martian meteorite, made by NASA scientists in 1996, has been almost universally rejected.

Today photography



Today, photography has become a hugely popular interest: millions of pictures are uploaded every minute. Correspondingly, everyone is a subject, and knows it—any day now we will be adding the unguarded moment to the endangered species list. It’s on this hyper-egalitarian, quasi-Orwellian, all-too-camera-ready “terra infirma” that National Geographic’s photographers continue to stand out. Why they do so is only partly explained by the innately personal choices (which lens for which lighting for which moment) that help define a photographer’s style. Instead, the very best of their images remind us that a photograph has the power to do infinitely more than document. It can transport us to unseen worlds.

When I tell people that I work for this magazine, I see their eyes grow wide, and I know what will happen when I add, as I must: “Sorry, I’m just one of the writers.” A National Geographic photographer is the personification of worldliness, the witness to all earthly beauty, the occupant of everybody’s dream job. I’ve seen The Bridges of Madison County—I get it, I’m not bitter. But I have also frequently been thrown into the company of a National Geographic photographer at work, and what I have seen is everything to admire and nothing whatsoever to envy. If what propels them is ferocious determination to tell a story through transcendent images, what encumbers their quest is a daily litany of obstruction (excess baggage fees, inhospitable weather, a Greek chorus of “no”), interrupted now and then by disaster (broken bones, malaria, imprisonment). Away from home for many months at a time—missing birthdays, holidays, school plays—they can find themselves serving as unwelcome ambassadors in countries hostile to the West. Or sitting in a tree for a week. Or eating bugs for dinner. I might add that Einstein, who snarkily referred to photographers as lichtaffen, meaning “monkeys drawn to light,” did not live by 3 a.m. wake-up calls. Let’s not confuse nobility with glamour.

What transfixes me, almost as much as their images, is my colleagues’ cheerful capacity for misery. Apparently they wouldn’t have it any other way. The lodestone of the camera tugged at each of them from their disparate origins (a small town in Indiana or Azerbaijan, a polio isolation ward, the South African military), and over time their work would reflect differentiated passions: human conflict and vanishing cultures, big cats and tiny insects, the desert and the sea. What do the National Geographic photographers share? A hunger for the unknown, the courage to be ignorant, and the wisdom to recognize that, as one says, “the photograph is never taken—it is always given.”

In the field I’ve seen some of my lens-toting compatriots sit for days, even weeks, with their subjects, just listening to them, learning what it is they have to teach the world, before at last lifting the camera to the eye. Our photographers have spent literally years immersed in the sequestered worlds of Sami reindeer herders, Japanese geisha, and New Guinea birds of paradise. The fruit of that commitment can be seen in their photographs. What’s not visible is their sense of responsibility toward those who dared to trust the stranger by opening the door to their quiet world. It’s a far riskier and time-consuming proposition to forgo the manipulated shot and instead view photography as a collaborative venture between two souls on either side of the lens.


Conscience is the other trait that binds these photographers. To experience the beauty of harp seals swimming in the Gulf of St. Lawrence is also to see the frailty of their habitat: scores of seal pups drowning due to the collapse of ice floes, a direct consequence of climate change. To witness the calamity of war in the gold-mining region of the Democratic Republic of the Congo is also to envision a glimmer of hope: Show the gold merchants in Switzerland what their profiteering has wrought, and maybe they’ll cease their purchases.
   
    In the past 125 years, it turns out, Kierkegaard has been proved both wrong and right about photography. The images in National Geographic have revealed a world not of sameness, but of wondrous diversity. But they have also, increasingly, documented societies and species and landscapes threatened by our urge for homogenization. The magazine’s latter-day explorers are often tasked with photographing places and creatures that a generation later may live only in these pages. How do you walk away from that? If my colleagues suffer a shared addiction, it’s to using the formidable reach and influence of this iconic magazine to help save the planet. Does that sound vainglorious? Ask the Swiss gold merchants. They saw Marcus Bleasdale’s images at a Geneva exhibit, and their Congolese gold purchases halted almost overnight.

    Of course, every professional photographer hopes for The Epic Shot, the once-in-a-lifetime collision of opportunity and skill that gains a photograph instant entry into the pantheon alongside Joe Rosenthal’s Iwo Jima, Bob Jackson’s encounter with Jack Ruby gunning down Lee Harvey Oswald, and the Apollo 8 astronauts’ color depictions of planet Earth in its beaming entirety. And yet, game-changing photographs are not what National Geographic photographers do. The most iconic photograph ever to grace these pages is not of anyone or anything historic. Rather, it’s of Sharbat Gula, an Afghan girl of maybe 12 when photographer Steve McCurry encountered her in 1984 at a refugee camp in Pakistan. What her intense, sea-green eyes told the world from the cover of National Geographic’s June 1985 issue a thousand diplomats and relief workers could not. The Afghan girl’s stare drilled into our collective subconscious and stopped a heedless Western world dead in its tracks. Here was the snare of truth. We knew her instantly, and we could no longer avoid caring.
    McCurry shot his immortal portrait well before the proliferation of the Internet and the invention of the smartphone. In a world seemingly benumbed by a daily avalanche of images, could those eyes still cut through the clutter and tell us something urgent about ourselves and about the imperiled beauty of the world we inhabit? I think the question answers itself.

Satellite Performance and Update from India


A satellite is any object that orbits another object. All masses that are part of the solar system, including the Earth, are satellites either of the Sun or of other objects, as the Moon is a satellite of the Earth. It is not always a simple matter to decide which is the ‘satellite’ in a pair of bodies. Because all objects exert gravity, the motion of the primary object is also affected by the satellite. If two objects are sufficiently similar in mass, they are generally referred to as a binary system rather than a primary object and satellite. The general criterion for an object to be a satellite is that the center of mass of the two objects is inside the primary object. In popular usage, the term ‘satellite’ normally refers to an artificial satellite (a man-made object that orbits the Earth or another body).
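
To make the center-of-mass criterion above concrete, here is a minimal Python sketch (my own illustration, not part of the original article; the masses and distance are round textbook values) showing that the Earth-Moon barycenter lies inside the Earth, which is why the Moon is called a satellite rather than a binary partner:

# Sketch: the "satellite" criterion from the paragraph above.
# The barycenter of a two-body pair lies at d * m2 / (m1 + m2) from the
# center of the more massive body; if that point falls inside the primary,
# the smaller body is conventionally called a satellite.
# Round textbook values, not figures from this article.
M_EARTH = 5.972e24       # kg
M_MOON = 7.342e22        # kg
SEPARATION_KM = 384_400  # mean Earth-Moon distance
R_EARTH_KM = 6_371

barycenter_km = SEPARATION_KM * M_MOON / (M_EARTH + M_MOON)
print(f"Earth-Moon barycenter: {barycenter_km:.0f} km from Earth's center")
print("Inside the Earth?", barycenter_km < R_EARTH_KM)  # True, so the Moon is a satellite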

In May, 1946, the Preliminary Design of an Experimental World-Circling Spaceship stated, “A satellite vehicle with appropriate instrumentation can be expected to be one of the most potent scientific tools of the Twentieth Century. The achievement of a satellite craft would produce repercussions comparable to the explosion of the atomic bomb…”

The space age began in 1946, as scientists began using captured German V-2 rockets to make measurements in the upper atmosphere. Before this period, scientists used balloons that went up to 30 km and radio waves to study the ionosphere. From 1946 to 1952, upper-atmosphere research was conducted using V-2s and Aerobee rockets. This allowed measurements of atmospheric pressure, density, and temperature up to 200 km. The U.S. had been considering launching orbital satellites since 1945 under the Bureau of Aeronautics of the United States Navy. The Air Force’s Project RAND eventually released the above report, but did not believe that the satellite was a potential military weapon; rather they considered it to be a tool for science, politics, and propaganda. Following pressure by the American Rocket Society, the National Science Foundation, and the International Geophysical Year, military interest picked up and in early 1955 the Air Force and Navy were working on Project Orbiter, which involved using a Jupiter C rocket to launch a small satellite called Explorer 1 on January 31, 1958.

On July 29, 1955, the White House announced that the U.S. intended to launch satellites by the spring of 1958. This became known as Project Vanguard. On July 31, the Soviets announced that they intended to launch a satellite by the fall of 1957 and on October 4, 1957 Sputnik I was launched into orbit, which triggered the Space Race between the two nations.

The largest artificial satellite currently orbiting the earth is the International Space Station, which can sometimes be seen with the unaided human eye.

Types of satellites

· Astronomical satellites: These are satellites used for observation of distant planets, galaxies, and other outer space objects.

· Communications satellites: These are artificial satellites stationed in space for the purposes of telecommunications using radio at microwave frequencies. Most communications satellites use geosynchronous orbits or near-geostationary orbits, although some recent systems use low Earth-orbiting satellites.

· Earth observation satellites are satellites specifically designed to observe Earth from orbit, similar to reconnaissance satellites but intended for non-military uses such as environmental monitoring, meteorology, map making etc. (See especially Earth Observing System.)

· Navigation satellites are satellites which transmit radio time signals that enable mobile receivers on the ground to determine their exact location. The relatively clear line of sight between the satellites and receivers on the ground, combined with ever-improving electronics, allows satellite navigation systems to measure location to accuracies on the order of a few metres in real time (a rough numeric illustration follows this list).

· Reconnaissance satellites are Earth observation or communications satellites deployed for military or intelligence applications. Little is known about the full capability of these satellites, as the governments that operate them usually keep information pertaining to their reconnaissance satellites classified.

· Solar power satellites are proposed satellites, built in high Earth orbit, that would use microwave power transmission to beam solar power to very large antennas on Earth, where it could be used in place of conventional power sources.

· Space stations are man-made structures that are designed for human beings to live on in outer space. A space station is distinguished from other manned spacecraft by its lack of major propulsion or landing facilities — instead, other vehicles are used as transport to and from the station. Space stations are designed for medium-term living in orbit, for periods of weeks, months, or even years.

· Weather satellites are satellites that primarily are used to monitor the weather and/or climate of the Earth.

· Miniaturized satellites are satellites of unusually low weight and small size. New classifications are used to categorize these satellites: minisatellite (100-500 kg), microsatellite (10-100 kg), nanosatellite (1-10 kg).
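
As a rough numeric illustration for the navigation-satellite entry above (a generic sketch of my own, not material from the original article): a receiver estimates its distance to each satellite from the signal's travel time, so every nanosecond of timing error becomes about 30 centimetres of ranging error, which is why metre-level accuracy demands very precise clocks.

# Sketch: how timing error maps to ranging error for a navigation satellite.
# A receiver measures range as (signal travel time) x (speed of light),
# so clock error translates directly into distance error.
C = 299_792_458.0  # speed of light, m/s

for clock_error_ns in (1, 10, 100):
    range_error_m = C * clock_error_ns * 1e-9
    print(f"{clock_error_ns:>4} ns timing error -> {range_error_m:6.2f} m ranging error")

# With four or more satellites in view, the receiver solves for its 3-D
# position plus its own clock bias, which is how metre-level accuracy is
# reached even with an inexpensive receiver clock.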

Orbit types

Satellites are often characterized by their orbit. Although a satellite may orbit at almost any height, satellites are commonly categorized by their altitude (a short orbital-period sketch follows the list):

· Low Earth Orbit (LEO: 200 – 1200km above the Earth’s surface)

· Medium Earth Orbit (ICO or MEO: 1200 – 35786 km)

· Geosynchronous Orbit (GEO: 35786 km above Earth’s surface) and Geostationary Orbit (zero-inclination geosynchronous orbit). These orbits are of particular interest for communication satellites and will be discussed in detail later.

· High Earth Orbit (HEO: above 35786 km)
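
To connect these altitude bands to behaviour, here is a minimal Python sketch (my own illustration, not from the original text) that applies Kepler's third law, T = 2*pi*sqrt(a^3 / GM), to a representative altitude in each band; it reproduces the familiar 90-minute low-Earth-orbit period and the roughly 24-hour geosynchronous period discussed later:

# Sketch: circular-orbit period from altitude via Kepler's third law.
import math

GM_EARTH = 3.986004418e14  # Earth's gravitational parameter, m^3/s^2
R_EARTH_M = 6_371_000.0    # mean Earth radius, m

def period_hours(altitude_km: float) -> float:
    """Orbital period (hours) of a circular orbit at the given altitude."""
    a = R_EARTH_M + altitude_km * 1_000.0  # semi-major axis, m
    return 2.0 * math.pi * math.sqrt(a**3 / GM_EARTH) / 3_600.0

for label, alt_km in [("LEO (400 km)", 400),
                      ("MEO (20,200 km, GPS-like)", 20_200),
                      ("GEO (35,786 km)", 35_786)]:
    print(f"{label:27s} period ~ {period_hours(alt_km):5.2f} h")
# Expected output: roughly 1.5 h, 12 h and 24 h respectively.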

The following orbits are special orbits that are also used to categorize satellites:

· Molniya orbits: a class of highly elliptical orbits. A satellite placed in such an orbit spends most of its time over a designated area of the earth, a phenomenon known as apogee dwell. Molniya orbits are named after a series of Soviet/Russian Molniya communications satellites that have used this class of orbits since the mid-1960s.

· Heliosynchronous or sun-synchronous orbit: A heliosynchronous orbit, or more commonly a sun-synchronous orbit is an orbit in which an object always passes over any given point of the Earth’s surface at the same local solar time. This is a useful characteristic for satellites that image the earth’s surface in visible or infrared wavelengths (e.g. weather, spy and remote sensing satellites).

· Polar orbit: A satellite in a polar orbit passes above or nearly above both poles of the planet (or other celestial body) on each revolution.

· Hohmann transfer orbit: For this particular orbit type, it is more common to identify the satellite as a spacecraft. In astronautics and aerospace engineering, the Hohmann transfer orbit is an orbital maneuver, performed with two engine burns, that moves a spacecraft from one orbit to another (a worked sketch follows this list).

· Supersynchronous orbit or drift orbit: orbit above GEO. Satellites will drift in a westerly direction.

· Subsynchronous orbit or drift orbit: orbits close to but below GEO. Used for satellites undergoing station changes in an eastern direction.
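
As a worked example of the Hohmann transfer mentioned in the list above (my own sketch, not part of the original article), the code below computes the two engine burns needed to move from a circular low Earth parking orbit to geostationary altitude; the standard textbook result is roughly 2.4 km/s for the first burn and 1.5 km/s for the second:

# Sketch: delta-v for a Hohmann transfer between two circular orbits.
import math

GM_EARTH = 3.986004418e14  # m^3/s^2
R_EARTH_M = 6_371_000.0

def hohmann_delta_v(alt1_km: float, alt2_km: float):
    """Return (first burn, second burn) in m/s for a Hohmann transfer."""
    r1 = R_EARTH_M + alt1_km * 1_000.0
    r2 = R_EARTH_M + alt2_km * 1_000.0
    # Burn 1: stretch the circular orbit at r1 into the transfer ellipse.
    dv1 = math.sqrt(GM_EARTH / r1) * (math.sqrt(2.0 * r2 / (r1 + r2)) - 1.0)
    # Burn 2: circularize at r2 when the spacecraft reaches apogee.
    dv2 = math.sqrt(GM_EARTH / r2) * (1.0 - math.sqrt(2.0 * r1 / (r1 + r2)))
    return dv1, dv2

dv1, dv2 = hohmann_delta_v(300, 35_786)  # 300 km parking orbit -> GEO altitude
print(f"burn 1: {dv1/1000:.2f} km/s, burn 2: {dv2/1000:.2f} km/s, "
      f"total: {(dv1 + dv2)/1000:.2f} km/s")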

Communication Satellites

A communications satellite (sometimes abbreviated to comsat) is an artificial satellite stationed in space for the purposes of telecommunications. Modern communications satellites use geosynchronous orbits, Molniya orbits or low Earth orbits.

For fixed services, communications satellites provide a technology complementary to that of fiber-optic submarine communication cables. For mobile applications, such as communications to ships and planes, satellite-based communication is often the only viable means of communication, as the application of other technologies, such as cable, is impractical or impossible.

Early missions: The origin of satellite communication can be traced to an article written by Arthur C. Clarke in 1945. He suggested that a radio relay satellite in an equatorial orbit with a period of 24 hours would remain stationary with respect to the earth’s surface and could be used for long-range radio communication, as it would overcome the limitations imposed by the earth’s curvature. Sputnik 1, the world’s first artificial (non-communication) satellite, was launched on October 4, 1957. The first satellite to relay communications was Project SCORE in 1958, which used a tape recorder to store and forward voice messages. It was used to send a Christmas greeting to the world from President Eisenhower. NASA launched an Echo satellite in 1960; this 100-foot aluminized Mylar balloon served as a passive reflector for radio communications. Courier 1B (built by Philco), also launched in 1960, was the world’s first active repeater satellite. Given below are the milestones in satellite communication history:
· Herman Potocnik – describes a space station in geosynchronous orbit – 1928
· Arthur C. Clarke – proposes a station in geosynchronous orbit to relay communications and broadcast television –   1945
· Project SCORE – first communications satellite – 1958
· Echo I – first passive reflector satellite – August 1960
· Courier 1B – first active repeater satellite – October 1960
· Telstar – the first active direct relay satellite designed to transmit television and high-speed data  communications. Telstar was placed in an elliptical orbit (completed once every 2 hours and 37 minutes), rotating  at a 45° angle above the equator. July 1962
· Syncom – first communications satellite in geosynchronous orbit. Syncom 2 revolved around the earth once per  day at constant speed, but because it still had north-south motion special equipment was needed to track it. 1963
· OSCAR-III – first amateur radio communications satellite – March 1965
· Molniya – first Soviet communication satellite, highly elliptic orbit – October 1965
· Early Bird – INTELSAT’s first satellite for commercial service – April 1965
· Orbita – first national TV network based on satellite television – November 1967
· Anik 1 – the first national satellite television system, Canada, – 1973
· Westar 1, the USA’s first geosynchronous communications satellite – April 1974
· Ekran – first serial Direct-To-Home TV communication satellite 1976
· Palapa A1 – first Indonesian communications satellite – July 8, 1976
· TDRSS – first satellite designed to provide communications relay services for other spacecraft. – 1983
· Mars Global Surveyor – first communications satellite in orbit around another planet (Mars) – 1997
· Cassini spacecraft relays to Earth images from the Huygens probe as it lands on Saturn’s moon, Titan, the longest relay to date. — January 14, 2005

Depending on the need, communication satellites can be placed in various types of orbits. We discuss a few common types:

(a) Geostationary orbit satellites: A satellite in a geostationary orbit appears to be in a fixed position to an earth-based observer. A geostationary satellite revolves around the earth at a constant speed once per day over the equator. The geostationary orbit is useful for communications applications because ground-based antennas, which must be directed toward the satellite, can operate effectively without the need for expensive equipment to track the satellite’s motion. Especially for applications that require a large number of ground antennas (such as direct TV distribution), the savings in ground equipment can more than justify the extra cost and onboard complexity of lifting a satellite into the relatively high geostationary orbit.

The concept of the geostationary communications satellite was first proposed by Arthur C. Clarke, building on work by Konstantin Tsiolkovsky and on the 1929 work by Herman Potočnik (writing as Herman Noordung) Das Problem der Befahrung des Weltraums – der Raketen-motor. In October 1945 Clarke published an article titled “Extra-terrestrial Relays” in the British magazine Wireless World. The article described the fundamentals behind the deployment of artificial satellites in geostationary orbits for the purpose of relaying radio signals. Thus Arthur C. Clarke is often quoted as being the inventor of the communications satellite.

The first domestic geostationary communications satellite was Anik 1, a Canadian satellite launched in 1972. The United States launched its own domestic geostationary communication satellites afterward, with Western Union launching its Westar 1 satellite in 1974, and RCA Americom (later GE Americom, now SES Americom) launching Satcom 1 in 1975.
It was Satcom 1 that was instrumental in helping early cable TV channels such as WTBS (now TBS Superstation), HBO, CBN (now ABC Family), and The Weather Channel become successful, because these channels distributed their programming to all of the local cable TV headends using the satellite. Additionally, it was the first satellite used by broadcast TV networks in the United States, like ABC, NBC, and CBS, to distribute their programming to all of their local affiliate stations. The reason that Satcom 1 was so widely used is that it had twice the communications capacity of Westar 1 (24 transponders as opposed to Westar 1’s 12), which resulted in lower transponder usage costs.

By 2000 Hughes Space and Communications (now Boeing Satellite Systems) had built nearly 40 percent of the satellites in service worldwide. Other major satellite manufacturers include Space Systems/Loral, Lockheed Martin (owns former RCA Astro Electronics/GE Astro Space business), Northrop Grumman, Alcatel Space and EADS Astrium.

(b) Low-Earth-orbiting satellites: A low Earth orbit is typically a circular orbit about 150 kilometers above the earth’s surface with, correspondingly, a period (time to revolve around the earth) of about 90 minutes. Because of their low altitude, these satellites are only visible from within a radius of roughly 1000 kilometers from the sub-satellite point. In addition, satellites in low earth orbit change their position relative to the ground position quickly, so even for local applications a large number of satellites are needed if the mission requires uninterrupted connectivity.
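
The rough 1000-kilometre visibility figure can be checked with a little spherical geometry. The sketch below (my own illustration, using round values rather than anything from the article) computes the ground distance from the sub-satellite point to the edge of coverage for a receiver that needs the satellite some minimum angle above its horizon:

# Sketch: ground footprint radius of a low-Earth-orbit satellite.
import math

R_EARTH_KM = 6_371.0

def footprint_radius_km(altitude_km: float, min_elevation_deg: float) -> float:
    """Ground range from the sub-satellite point to the coverage edge,
    for a receiver requiring the satellite at least min_elevation_deg
    above its local horizon."""
    eps = math.radians(min_elevation_deg)
    ratio = R_EARTH_KM / (R_EARTH_KM + altitude_km)
    central_angle = math.acos(ratio * math.cos(eps)) - eps
    return R_EARTH_KM * central_angle

print(f"{footprint_radius_km(150, 0):.0f} km  (geometric horizon for a 150 km orbit)")
print(f"{footprint_radius_km(150, 5):.0f} km  (with a 5-degree elevation mask: close to the rough 1000 km figure)")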

Low earth orbiting satellites are less expensive to position in space than geostationary satellites and, because of their closer proximity to the ground, require lower signal strength. So there is a trade off between the number of satellites and their cost. In addition, there are important differences in the onboard and ground equipment needed to support the two types of missions.

A group of satellites working in concert in this way is known as a satellite constellation. Two such constellations, intended to provide handheld telephony primarily to remote areas, were Iridium and Globalstar. The Iridium system has 66 satellites. Another proposed LEO satellite constellation, with backing from Microsoft co-founder Paul Allen, was to have as many as 720 satellites. It is also possible to offer discontinuous coverage using a low Earth orbit satellite capable of storing data received while passing over one part of Earth and transmitting it later while passing over another part. This will be the case with the CASCADE system of Canada’s CASSIOPE communications satellite.

(c) Molniya satellites: As mentioned, geostationary satellites are constrained to operate above the equator. As a consequence, they are not always suitable for providing services at high latitudes: at high latitudes a geostationary satellite may appear low on (or even below) the horizon, affecting connectivity and causing multipathing (interference caused by signals reflecting off the ground into the ground antenna). The first satellite of the Molniya series was launched on April 23, 1965 and was used for experimental transmission of TV signals from a Moscow uplink station to downlink stations located in the Russian Far East, in Khabarovsk, Magadan and Vladivostok. In November 1967 Soviet engineers created a unique national satellite-television network, called Orbita, based on Molniya satellites.

Molniya orbits can be an appealing alternative in such cases. The Molniya orbit is highly inclined, guaranteeing good elevation over selected positions during the northern portion of the orbit. (Elevation is the extent of the satellite’s position above the horizon. Thus a satellite at the horizon has zero elevation and a satellite directly overhead has elevation of 90 degrees). Furthermore, the Molniya orbit is so designed that the satellite spends the great majority of its time over the far northern latitudes, during which its ground footprint moves only slightly. Its period is one half day, so that the satellite is available for operation over the targeted region for eight hours every second revolution. In this way a constellation of three Molniya satellites (plus in-orbit spares) can provide uninterrupted coverage.
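
To put numbers on that half-day period (again a back-of-the-envelope sketch of my own, using standard round values): Kepler's third law fixes the semi-major axis of a half-sidereal-day orbit at about 26,560 km, and choosing a low perigee then pushes the apogee far out over the northern hemisphere, which is exactly the apogee-dwell behaviour described earlier.

# Sketch: basic geometry of a Molniya-type half-day orbit.
import math

GM_EARTH = 3.986004418e14  # m^3/s^2
R_EARTH_KM = 6_371.0
HALF_SIDEREAL_DAY_S = 86_164.1 / 2.0  # about 11 h 58 min

# Kepler's third law gives the semi-major axis for the chosen period.
a_km = (GM_EARTH * (HALF_SIDEREAL_DAY_S / (2.0 * math.pi)) ** 2) ** (1.0 / 3.0) / 1_000.0

perigee_alt_km = 1_000.0               # assumed low perigee altitude
r_perigee = R_EARTH_KM + perigee_alt_km
r_apogee = 2.0 * a_km - r_perigee      # from r_perigee + r_apogee = 2a
eccentricity = (r_apogee - r_perigee) / (r_apogee + r_perigee)

print(f"semi-major axis ~ {a_km:,.0f} km")                   # ~26,560 km
print(f"apogee altitude ~ {r_apogee - R_EARTH_KM:,.0f} km")  # ~39,400 km
print(f"eccentricity    ~ {eccentricity:.2f}")               # ~0.72, i.e. highly elliptical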

Molniya satellites are typically used for telephony and TV services over Russia. Another application is to use them for mobile radio systems (even at lower latitudes) since cars traveling through urban areas need access to satellites at high elevation in order to secure good connectivity, e.g. in the presence of tall buildings.

Applications of Satellites

(a) Telephony: One of the major applications of a communication satellite is the provision of long-distance telephone services. Connectivity is predominantly through frequency division multiple access (FDMA) or time division multiple access (TDMA). Telephone subscribers are connected through a network of exchanges, which are in turn connected to satellite earth stations that uplink the traffic to the satellite for further processing.
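
To illustrate what FDMA and TDMA mean in practice (a generic sketch with assumed figures, not numbers from the article): FDMA splits a transponder's bandwidth into fixed frequency channels, one per call, while TDMA gives every call the full bandwidth but only during a short, recurring time slot.

# Sketch: two ways to share one satellite transponder among phone calls.
# All figures below are illustrative assumptions, not from the article.

TRANSPONDER_BW_HZ = 36e6   # a common transponder bandwidth
FDMA_CHANNEL_HZ = 45e3     # assumed per-call channel, including guard band
fdma_calls = int(TRANSPONDER_BW_HZ // FDMA_CHANNEL_HZ)
print(f"FDMA: {fdma_calls} simultaneous calls, one narrow channel each")

TDMA_FRAME_MS = 20.0       # assumed frame length
TDMA_SLOT_MS = 0.025       # assumed burst length per call per frame
tdma_calls = int(TDMA_FRAME_MS // TDMA_SLOT_MS)
print(f"TDMA: {tdma_calls} simultaneous calls, each using the full bandwidth "
      f"for {TDMA_SLOT_MS} ms out of every {TDMA_FRAME_MS} ms frame")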

(b) Television and Radio: There are two types of satellites used for television and radio:

(i) Direct Broadcast Satellite (DBS): A direct broadcast satellite is a communications satellite that transmits to small DBS satellite dishes (usually 18″ to 24″ in diameter). Direct broadcast satellites generally operate in the upper portion of the Ku band. DBS technology is used for DTH-oriented (Direct-To-Home) satellite TV services, such as DirecTV and Dish Network in the United States, ExpressVu in Canada, and Sky Digital in the UK.

(ii) Fixed Service Satellite (FSS): Use the C band, and the lower portions of the Ku bands. They are normally used for broadcast feeds to and from television networks and local affiliate stations (such as program feeds for network and syndicated programming, live shots, and backhauls), as well as being used for distance learning by schools & universities, business television (BTV), videoconferencing, and general commercial telecommunications. FSS satellites are also used to distribute national cable channels to cable TV headends. FSS satellites differ from DBS satellites in that they have a lower RF power output than the latter, requiring a much larger dish for reception (3 to 8 feet in diameter for Ku band, and 12 feet on up for C band). FSS satellite technology was also originally used for DTH satellite TV from the late 1970s to the early 1990s in the USA in the form of TVRO (TeleVision Receive Only) receivers and dishes (a.k.a. big-dish, or more pejoratively known as big ugly dish, systems). It was also used in its Ku band form for the now-defunct Primestar satellite TV service.

(c) Mobile satellite technologies: Satellite broadcasting was initially available only to stationary TV receivers, but by 2004 popular mobile direct broadcast applications had made their appearance with the arrival of two satellite radio systems in the United States: Sirius and XM Satellite Radio Holdings. Some manufacturers have also introduced special antennas for mobile reception of DBS television. Using GPS technology as a reference, these antennas automatically re-aim at the satellite no matter where or how the vehicle (that the antenna is mounted on) is situated. These mobile satellite antennas are popular with some recreational vehicle owners. Such mobile DBS antennas are also used by JetBlue Airways for DirecTV (supplied by LiveTV, a subsidiary of JetBlue), which passengers can view on board on LCD screens mounted in the seats.

(d) Amateur radio: Amateur radio operators have access to the OSCAR satellites that have been designed specifically to carry amateur radio traffic. Most such satellites operate as space borne repeaters, and are generally accessed by amateurs equipped with UHF or VHF radio equipment and highly directional antennas such as Yagis or dish antennas. Due to the limitations of ground-based amateur equipment, most amateur satellites are launched into fairly low Earth orbits, and are designed to deal with only a limited number of brief contacts at any given time. Some satellites also provide data-forwarding services using the X.25 or similar protocols.

Satellite Broadband Services: In recent years, satellite communication technology has been used as a means to connect to the Internet via broadband data connections. This can be very useful for users located in very remote areas who cannot access a wireline broadband or dialup connection.

Countries with satellite launch capability

This list includes countries with an independent capability to place satellites in orbit, including production of the necessary launch vehicle. Many more countries have built satellites that were launched with the aid of others. The French and British capabilities are now subsumed by the European Union under the European Space Agency.

First launch by country
Country Year of first launch First satellite
India 1980 “Rohini”
Russia 1957 “Sputnik 1”
United States 1958 “Explorer 1”
France 1965 “Asterix”
Japan 1970 “Osumi”
China 1970 “Dong Fang Hong I”
United Kingdom 1971 “Prospero X-3”
European Union 1979 “Ariane 1”
Israel 1988 “Ofeq 1”
Iran 2005 “Sina 1”

In 1998, North Korea claimed to have launched a satellite, but this was never confirmed and was widely believed to be a cover for the test launch of the Taepodong-1 missile over Japan.