Optical fibers

Optical fiber


An optical fiber (or optical fibre) is a flexible, transparent fiber made by drawing glass (silica) or plastic to a diameter slightly thicker than that of a human hair. Optical fibers are used most often as a means to transmit light between the two ends of the fiber and find wide usage in fiber-optic communications, where they permit transmission over longer distances and at higher bandwidths (data rates) than wire cables. Fibers are used instead of metal wires because signals travel along them with less loss; in addition, fibers are immune to electromagnetic interference, a problem from which metal wires suffer excessively. Fibers are also used for illumination, and are wrapped in bundles so that they may be used to carry images, thus allowing viewing in confined spaces, as in the case of a fiberscope. Specially designed fibers are also used for a variety of other applications, among them fiber optic sensors and fiber lasers.
Optical fibers typically include a transparent core surrounded by a transparent cladding material with a lower index of refraction. Light is kept in the core by the phenomenon of total internal reflection, which causes the fiber to act as a waveguide. Fibers that support many propagation paths or transverse modes are called multi-mode fibers (MMF), while those that support a single mode are called single-mode fibers (SMF). Multi-mode fibers generally have a wider core diameter and are used for short-distance communication links and for applications where high power must be transmitted. Single-mode fibers are used for most communication links longer than 1,000 meters (3,300 ft).
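Since total internal reflection is what traps light in the core, a fiber's acceptance of light follows directly from the two refractive indices. Here is a minimal sketch of that calculation; the index values are illustrative assumptions, not figures from this article:

```python
import math

n_core = 1.4475  # assumed core refractive index (illustrative value)
n_clad = 1.4440  # assumed cladding refractive index (illustrative value)

# Total internal reflection occurs for rays hitting the core/cladding
# boundary at angles (from the normal) larger than the critical angle.
critical_angle = math.degrees(math.asin(n_clad / n_core))

# Numerical aperture: sine of the half-angle of the cone of light
# the fiber can accept from air.
numerical_aperture = math.sqrt(n_core**2 - n_clad**2)

print(f"critical angle: {critical_angle:.1f} degrees")  # ~86.0 degrees
print(f"numerical aperture: {numerical_aperture:.3f}")  # ~0.101
```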
An important aspect of fiber-optic communication is the extension of fiber optic cables such that the losses brought about by joining two different cables are kept to a minimum. Joining lengths of optical fiber often proves to be more complex than joining electrical wire or cable, and involves careful cleaving of the fibers, precise alignment of the fiber cores, and splicing of these aligned cores. For applications that demand a permanent connection, either a mechanical splice, which holds the ends of the fibers together mechanically, or a fusion splice, which uses heat to fuse the ends of the fibers together, can be used. Temporary or semi-permanent connections are made by means of specialized optical fiber connectors.
The field of applied science and engineering concerned with the design and application of optical fibers is known as fiber optics.

History


Daniel Colladon first described this “light fountain” or “light pipe” in an 1842 article titled On the reflections of a ray of light inside a parabolic liquid stream. This particular illustration comes from a later article by Colladon, in 1884.
Guiding of light by refraction, the principle that makes fiber optics possible, was first demonstrated by Daniel Colladon and Jacques Babinet in Paris in the early 1840s. John Tyndall included a demonstration of it in his public lectures in London, 12 years later. Tyndall also wrote about the property of total internal reflection in an introductory book about the nature of light in 1870:
When the light passes from air into water, the refracted ray is bent towardsthe perpendicular... When the ray passes from water to air it is bent from the perpendicular... If the angle which the ray in water encloses with the perpendicular to the surface be greater than 48 degrees, the ray will not quit the water at all: it will be totally reflected at the surface.... The angle which marks the limit where total reflection begins is called the limiting angle of the medium. For water this angle is 48°27′, for flint glass it is 38°41′, while for diamond it is 23°42′.
Unpigmented human hairs have also been shown to act as an optical fiber.
Practical applications, such as close internal illumination during dentistry, appeared early in the twentieth century. Image transmission through tubes was demonstrated independently by the radio experimenter Clarence Hansell and the television pioneer John Logie Baird in the 1920s. The principle was first used for internal medical examinations by Heinrich Lamm in the following decade. Modern optical fibers, where the glass fiber is coated with a transparent cladding to offer a more suitable refractive index, appeared later in the decade. Development then focused on fiber bundles for image transmission. Harold Hopkins and Narinder Singh Kapany at Imperial College in London achieved low-loss light transmission through a 75 cm long bundle which combined several thousand fibers. Their article titled "A flexible fibrescope, using static scanning" was published in the journal Nature in 1954. The first fiber optic semi-flexible gastroscope was patented by Basil Hirschowitz, C. Wilbur Peters, and Lawrence E. Curtiss, researchers at the University of Michigan, in 1956. In the process of developing the gastroscope, Curtiss produced the first glass-clad fibers; previous optical fibers had relied on air or impractical oils and waxes as the low-index cladding material.
A variety of other image transmission applications soon followed.
In 1880 Alexander Graham Bell and Sumner Tainter invented the Photophone at the Volta Laboratory in Washington, D.C., to transmit voice signals over an optical beam. It was an advanced form of telecommunications, but subject to atmospheric interference and impractical until the secure transport of light offered by fiber-optic systems. In the late 19th and early 20th centuries, light was guided through bent glass rods to illuminate body cavities. Jun-ichi Nishizawa, a Japanese scientist at Tohoku University, also proposed the use of optical fibers for communications in 1963, as stated in his book published in 2004 in India. Nishizawa invented other technologies that contributed to the development of optical fiber communications, such as the graded-index optical fiber as a channel for transmitting light from semiconductor lasers. The first working fiber-optic data transmission system was demonstrated by German physicist Manfred Börner at Telefunken Research Labs in Ulm in 1965, which was followed by the first patent application for this technology in 1966. Charles K. Kao and George A. Hockham of the British company Standard Telephones and Cables (STC) were the first to promote the idea that the attenuation in optical fibers could be reduced below 20 decibels per kilometer (dB/km), making fibers a practical communication medium. They proposed that the attenuation in fibers available at the time was caused by impurities that could be removed, rather than by fundamental physical effects such as scattering. They correctly and systematically theorized the light-loss properties for optical fiber, and pointed out the right material to use for such fibers: silica glass with high purity. This discovery earned Kao the Nobel Prize in Physics in 2009.
NASA used fiber optics in the television cameras that were sent to the moon. At the time, the use in the cameras was classified confidential, and only those with sufficient security clearance or those accompanied by someone with the right security clearance were permitted to handle the cameras.
The crucial attenuation limit of 20 dB/km was first achieved in 1970, by researchers Robert D. Maurer, Donald Keck, Peter C. Schultz, and Frank Zimar working for American glass maker Corning Glass Works, now Corning Incorporated. They demonstrated a fiber with 17 dB/km attenuation by doping silica glass with titanium. A few years later they produced a fiber with only 4 dB/km attenuation using germanium dioxide as the core dopant. Such low attenuation ushered in the era of optical fiber telecommunication. In 1981, General Electric produced fused quartz ingots that could be drawn into strands 25 miles (40 km) long.
Attenuation in modern optical cables is far less than in electrical copper cables, leading to long-haul fiber connections with repeater distances of 70–150 kilometers (43–93 mi). The erbium-doped fiber amplifier, which reduced the cost of long-distance fiber systems by reducing or eliminating optical-electrical-optical repeaters, was co-developed by teams led by David N. Payne of the University of Southampton and Emmanuel Desurvire at Bell Labs in 1986. Robust modern optical fiber uses glass for both core and sheath, and is therefore less prone to aging. It was invented by Gerhard Bernsee of Schott Glass in Germany in 1973.
The emerging field of photonic crystals led to the development in 1991 of photonic-crystal fiber, which guides light by diffraction from a periodic structure, rather than by total internal reflection. The first photonic crystal fibers became commercially available in 2000. Photonic crystal fibers can carry higher power than conventional fibers and their wavelength-dependent properties can be manipulated to improve performance.

Uses

Communication

Main article: Fiber-optic communication
Optical fiber can be used as a medium for telecommunication and computer networking because it is flexible and can be bundled as cables. It is especially advantageous for long-distance communications, because light propagates through the fiber with little attenuation compared to electrical cables. This allows long distances to be spanned with few repeaters.
The per-channel light signals propagating in the fiber have been modulated at rates as high as 111 gigabits per second (Gbit/s) by NTT, although 10 or 40 Gbit/s is typical in deployed systems. In June 2013, researchers demonstrated transmission of 400 Gbit/s over a single channel using 4-mode orbital angular momentum multiplexing.
Each fiber can carry many independent channels, each using a different wavelength of light (wavelength-division multiplexing, WDM). The net data rate (data rate without overhead bytes) per fiber is the per-channel data rate reduced by the FEC overhead, multiplied by the number of channels (usually up to eighty in commercial dense WDM systems as of 2008). As of 2011 the record for bandwidth on a single core was 101 Tbit/s (370 channels at 273 Gbit/s each). The record for a multi-core fiber as of January 2013 was 1.05 petabits per second. In 2009, Bell Labs broke the 100 (petabit per second)×kilometer barrier (15.5 Tbit/s over a single 7,000 km fiber).
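The net-rate arithmetic just described is easy to make concrete. A small sketch, where the channel count follows the figure above but the line rate and 7% FEC overhead are assumed example values:

```python
# Net rate per fiber = per-channel rate minus FEC overhead, times channel count.
channels = 80          # up to eighty channels in commercial dense WDM (as of 2008)
line_rate_gbit = 40.0  # assumed per-channel line rate in Gbit/s
fec_overhead = 0.07    # assumed 7% forward-error-correction overhead

net_per_channel = line_rate_gbit * (1 - fec_overhead)
net_per_fiber = net_per_channel * channels

print(f"net rate per channel: {net_per_channel:.1f} Gbit/s")       # 37.2 Gbit/s
print(f"net rate per fiber:   {net_per_fiber / 1000:.2f} Tbit/s")  # 2.98 Tbit/s
```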
For short-distance applications, such as a network in an office building, fiber-optic cabling can save space in cable ducts. This is because a single fiber can carry much more data than electrical cables such as standard category 5 Ethernet cabling, which typically runs at 100 Mbit/s or 1 Gbit/s speeds. Fiber is also immune to electrical interference; there is no cross-talk between signals in different cables, and no pickup of environmental noise. Non-armored fiber cables do not conduct electricity, which makes fiber a good solution for protecting communications equipment in high voltage environments, such as power generation facilities, or metal communication structures prone to lightning strikes. They can also be used in environments where explosive fumes are present, without danger of ignition. Wiretapping (in this case, fiber tapping) is more difficult compared to electrical connections, and there are concentric dual-core fibers that are said to be tap-proof.
Fibers are often also used for short-distance connections between devices. For example, most high-definition televisions offer a digital audio optical connection. This allows the streaming of audio over light, using the TOSLINK protocol.

Advantages over copper wiring

The advantages of optical fiber communication with respect to copper wire systems are:
Broad bandwidth
A single optical fiber can carry 3,000,000 full-duplex voice calls or 90,000 TV channels.
Immunity to electromagnetic interference
Light transmission through optical fibers is unaffected by other electromagnetic radiation nearby. The optical fiber is electrically non-conductive, so it does not act as an antenna to pick up electromagnetic signals. Information traveling inside the optical fiber is immune to electromagnetic interference, even electromagnetic pulses generated by nuclear devices.
Low attenuation loss over long distances
Attenuation loss can be as low as 0.2 dB/km in optical fiber cables, allowing transmission over long distances without the need for repeaters; a worked power-budget example follows this list.
Electrical insulator
Optical fibers do not conduct electricity, preventing problems with ground loops and conduction of lightning. Optical fibers can be strung on poles alongside high voltage power cables.
Material cost and theft prevention
Conventional cable systems use large amounts of copper. In some places, this copper is a target for theft due to its value on the scrap market.
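To see what a 0.2 dB/km loss figure implies in practice, the sketch below estimates the longest unrepeated span from a simple power budget; the transmit power and receiver sensitivity are assumed illustrative values:

```python
# Fiber attenuation in dB grows linearly with distance, so the maximum
# span is simply the power budget divided by the per-km loss.
attenuation_db_per_km = 0.2   # low-loss figure quoted above
tx_power_dbm = 3.0            # assumed launch power (illustrative)
rx_sensitivity_dbm = -28.0    # assumed receiver sensitivity (illustrative)

power_budget_db = tx_power_dbm - rx_sensitivity_dbm
max_span_km = power_budget_db / attenuation_db_per_km

print(f"power budget: {power_budget_db:.1f} dB")         # 31.0 dB
print(f"maximum unrepeated span: {max_span_km:.0f} km")  # 155 km
```

A result of about 155 km is in line with the 70–150 km repeater spacings mentioned in the history section above.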

Sensors

Main article: Fiber optic sensor
Fibers have many uses in remote sensing. In some applications, the sensor is itself an optical fiber. In other cases, fiber is used to connect a non-fiberoptic sensor to a measurement system. Depending on the application, fiber may be used because of its small size, or the fact that no electrical power is needed at the remote location, or because many sensors can be multiplexed along the length of a fiber by using different wavelengths of light for each sensor, or by sensing the time delay as light passes along the fiber through each sensor. Time delay can be determined using a device such as an optical time-domain reflectometer.
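The time-delay technique mentioned above works like radar along the fiber: an optical time-domain reflectometer launches a pulse and converts each reflection's round-trip time into a position. A minimal sketch of that conversion; the group index is an assumed typical value:

```python
C_VACUUM_M_PER_S = 299_792_458.0
GROUP_INDEX = 1.468  # assumed group index of a silica fiber core

def reflection_distance_m(round_trip_s: float) -> float:
    """Convert a reflection's round-trip delay into distance along the fiber."""
    speed_in_fiber = C_VACUUM_M_PER_S / GROUP_INDEX
    return speed_in_fiber * round_trip_s / 2  # halved: the pulse goes out and back

# Example: a reflection arriving 10 microseconds after the pulse left
print(f"{reflection_distance_m(10e-6):.0f} m")  # ~1021 m down the fiber
```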
Optical fibers can be used as sensors to measure strain, temperature, pressure and other quantities by modifying a fiber so that the property to measure modulates the intensity, phase, polarization, wavelength, or transit time of light in the fiber. Sensors that vary the intensity of light are the simplest, since only a simple source and detector are required. A particularly useful feature of such fiber optic sensors is that they can, if required, provide distributed sensing over distances of up to one meter. In contrast, highly localized measurements can be provided by integrating miniaturized sensing elements with the tip of the fiber. These can be implemented by various micro- and nanofabrication technologies, such that they do not exceed the microscopic boundary of the fiber tip, allowing such applications as insertion into blood vessels via hypodermic needle.
Extrinsic fiber optic sensors use an optical fiber cable, normally a multi-mode one, to transmit modulated light from either a non-fiber optical sensor or an electronic sensor connected to an optical transmitter. A major benefit of extrinsic sensors is their ability to reach otherwise inaccessible places. An example is the measurement of temperature inside aircraft jet engines by using a fiber to transmit radiation into a radiation pyrometer outside the engine. Extrinsic sensors can be used in the same way to measure the internal temperature of electrical transformers, where the extreme electromagnetic fields present make other measurement techniques impossible. Extrinsic sensors measure vibration, rotation, displacement, velocity, acceleration, torque, and twisting. A solid state version of the gyroscope, using the interference of light, has been developed. The fiber optic gyroscope (FOG) has no moving parts, and exploits the Sagnac effect to detect mechanical rotation.
Common uses for fiber optic sensors include advanced intrusion detection security systems. The light is transmitted along a fiber optic sensor cable placed on a fence, pipeline, or communication cabling, and the returned signal is monitored and analyzed for disturbances. The return signal is digitally processed to detect disturbances and trip an alarm if an intrusion has occurred.

Power transmission

Optical fiber can be used to transmit power using a photovoltaic cell to convert the light into electricity. While this method of power transmission is not as efficient as conventional ones, it is especially useful in situations where it is desirable not to have a metallic conductor as in the case of use near MRI machines, which produce strong magnetic fields. Other examples are for powering electronics in high-powered antenna elements and measurement devices used in high-voltage transmission equipment.
Introduction to Ring Topology

Ring network


A ring network is a network topology in which each node connects to exactly two other nodes, forming a single continuous pathway for signals through each node: a ring. Data travel from node to node, with each node along the way handling every packet.
Rings can be unidirectional, with all traffic travelling either clockwise or anticlockwise around the ring, or bidirectional (as in SONET/SDH). Because a unidirectional ring topology provides only one pathway between any two nodes, unidirectional ring networks may be disrupted by the failure of a single link. A node failure or cable break might isolate every node attached to the ring. In response, some ring networks add a "counter-rotating ring" (C-Ring) to form a redundant topology: in the event of a break, data are wrapped back onto the complementary ring before reaching the end of the cable, maintaining a path to every node along the resulting C-Ring. Such "dual ring" networks include Spatial Reuse Protocol, Fiber Distributed Data Interface (FDDI), and Resilient Packet Ring. 802.5 networks - also known as IBM token ring networks - avoid the weakness of a ring topology altogether: they actually use a star topology at the physical layer and a media access unit (MAU) to imitate a ring at the data link layer.
Some SONET/SDH rings have two sets of bidirectional links between nodes. This allows maintenance or failures at multiple points of the ring usually without loss of the primary traffic on the outer ring by switching the traffic onto the inner ring past the failure points.
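As a toy illustration of the failure behavior described above, the sketch below walks a unidirectional ring and shows how a counter-rotating ring restores connectivity; the node names and broken link are invented, and the counter-rotating ring is modeled simply as the same ring traversed in the opposite direction:

```python
def reachable(nodes, broken_link, src, dst):
    """Walk a unidirectional ring one hop at a time; fail if we hit the cut link."""
    i = nodes.index(src)
    while nodes[i] != dst:
        nxt = (i + 1) % len(nodes)
        if (nodes[i], nodes[nxt]) == broken_link:
            return False  # the single pathway is cut
        i = nxt
    return True

ring = ["A", "B", "C", "D"]  # invented nodes; traffic flows A->B->C->D->A
failure = ("B", "C")         # invented broken link

print(reachable(ring, failure, "A", "D"))        # False: the only path is blocked
print(reachable(ring[::-1], failure, "A", "D"))  # True: the counter-rotating
# direction reaches D without crossing the cut (real dual rings wrap traffic
# onto a physically separate fiber before the break)
```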

Advantages

See also: Ring Protection
  • Very orderly network where every device has access to the token and the opportunity to transmit
  • Performs better than a bus topology under heavy network load
  • Does not require a central node to manage the connectivity between the computers
  • Due to the point to point line configuration of devices with a device on either side (each device is connected to its immediate neighbor), it is quite easy to install and reconfigure since adding or removing a device requires moving just two connections.
  • Point to point line configuration makes it easy to identify and isolate faults.
  • Reconfiguration for line faults of bidirectional rings can be very fast, as switching happens at a high level, and thus the traffic does not require individual rerouting

Disadvantages

  • One malfunctioning workstation can create problems for the entire network. This can be solved by using a dual ring or a switch that closes off the break.
  • Moving, adding and changing the devices can affect the network
  • Communication delay is directly proportional to the number of nodes in the network
  • Bandwidth is shared on all links between devices
  • More difficult to configure than a star: adding a node requires shutting down and reconfiguring the ring

Misconceptions

  • "Token Ring is an example of a ring topology." 802.5 (Token Ring) networks do not use a ring topology at layer 1. As explained above, IBM Token Ring (802.5) networks imitate a ring at layer 2 but use a physical star at layer 1.
  • "Rings prevent collisions." The term "ring" only refers to the layout of the cables. It is true that there are no collisions on an IBM Token Ring, but this is because of the layer 2 Media Access Control method, not the physical topology (which again is a star, not a ring.) Token passing, not rings, prevent collisions.
  • "Token passing happens on rings." Token passing is a way of managing access to the cable, implemented at the MAC sublayer of layer 2. Ring topology is the cable layout at layer one. It is possible to do token passing on a bus (802.4) a star (802.5) or a ring (FDDI). Token passing is not restricted to rings.
How Search Engines Work (Continued)



Market share

Google is the world's most popular search engine, with a market share of 66.44 percent as of December 2014. Baidu comes in second place.
The world's most popular search engines are:[16]
Search engine    Market share in December 2014
Google           66.44%
Baidu            11.15%
Bing             10.29%
Yahoo!           9.31%
AOL              0.53%
Ask              0.21%
Lycos            0.01%

Search engine    Market share in October 2014
Google           58.01%
Baidu            29.06%
Bing             8.01%
Yahoo!           4.01%
AOL              0.21%
Ask              0.10%
Excite           0.00%

Search engine    Market share in July 2014
Google           68.69%
Baidu            17.17%
Yahoo!           6.74%
Bing             6.22%
Excite           0.22%
Ask              0.13%
AOL              0.13%

East Asia and Russia

East Asian countries and Russia constitute a few places where Google is not the most popular search engine.
Yandex commands a market share of 61.9 percent in Russia, compared to Google's 28.3 percent. In China, Baidu is the most popular search engine. South Korea's homegrown search portal, Naver, is used for 70 percent of online searches in the country. Yahoo! Japan and Yahoo! Taiwan are the most popular avenues for internet search in Japan and Taiwan, respectively.

Search engine bias

Although search engines are programmed to rank websites based on some combination of their popularity and relevancy, empirical studies indicate various political, economic, and social biases in the information they provide. These biases can be a direct result of economic and commercial processes (e.g., companies that advertise with a search engine can also become more popular in its organic search results), and of political processes (e.g., the removal of search results to comply with local laws). For example, Google will not surface certain Neo-Nazi websites in France and Germany, where Holocaust denial is illegal.
Biases can also be a result of social processes, as search engine algorithms are frequently designed to exclude non-normative viewpoints in favor of more "popular" results. Indexing algorithms of major search engines skew towards coverage of U.S.-based sites, rather than websites from non-U.S. countries.
Google Bombing is one example of an attempt to manipulate search results for political, social or commercial reasons.

Customized results and filter bubbles

Many search engines such as Google and Bing provide customized results based on the user's activity history. This leads to an effect that has been called a filter bubble. The term describes a phenomenon in which websites use algorithms to selectively guess what information a user would like to see, based on information about the user (such as location, past click behaviour and search history). As a result, websites tend to show only information that agrees with the user's past viewpoint, effectively isolating the user in a bubble that tends to exclude contrary information. Prime examples are Google's personalized search results and Facebook's personalized news stream. According to Eli Pariser, who coined the term, users get less exposure to conflicting viewpoints and are isolated intellectually in their own informational bubble. Pariser related an example in which one user searched Google for "BP" and got investment news about British Petroleum, while another searcher got information about the Deepwater Horizon oil spill, and noted that the two search results pages were "strikingly different". The bubble effect may have negative implications for civic discourse, according to Pariser.
Since this problem has been identified, competing search engines have emerged that seek to avoid this problem by not tracking or "bubbling" users.

Faith-based search engines

The global growth of the Internet and the popularity of electronic content in the Arab and Muslim world during the last decade have encouraged faith adherents, notably in the Middle East and the Asian subcontinent, to dream of their own faith-based, i.e. "Islamic", search engines or filtered search portals that would enable users to avoid accessing forbidden websites, such as pornography, and would only allow them to access sites that are compatible with the Islamic faith. Shortly before the Muslim holy month of Ramadan, in July 2013, Halal-googling, which collects results from other search engines like Google and Bing, was introduced to present halal results to its users. This came nearly two years after I'm-Halal, another search engine initially launched in September 2011 to serve the Middle East, had to close its search service due to what its owner blamed on a lack of funding.
While a lack of investment and the slow pace of technology adoption in the Muslim world, as the main consumers or targeted end users, have hindered progress and thwarted the success of a serious Islamic search engine, the spectacular failure of heavily invested Muslim lifestyle web projects like Muxlim, which received millions of dollars from investors like Rite Internet Ventures, has - according to the I'm-Halal shutdown notice - made almost laughable the idea that the next Facebook or Google can come from the Middle East simply by supporting its bright youth. Yet Muslim internet experts have for years been determining what is or is not allowed according to the "Law of Islam", categorizing websites as either "halal" or "haram". All existing and past Islamic search engines are merely custom search indexes, monetized by major search giants like Google, Yahoo and Bing, with filtering systems applied to ensure that their users cannot access haram sites, such as those featuring nudity, homosexuality, gambling or anything else deemed anti-Islamic.
Another religiously oriented search engine is Jewogle, a Jewish version of Google; yet another is SeekFind.org, a Christian website that includes filters preventing users from seeing anything on the internet that attacks or degrades their faith.
How Search Engines Work

Web search engine


A web search engine is a software system that is designed to search for information on the World Wide Web. The search results are generally presented in a line of results often referred to as search engine results pages (SERPs). The information may be a mix of web pages, images, and other types of files. Some search engines also mine data available in databases or open directories. Unlike web directories, which are maintained only by human editors, search engines also maintain real-time information by running an algorithm on a web crawler.

History



The first tool used for searching on the Internet was Archie. The name stands for "archive" without the "v". It was created in 1990 by Alan Emtage, Bill Heelan and J. Peter Deutsch, computer science students at McGill University in Montreal. The program downloaded the directory listings of all the files located on public anonymous FTP (File Transfer Protocol) sites, creating a searchable database of file names; however, Archie did not index the contents of these sites since the amount of data was so limited it could be readily searched manually. During early development of the web, there was a list of web servers edited by Tim Berners-Lee and hosted on the CERN web server. One historical snapshot of the list in 1992 remains, but as more and more web servers went online the central list could no longer keep up. On the NCSA site, new servers were announced under the title "What's New!"
The rise of Gopher (created in 1991 by Mark McCahill at the University of Minnesota) led to two new search programs, Veronica and Jughead. Like Archie, they searched the file names and titles stored in Gopher index systems. Veronica (Very Easy Rodent-Oriented Net-wide Index to Computerized Archives) provided a keyword search of most Gopher menu titles in the entire Gopher listings. Jughead (Jonzy's Universal Gopher Hierarchy Excavation And Display) was a tool for obtaining menu information from specific Gopher servers. While the name of the search engine "Archie" was not a reference to the Archie comic book series, "Veronica" and "Jughead" are characters in the series, thus referencing their predecessor.
In the summer of 1993, no search engine existed for the web, though numerous specialized catalogues were maintained by hand. Oscar Nierstrasz at the University of Geneva wrote a series of Perl scripts that periodically mirrored these pages and rewrote them into a standard format. This formed the basis for W3 Catalog, the web's first primitive search engine, released on September 2, 1993.
In June 1993, Matthew Gray, then at MIT, produced what was probably the first web robot, the Perl-based World Wide Web Wanderer, and used it to generate an index called 'Wandex'. The purpose of the Wanderer was to measure the size of the World Wide Web, which it did until late 1995. The web's second search engine Aliweb appeared in November 1993. Aliweb did not use a web robot, but instead depended on being notified by website administrators of the existence at each site of an index file in a particular format.
Jump Station (created in December 1993 by Jonathon Fletcher) used a web robot to find web pages and to build its index, and used a web form as the interface to its query program. It was thus the first WWW resource-discovery tool to combine the three essential features of a web search engine (crawling, indexing, and searching) as described below. Because of the limited resources available on the platform it ran on, its indexing and hence searching were limited to the titles and headings found in the web pages the crawler encountered.
One of the first "all text" crawler-based search engines was WebCrawler, which came out in 1994. Unlike its predecessors, it allowed users to search for any word in any webpage, which has become the standard for all major search engines since. It was also the first one widely known by the public. Also in 1994, Lycos (which started at Carnegie Mellon University) was launched and became a major commercial endeavor.
Soon after, many search engines appeared and vied for popularity. These included Magellan, Excite, Infoseek, Inktomi, Northern Light, and AltaVista. Yahoo! was among the most popular ways for people to find web pages of interest, but its search function operated on its web directory, rather than its full-text copies of web pages. Information seekers could also browse the directory instead of doing a keyword-based search.
In 1996, Netscape was looking to give a single search engine an exclusive deal as the featured search engine on Netscape's web browser. There was so much interest that instead Netscape struck deals with five of the major search engines: for $5 million a year, each search engine would be in rotation on the Netscape search engine page. The five engines were Yahoo!, Magellan, Lycos, Infoseek, and Excite.[6][7]
Google adopted the idea of selling search terms in 1998 from a small search engine company named goto.com. This move had a significant effect on the search engine business, which went from struggling to one of the most profitable businesses on the internet.
Search engines were also known as some of the brightest stars in the Internet investing frenzy that occurred in the late 1990s. Several companies entered the market spectacularly, receiving record gains during their initial public offerings. Some have taken down their public search engine and are marketing enterprise-only editions, such as Northern Light. Many search engine companies were caught up in the dot-com bubble, a speculation-driven market boom that peaked in 1999 and ended in 2001.
Around 2000, Google's search engine rose to prominence. The company achieved better results for many searches with an innovation called PageRank, as explained in the paper Anatomy of a Search Engine written by Sergey Brin and Larry Page, who later founded Google. This iterative algorithm ranks web pages based on the number and PageRank of other web sites and pages that link to them, on the premise that good or desirable pages are linked to more than others. Google also maintained a minimalist interface to its search engine. In contrast, many of its competitors embedded a search engine in a web portal. In fact, the Google search engine became so popular that spoof engines emerged such as Mystery Seeker.
By 2000, Yahoo! was providing search services based on Inktomi's search engine. Yahoo! acquired Inktomi in 2002, and Overture (which owned AlltheWeb and AltaVista) in 2003. Yahoo! used Google's search engine until 2004, when it launched its own search engine based on the combined technologies of its acquisitions.
Microsoft first launched MSN Search in the fall of 1998 using search results from Inktomi. In early 1999 the site began to display listings from Looksmart, blended with results from Inktomi. For a short time in 1999, MSN Search used results from AltaVista instead. In 2004, Microsoft began a transition to its own search technology, powered by its own web crawler (called msnbot).
Microsoft's rebranded search engine, Bing, was launched on June 1, 2009. On July 29, 2009, Yahoo! and Microsoft finalized a deal in which Yahoo! Search would be powered by Microsoft Bing technology.

How web search engines work


A search engine operates in the following order:
  1. Web crawling
  2. Indexing
  3. Searching
Web search engines work by storing information about many web pages, which they retrieve from the HTML markup of the pages. These pages are retrieved by a Web crawler (sometimes also known as a spider), an automated program that follows every link on a site. The site owner can exclude specific pages by using robots.txt.
The search engine then analyzes the contents of each page to determine how it should be indexed (for example, words can be extracted from the titles, page content, headings, or special fields called meta tags). Data about web pages are stored in an index database for use in later queries. A query from a user can be as simple as a single word. The index helps find information relating to the query as quickly as possible. Some search engines, such as Google, store all or part of the source page (referred to as a cache) as well as information about the web pages, whereas others, such as AltaVista, store every word of every page they find. The cached page always holds the text that was actually indexed, so it can be very useful when the content of the current page has been updated and the search terms are no longer in it. This problem might be considered a mild form of linkrot, and Google's handling of it increases usability by satisfying the user's expectation that the search terms will be on the returned webpage, in keeping with the principle of least astonishment. Cached pages are also useful because they may contain data that is no longer available elsewhere.
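A heavily simplified sketch of the crawl-then-index pipeline described above: fetch a page, harvest its words into an inverted index, follow its links, repeat. The seed URL is a placeholder, and a real crawler would also honor robots.txt and crawl politely:

```python
import re
from collections import defaultdict
from urllib.parse import urljoin
from urllib.request import urlopen

def crawl_and_index(seed, max_pages=10):
    """Tiny breadth-first crawler building a word -> set-of-URLs inverted index."""
    index = defaultdict(set)
    queue, seen, fetched = [seed], {seed}, 0
    while queue and fetched < max_pages:
        url = queue.pop(0)
        try:
            html = urlopen(url, timeout=5).read().decode("utf-8", "replace")
        except OSError:
            continue  # unreachable page: skip it
        fetched += 1
        text = re.sub(r"<[^>]+>", " ", html)              # crude tag stripping
        for word in re.findall(r"[a-z0-9]+", text.lower()):
            index[word].add(url)                          # indexing step
        for link in re.findall(r'href="([^"]+)"', html):  # crawling step
            absolute = urljoin(url, link)
            if absolute.startswith("http") and absolute not in seen:
                seen.add(absolute)
                queue.append(absolute)
    return index

# index = crawl_and_index("http://example.com/")  # placeholder seed URL
# print(sorted(index["example"]))                 # pages containing "example"
```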

High-level architecture of a standard Web crawler
When a user enters a query into a search engine (typically by using keywords), the engine examines its index and provides a listing of best-matching web pages according to its criteria, usually with a short summary containing the document's title and sometimes parts of the text. The index is built from the information stored with the data and the method by which the information is indexed. Since 2007 the Google.com search engine has allowed one to search by date by clicking "Show search tools" in the leftmost column of the initial search results page, and then selecting the desired date range. Most search engines support the use of the boolean operators AND, OR and NOT to further specify the search query. Boolean operators are for literal searches that allow the user to refine and extend the terms of the search. The engine looks for the words or phrases exactly as entered. Some search engines provide an advanced feature called proximity search, which allows users to define the distance between keywords. There is also concept-based searching, where the research involves using statistical analysis on pages containing the words or phrases you search for. Natural language queries allow the user to type a question in the same form one would ask it to a human; ask.com is one example of such a site.
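Given an inverted index like the one sketched earlier, the boolean operators described above reduce to set operations on each term's posting list. A minimal illustration over an invented three-term index:

```python
# Each term maps to the set of documents containing it (its "posting list").
index = {
    "fiber":  {"doc1", "doc2", "doc4"},
    "optic":  {"doc2", "doc4"},
    "copper": {"doc3", "doc4"},
}
all_docs = {"doc1", "doc2", "doc3", "doc4"}

def posting(term):
    return index.get(term, set())

# AND is intersection, OR is union, NOT is complement against all documents.
print(posting("fiber") & posting("optic"))                # fiber AND optic
print(posting("fiber") | posting("copper"))               # fiber OR copper
print(posting("fiber") & (all_docs - posting("copper")))  # fiber AND NOT copper
```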

The usefulness of a search engine depends on the relevance of the result set it gives back. While there may be millions of web pages that include a particular word or phrase, some pages may be more relevant, popular, or authoritative than others. Most search engines employ methods to rank the results to provide the "best" results first. How a search engine decides which pages are the best matches, and what order the results should be shown in, varies widely from one engine to another. The methods also change over time as Internet usage changes and new techniques evolve. Two main types of search engine have evolved: one is a system of predefined and hierarchically ordered keywords that humans have programmed extensively; the other is a system that generates an "inverted index" by analyzing the texts it locates. The second form relies much more heavily on the computer itself to do the bulk of the work.
Most Web search engines are commercial ventures supported by advertising revenue and thus some of them allow advertisers to have their listings ranked higher in search results for a fee. Search engines that do not accept money for their search results make money by running search related ads alongside the regular search engine results. The search engines make money every time someone clicks on one of these ads.
Creating a VPN Server on Windows.

Manually Create VPN Connections on Windows 7, 8 & 8.1

I recently published a post about VPN services across the world. I have looked at and tested many VPN services, and came up with the two best VPN services on the entire internet.
Here are those two services:



1. Best VPN USA ( www.bestvpnusa.com )
2. VPNip ( www.vpnip.net)

Let's talk about Best VPN USA first. I covered it in a recent post; if you want a detailed review and want to know how to get started with it, you can click here.
Best VPN USA is great overall, and its user interface is simple and understandable, but users have to update their VPN password daily, sometimes even twice a day.
When we review VPNip, it turns out to be somewhat better than Best VPN USA. Both services are similar in most respects, but VPNip has one good feature: you don't have to change your password daily. That is pretty convenient for any internet geek who wants to browse without exposing their IP address and location. In conclusion, I prefer VPNip over Best VPN USA. Now let's walk through the setup of VPNip.

FOR WINDOWS 7

Step 1: Open "Network and Sharing Center" from Control Panel, or search for it in the search bar. Then choose "Set up a new connection or network".

Step 2: In the next window click "Connect to a workplace".

Step 3: In the next window choose "Use my Internet Connection (VPN)".

Step 4: Now enter "na.vpnip.net" (exactly as written) as the address of the VPN server.

Step 5: Then use "VPNip" as the name of the connection in the destination name box, and click Next.

Step 6: In the next window enter your username and password: in the username box type "vpnip.net" and in the password box type "2013".

Step 7: In the "Connect to a network" screen, you should see the "VPNip" connection that you have just set up.

Step 8: The setup is now complete. To connect, right-click on the connection icon (viewable under Change adapter settings) and select connect/disconnect.

Enjoy free access to blocked websites!

FOR WINDOWS 8

Step 1: Search for "vpn" and you will see 3 results.

Step 2: Select "Set up a virtual private network (VPN) connection".

Step 3: In "internet box" type "na.vpnip.net" and click "create"

Step 4: You are now done. You can find your newly made VPN connection among the Wi-Fi connections, or you can search for "view network connections" and select "View network connections"; there you will find your new connection. Just right-click on it and connect.

Step 5: After you connect, you will be asked to provide a username and password: in the username box type "vpnip.net" and in the password box type "2013".

All done. Now enjoy free access to blocked websites!
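If you would rather script the connection than click through the dialogs, Windows also ships a command-line dialer, rasdial, which can drive the same "VPNip" entry created in the steps above. A minimal Python sketch that shells out to it (it assumes the connection entry already exists):

```python
import subprocess

CONNECTION = "VPNip"    # the connection name created in the steps above
USERNAME = "vpnip.net"  # credentials from the VPNip website
PASSWORD = "2013"

def connect():
    # "rasdial <entry> <user> <password>" dials an existing VPN entry.
    subprocess.run(["rasdial", CONNECTION, USERNAME, PASSWORD], check=True)

def disconnect():
    # "rasdial <entry> /disconnect" hangs up the same entry.
    subprocess.run(["rasdial", CONNECTION, "/disconnect"], check=True)

if __name__ == "__main__":
    connect()
```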

Note: in case of an error regarding a wrong password or username, please visit the VPNip website ( www.vpnip.net ) or click here. There you will find the new username and password in case they have changed.
Proxy Server.

In computer networks, a proxy server is a server (a computer system or an application) that acts as an intermediary for requests from clients seeking resources from other servers. A client connects to the proxy server, requesting some service, such as a file, connection, web page, or other resource available from a different server, and the proxy server evaluates the request as a way to simplify and control its complexity. Proxies were invented to add structure and encapsulation to distributed systems. Today, most proxies are web proxies, facilitating access to content on the Internet and providing anonymity.

Types of proxy.


A proxy server may reside on the user's local computer, or at various points between the user's computer and destination servers on the Internet.
  • A proxy server that passes requests and responses unmodified is usually called a gateway or sometimes a tunneling proxy.
  • A forward proxy is an Internet-facing proxy used to retrieve from a wide range of sources (in most cases anywhere on the Internet).
  • A reverse proxy is usually an Internet-facing proxy used as a front-end to control and protect access to a server on a private network. A reverse proxy commonly also performs tasks such as load-balancing, authentication, decryption or caching.
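From the client's side, using a forward proxy is just a matter of routing requests through it. A minimal sketch with Python's standard library; the proxy address is a placeholder for one you actually control:

```python
from urllib.request import ProxyHandler, build_opener

# Placeholder address: substitute a real forward proxy you are allowed to use.
proxy = ProxyHandler({"http": "http://proxy.example.com:8080"})
opener = build_opener(proxy)

# Every request made through this opener is relayed by the proxy, which
# sees the full URL and can therefore cache, filter, or log the traffic.
response = opener.open("http://example.com/", timeout=10)
print(response.status, len(response.read()), "bytes fetched via the proxy")
```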

Open Proxies.

An open proxy is a forwarding proxy server that is accessible by any Internet user. Gordon Lyon estimates there are "hundreds of thousands" of open proxies online. An anonymous open proxy allows users to conceal their IP address while browsing the web or using other Internet services. There are varying degrees of anonymity, however, as well as a number of methods of "tricking" the client into revealing itself regardless of the proxy being used.

Reverse Proxies.


A reverse proxy (or surrogate) is a proxy server that appears to clients to be an ordinary server. Requests are forwarded to one or more proxy servers which handle the request. The response from the proxy server is returned as if it came directly from the origin server, leaving the client with no knowledge of the origin servers. Reverse proxies are installed in the neighborhood of one or more web servers. All traffic coming from the Internet with a destination of one of the neighborhood's web servers goes through the proxy server. The use of "reverse" originates in its counterpart "forward proxy", since the reverse proxy sits closer to the web server and serves only a restricted set of websites. There are several reasons for installing reverse proxy servers:

Encryption/SSL acceleration: when secure websites are created, the SSL encryption is often not done by the web server itself, but by a reverse proxy that is equipped with SSL acceleration hardware. See Secure Sockets Layer. Furthermore, a host can provide a single "SSL proxy" to provide SSL encryption for an arbitrary number of hosts, removing the need for a separate SSL server certificate for each host, with the downside that all hosts behind the SSL proxy have to share a common DNS name or IP address for SSL connections. This problem can partly be overcome by using the SubjectAltName feature of X.509 certificates.
Load balancing: the reverse proxy can distribute the load to several web servers, each serving its own application area; a minimal round-robin sketch follows this list. In such a case, the reverse proxy may need to rewrite the URLs in each web page (translation from externally known URLs to the internal locations).
Serve/cache static content: a reverse proxy can offload the web servers by caching static content like pictures and other static graphical content.
Compression: the proxy server can optimize and compress the content to speed up the load time.
Spoon feeding: reduces resource usage caused by slow clients on the web servers by caching the content the web server sent and slowly "spoon feeding" it to the client. This especially benefits dynamically generated pages.
Security: the proxy server is an additional layer of defense and can protect against some OS and web-server-specific attacks. However, it does not provide any protection from attacks against the web application or service itself, which is generally considered the larger threat.
Extranet publishing: a reverse-proxy server facing the Internet can be used to communicate to a firewall server internal to an organization, providing extranet access to some functions while keeping the servers behind the firewalls. If used in this way, security measures should be considered to protect the rest of the infrastructure in case this server is compromised, as its web application is exposed to attack from the Internet.
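Of the tasks above, load balancing is the easiest to show in miniature: the reverse proxy picks a backend for each incoming request, here by plain round-robin. The backend addresses are invented placeholders, and a real reverse proxy would also health-check its pool:

```python
import itertools

# Invented backend pool for the example.
backends = ["10.0.0.11:8000", "10.0.0.12:8000", "10.0.0.13:8000"]
rotation = itertools.cycle(backends)

def choose_backend(request_path: str) -> str:
    """Round-robin selection: each request goes to the next server in turn,
    regardless of its path (smarter proxies route by path or by load)."""
    return next(rotation)

for path in ["/", "/img/logo.png", "/api/items", "/"]:
    print(f"{path} -> {choose_backend(path)}")
```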

Uses of Proxy Servers.


Monitoring And Filtering.


A content-filtering web proxy server provides administrative control over the content that may be relayed in one or both directions through the proxy. It is commonly used in both commercial and non-commercial organizations (especially schools) to ensure that Internet usage conforms to an acceptable use policy.
A content-filtering proxy will often support user authentication to control web access. It also usually produces logs, either to give detailed information about the URLs accessed by specific users, or to monitor bandwidth usage statistics. It may also communicate with daemon-based and/or ICAP-based antivirus software to provide security against viruses and other malware by scanning incoming content in real time before it enters the network.
Many workplaces, schools, and colleges restrict the web sites and online services that are accessible in their buildings. Governments also censor undesirable content. This is done either with a specialized proxy, called a content filter (both commercial and free products are available), or by using a cache-extension protocol such as ICAP, which allows plug-in extensions to an open caching architecture.
Requests may be filtered by several methods, such as a URL or DNS blacklist, URL regex filtering, MIME filtering, or content keyword filtering. Some products have been known to employ content-analysis techniques to look for traits commonly used by certain types of content providers. Blacklists are often provided and maintained by web-filtering companies, typically grouped into categories (pornography, gambling, shopping, social networks, etc.).
Assuming the requested URL is acceptable, the content is then fetched by the proxy. At this point a dynamic filter may be applied on the return path. For example, JPEG files could be blocked based on fleshtone matches, or language filters could dynamically detect unwanted language. If the content is rejected then an HTTP fetch error may be returned to the requester.
Most web-filtering companies use an internet-wide crawling robot that assesses the likelihood that content is of a certain type. The resulting database is then corrected by manual labor based on complaints or known flaws in the content-matching algorithms.
Some proxies scan outbound content, e.g., for data loss prevention; or scan content for malicious software.
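A toy version of the request filtering just described: check the URL against a host blacklist and a regex rule, then check the fetched body against keyword rules. All of the lists here are invented examples:

```python
import re

DOMAIN_BLACKLIST = {"blocked.example"}  # invented blacklist entry
URL_PATTERNS = [re.compile(r"\.exe$")]  # invented URL regex rule
BANNED_KEYWORDS = {"casino"}            # invented content keyword rule

def allow_request(host: str, path: str) -> bool:
    """URL-stage filtering: blacklist lookup plus regex matching."""
    if host in DOMAIN_BLACKLIST:
        return False
    return not any(p.search(path) for p in URL_PATTERNS)

def allow_response(body: str) -> bool:
    """Return-path filtering: reject content containing banned keywords."""
    words = set(re.findall(r"[a-z]+", body.lower()))
    return not (words & BANNED_KEYWORDS)

print(allow_request("blocked.example", "/index.html"))  # False: host blacklisted
print(allow_request("ok.example", "/setup.exe"))        # False: regex match
print(allow_response("welcome to our casino"))          # False: keyword hit
```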

Filtering Of Encrypted Data.

Web filtering proxies are not able to peer inside secure sockets HTTP transactions, assuming the chain-of-trust of SSL/TLS has not been tampered with.
The SSL/TLS chain-of-trust relies on trusted root certificate authorities. In a workplace setting where the client is managed by the organization, trust might be granted to a root certificate whose private key is known to the proxy. Consequently, a root certificate generated by the proxy is installed into the browser CA list by IT staff.
In such situations, proxy analysis of the contents of an SSL/TLS transaction becomes possible. The proxy is effectively operating a man-in-the-middle attack, allowed by the client's trust of a root certificate the proxy owns.

Bypassing Filters And Censorship.

If the destination server filters content based on the origin of the request, the use of a proxy can circumvent this filter. For example, a server using IP-based geolocation to restrict its service to a certain country can be accessed using a proxy located in that country to access the service.
Web proxies are the most common means of bypassing government censorship, although no more than 3% of Internet users use any circumvention tools.
In some cases users can circumvent proxies that filter using blacklists by using services designed to proxy information from a non-blacklisted location.

Logging And Eavesdropping.

Proxies can be installed in order to eavesdrop upon the data-flow between client machines and the web. All content sent or accessed – including passwords submitted and cookies used – can be captured and analyzed by the proxy operator. For this reason, passwords to online services (such as webmail and banking) should always be exchanged over a cryptographically secured connection, such as SSL. By chaining proxies which do not reveal data about the original requester, it is possible to obfuscate activities from the eyes of the user's destination. However, more traces will be left on the intermediate hops, which could be used or offered up to trace the user's activities. If the policies and administrators of these other proxies are unknown, the user may fall victim to a false sense of security just because those details are out of sight and mind. In what is more of an inconvenience than a risk, proxy users may find themselves being blocked from certain Web sites, as numerous forums and Web sites block IP addresses from proxies known to have spammed or trolled the site. Proxy bouncing can be used to maintain your privacy.

Improving Performance.

A caching proxy server accelerates service requests by retrieving content saved from a previous request made by the same client or even other clients. Caching proxies keep local copies of frequently requested resources, allowing large organizations to significantly reduce their upstream bandwidth usage and costs, while significantly increasing performance. Most ISPs and large businesses have a caching proxy. Caching proxies were the first kind of proxy server. Web proxies are commonly used to cache web pages from a web server. Poorly implemented caching proxies can cause problems, such as an inability to use user authentication. A proxy that is designed to mitigate specific link-related issues or degradations is a Performance Enhancing Proxy (PEP). These are typically used to improve TCP performance in the presence of high round-trip times or high packet loss (such as wireless or mobile phone networks), or on highly asymmetric links featuring very different upload and download rates. PEPs can make more efficient use of the network, for example by merging TCP ACKs or compressing data sent at the application layer. Another important use of the proxy server is to reduce hardware cost. An organization may have many systems on the same network or under control of a single server, prohibiting the possibility of an individual connection to the Internet for each system. In such a case, the individual systems can be connected to one proxy server, and the proxy server connected to the main server.
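The core of a caching proxy fits in a few lines: serve repeat requests from a local store instead of refetching upstream. This sketch caches whole responses in memory and ignores the expiry and validation headers a real proxy must honor:

```python
from urllib.request import urlopen

_cache = {}  # url -> body; a real proxy caches to disk and respects expiry

def fetch(url: str) -> bytes:
    """Return cached content when available; otherwise fetch and store it."""
    if url in _cache:
        return _cache[url]  # cache hit: no upstream bandwidth used
    body = urlopen(url, timeout=10).read()
    _cache[url] = body      # cache miss: store for the next requester
    return body

# first = fetch("http://example.com/")   # goes upstream
# again = fetch("http://example.com/")   # served from the local cache
```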

Translation.

A translation proxy is a proxy server that is used to localize a website experience for different markets. Traffic from global audiences is routed through the translation proxy to the source website. As visitors browse the proxied site, requests go back to the source site where pages are rendered. Original language content in the response is replaced by translated content as it passes back through the proxy. The translations used in a translation proxy can be either machine translation, human translation, or a combination of machine and human translation. Different translation proxy implementations have different capabilities. Some allow further customization of the source site for local audiences such as excluding source content or substituting source content with original local content.

Accessing Devices Anonymously.

An anonymous proxy server (sometimes called a web proxy) generally attempts to anonymize web surfing. There are different varieties of anonymizers. The destination server (the server that ultimately satisfies the web request) receives requests from the anonymizing proxy server, and thus does not receive information about the end user's address. The requests are not anonymous to the anonymizing proxy server, however, and so a degree of trust is present between the proxy server and the user. Many proxy servers are funded through a continued advertising link to the user.
Access control: Some proxy servers implement a logon requirement. In large organizations, authorized users must log on to gain access to the web. The organization can thereby track usage to individuals. Some anonymizing proxy servers may forward data packets with header lines such as HTTP_VIA, HTTP_X_FORWARDED_FOR, or HTTP_FORWARDED, which may reveal the IP address of the client. Other anonymizing proxy servers, known as elite or high-anonymity proxies, only include the REMOTE_ADDR header with the IP address of the proxy server, making it appear that the proxy server is the client. A website could still suspect a proxy is being used if the client sends packets which include a cookie from a previous visit that did not use the high-anonymity proxy server. Clearing cookies, and possibly the cache, would solve this problem.
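The header behavior just described can be checked mechanically: a transparent proxy leaks the client address, a merely anonymous one still announces itself, and an elite proxy adds nothing. A small sketch that classifies a forwarded request's headers; the sample header dictionaries are invented:

```python
REVEALING = {"via", "x-forwarded-for", "forwarded"}

def classify(headers: dict) -> str:
    """Rough anonymity level judged from the headers a proxy forwards."""
    names = {h.lower() for h in headers}
    if not (names & REVEALING):
        return "elite/high-anonymity: no proxy headers present"
    if "x-forwarded-for" in names:
        return "transparent: client IP exposed in X-Forwarded-For"
    return "anonymous: proxy disclosed, client IP hidden"

print(classify({"Via": "1.1 proxy1", "X-Forwarded-For": "203.0.113.7"}))
print(classify({"Via": "1.1 proxy1"}))
print(classify({"User-Agent": "curl/8.0"}))
```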