At the high end of the bandwidth spectrum, a group of network research scientists from nearly 200 universities and a number of major corporations joined together in 1996 to recapture the original enthusiasm of the ARPANET with an advanced research network called Internet2. When the National Science Foundation turned over the Internet backbone to commercial interests in 1995, many scientists felt that they had lost a large, living laboratory. Internet2 is the replacement for that laboratory. An experimental test bed for new networking technologies that is separate from the original Internet, Internet2 has achieved bandwidths of 10 Gbps and more on parts of its network.

Internet2 is also used by universities to conduct large collaborative research projects that require several supercomputers connected at very fast speeds, or that use multiple video feeds, tasks that would be impossible on the Internet given its lower bandwidth limits. For example, doctors at medical schools that are members of Internet2 regularly use its technology to do live videoconference consultations during complex surgeries. Internet2 serves as a proving ground for new technologies and applications of those technologies that will eventually find their way to the Internet. In 2008, CERN (the birthplace of the original Web in Switzerland) began using Internet2 to share data generated by its new particle accelerator with a research network of 70 U.S. universities. Every few weeks, each university downloads about two terabytes (a terabyte is one thousand gigabytes) of data within a four-hour time period.

The Internet2 project is focused mainly on technology development. In contrast, Tim Berners-Lee began a project in 2001 that has a goal of blending technologies and information into a next-generation Web. This Semantic Web project envisions words on Web pages being tagged (using XML) with their meanings. The Web would become a huge machine-readable database. People could use intelligent programs called software agents to read the XML tags to determine the meaning of the words in their contexts. For example, a software agent given the instruction to find an airline ticket with certain terms (date, cities, cost limit) would launch a search on the Web and return with an electronic ticket that meets the criteria. Instead of a user having to visit several Web sites to gather information, compare prices and itineraries, and make a decision, the software agent would automatically do the searching, comparing, and purchasing.

The key elements that must be added to Web standards so that software agents can perform these functions include XML, a resource description framework, and an ontology. You have already seen how XML tags can describe the semantics of data elements. A resource description framework (RDF) is a set of standards for XML syntax. It would function as a dictionary for all XML tags used on the Web. An ontology is a set of standards that defines, in detail, the relationships among RDF standards and specific XML tags within a particular knowledge domain. For example, the ontology for cooking would include concepts such as ingredients, utensils, and ovens; however, it would also include rules and behavioral expectations, such as that ingredients can be mixed using utensils, that the resulting product can be eaten by people, and that ovens generate heat within a confined area.
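To make these ideas concrete, the following sketch uses the Python rdflib library (one of several toolkits that implement the RDF standards; its use here is an illustrative assumption, not part of the Semantic Web specifications) to express the airline-ticket scenario described above. The travel vocabulary, namespace URL, and ticket addresses are hypothetical, invented for this example. The graph stores machine-readable statements as subject-predicate-object triples, and a query plays the role of the software agent's search.

# A minimal sketch of the Semantic Web idea, assuming the rdflib library.
# The travel vocabulary (example.org namespace, fromCity, toCity, price)
# is hypothetical, invented here for illustration.
from rdflib import Graph, Literal, Namespace, URIRef

TRAVEL = Namespace("http://example.org/travel#")  # hypothetical ontology namespace

g = Graph()
g.bind("travel", TRAVEL)

# Publish two tickets as machine-readable triples (subject, predicate, object).
ticket1 = URIRef("http://example.org/tickets/1001")
g.add((ticket1, TRAVEL.fromCity, Literal("Chicago")))
g.add((ticket1, TRAVEL.toCity, Literal("London")))
g.add((ticket1, TRAVEL.price, Literal(450)))

ticket2 = URIRef("http://example.org/tickets/1002")
g.add((ticket2, TRAVEL.fromCity, Literal("Chicago")))
g.add((ticket2, TRAVEL.toCity, Literal("London")))
g.add((ticket2, TRAVEL.price, Literal(620)))

# A software agent searches the graph for tickets under the buyer's cost limit.
results = g.query("""
    PREFIX travel: <http://example.org/travel#>
    SELECT ?ticket ?price WHERE {
        ?ticket travel:fromCity "Chicago" ;
                travel:toCity   "London" ;
                travel:price    ?price .
        FILTER(?price <= 500)
    }
""")
for row in results:
    print(row.ticket, row.price)  # prints http://example.org/tickets/1001 450

In practice, such a graph could be serialized to the XML-based RDF format (for example, with g.serialize(format="xml")) so that any agent on the Web could read the same statements, which is the machine-readable tagging the Semantic Web project envisions.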
Ontologies and the RDF would provide the intelligence about the knowledge domain so that software agents could make decisions as humans would.

The development of the Semantic Web is expected to take many years. The first step in this project is to develop ontologies for specific subjects. Thus far, several areas of
