Briefly describe the technologies that led businesses into the second wave of electronic commerce. Answer: The increase in broadband connections in homes is a key element in the B2C component of the second wave. The increased use of home Internet connections to transfer large audio and video files is generally seen as the impetus for the large numbers of people spending the extra money required to obtain a broadband connection during the second wave. The increased speed of broadband not only makes Internet use more efficient, but it also can alter the way people use the Web. For example, a broadband connection allows a user to watch movies and television programs online, something that is impossible to do with a dial-up connection. This opens up more opportunities for businesses to make online sales. It also changes the way that online retailers can present their products to Web site visitors. Another group of technologies emerged in the second wave that made new online businesses possible. The general term for these technologies is Web 2.0, and they include software that allows users of Web sites to participate in the creation, editing, and distribution of content on a Web site owned and operated by a third party. Sites such as Wikipedia, YouTube, and Facebook use Web 2.0 technologies.
Name and briefly describe the two business activities that were the earliest implementations of electronic commerce. Answer: Since the mid-1960s, banks have been using electronic funds transfers (EFTs, also called wire transfers), which are electronic transmissions of account exchange information over private communications networks. Initially used to transfer money between business checking accounts, the use of EFTs gradually expanded to include payroll deposits to employees' accounts, automatic payment of auto and mortgage loans, and deposit of government payments to individuals, such as U.S. Social Security System remittances. The other early implementation was electronic data interchange (EDI), which occurs when one business transmits computer-readable data in a standard format to another business; companies and government agencies began using EDI to exchange documents such as purchase orders and invoices.
Many business analysts have discussed the concept of the first-mover advantage. What are some of the disadvantages of being a first mover? Answer: First movers must invest large amounts of money in new technologies and make guesses about what customers will want when those technologies are functioning. The combination of high uncertainty and the need for large investments makes being a first mover very risky. As many business strategists have noted, "It is the second mouse that gets the cheese."
What are transaction costs, and why are they important considerations in electronic commerce? Answer: Transaction costs are the total of all costs that a buyer and seller incur as they gather information and negotiate a purchase-sale transaction. Businesses and individuals can use electronic commerce to reduce transaction costs by improving the flow of information and increasing the coordination of actions. By reducing the cost of searching for potential buyers and sellers and increasing the number of potential market participants, electronic commerce can change the attractiveness of vertical integration for many firms.
How might a university use SWOT analysis to identify new degree programs that it could offer online? Answer: In SWOT analysis, the analyst first looks into the business unit to identify its strengths and weaknesses. The analyst then reviews the environment in which the business unit operates and identifies opportunities presented by that environment and the threats posed by that environment. A university could apply this approach by identifying its strengths (such as faculty expertise and well-regarded existing programs) and weaknesses (such as limited experience with online course delivery), then matching them against opportunities (such as growing demand for online degrees among working adults) and threats (such as competing institutions' online programs) to select the degree programs it is best positioned to offer online.
In its early years, the Internet was a military project that became a science project with funding from the National Science Foundation. In one or two paragraphs, describe its transition to an environment that allowed and supported commercial activity. Answer: In the early 1960s, the U.S. Department of Defense began examining ways to connect computers to each other and to weapons installations distributed all over the world. Employing many of the best communications technology researchers, the Defense Department funded research at leading universities and institutes. The goal of this research was to design a worldwide network that could remain operational, even if parts of the network were destroyed by enemy military action or sabotage. In 1969, Defense Department researchers in the Advanced Research Projects Agency (ARPA) used this direct connection network model to connect four computers, one each at the University of California at Los Angeles, SRI International, the University of California at Santa Barbara, and the University of Utah, into a network called the ARPANET. The ARPANET was the earliest of the networks that eventually combined to become what we now call the Internet. E-mail was born in 1972 when Ray Tomlinson, a researcher who used the network, wrote a program that could send and receive messages over the network. This new method of communicating became widely used very quickly. As personal computers became more powerful, affordable, and available during the 1980s, companies increasingly used them to construct their own internal networks. Although these networks included e-mail software that employees could use to send messages to each other, businesses wanted their employees to be able to communicate with people outside their corporate networks.
In 1989, the NSF permitted two commercial e-mail services (MCI Mail and
CompuServe) to establish limited connections to the Internet for the sole purpose of exchanging e-mail transmissions with users of the Internet. These connections allowed commercial enterprises to send e-mail directly to Internet addresses, and allowed members of the research and education communities on the Internet to send e-mail directly to MCI Mail and CompuServe addresses.
In one or two paragraphs, explain what is meant by the term Internet of Things. Answer: The most common perception of the Internet is that it connects computers to one another and, by doing so, connects the users of those computers to each other. In recent years, devices other than computers have been connected to the Internet, such as mobile phones and tablet devices. Once again, the connection of these devices to the Internet serves to connect the users of those devices to each other. However, the connection of devices to the Internet that are not used by persons is increasing rapidly. These devices, such as switches, optical scanners, and sensors that detect changes in temperature, light, moisture, or the existence of vibration or movement, can be connected to the Internet and used by computers to automatically manage environmental conditions (such as heating and cooling or lighting levels) or security procedures. These interconnected devices can be located in houses, offices, factories, autos, appliances, and so on. Computers can also be connected to each other using the Internet to conduct business transactions without human intervention. The subset of the Internet that includes these computers and sensors connected to each other for communication and automatic transaction processing is often called the Internet of Things.
Briefly describe the functions performed by routers in an interconnected network. Answer: As an individual packet travels from one network to another, the computers through which the packet travels determine the most efficient route for getting the packet to its destination. The most efficient route changes from second to second, depending on how much traffic each computer on the Internet is handling at each moment. The computers that decide how best to forward each packet are called routing computers, router computers, routers, gateway computers (because they act as the gateway from a LAN or WAN to the Internet), border routers, or edge routers.
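To make the forwarding decision concrete, here is a minimal, hypothetical Python sketch of how a router might choose the next hop for one packet. The forwarding table entries and router names are invented for illustration; real routers rebuild tables like this continuously as traffic conditions change.

    import ipaddress

    # Hypothetical forwarding table: destination network -> next hop.
    forwarding_table = {
        ipaddress.ip_network("10.1.0.0/16"): "router-a",
        ipaddress.ip_network("10.1.2.0/24"): "router-b",
        ipaddress.ip_network("0.0.0.0/0"): "isp-gateway",  # default route to the wider Internet
    }

    def next_hop(destination: str) -> str:
        """Pick the most specific (longest-prefix) route that contains the destination."""
        addr = ipaddress.ip_address(destination)
        candidates = [net for net in forwarding_table if addr in net]
        best = max(candidates, key=lambda net: net.prefixlen)  # longest prefix wins
        return forwarding_table[best]

    print(next_hop("10.1.2.7"))  # router-b (most specific matching network)
    print(next_hop("8.8.8.8"))   # isp-gateway (falls through to the default route)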
In about 100 words, explain the differences between a closed (or proprietary) architecture and an open architecture. In your answer, be sure to explain which is used for the Internet and why it is used. Answer: The first packet-switched network, the ARPANET, connected only a few universities and research centers. Following its inception in 1969, this experimental network grew during the next few years and began using the Network Control Protocol (NCP). In the early days of computing, each computer manufacturer created its own protocol, so computers made by different manufacturers could not be connected to each other. This practice was called proprietary architecture or closed architecture. NCP was designed so it could be used by any computer manufacturer and was made available to any company that wanted it. This open architecture philosophy, developed for the evolving ARPANET, included the use of a common protocol for all computers connected to the Internet and four key rules for message handling.
In about 100 words, describe the function of the Internet Corporation for Assigned Names and Numbers. Include a discussion of the differences between gTLDs and sTLDs in your answer. Answer: Since 1998, the Internet Corporation for Assigned Names and Numbers (ICANN) has had the responsibility of managing domain names and coordinating them with the IP address registrars. ICANN is also responsible for setting standards for the router computers that make up the Internet.
Since taking over these responsibilities, ICANN has added a number of new TLDs. Some of these are generic top-level domains (gTLDs), which are available to specified categories of users. Note that ICANN is itself responsible for the maintenance of gTLDs. Other new domains are sponsored top-level domains (sTLDs), which are TLDs for which an organization other than ICANN is responsible.
The Web uses a client/server architecture. In about 100 words, describe the client and server elements of this architecture, including specific examples of software and hardware that are used to form the Web. Answer: The Web is software that runs on computers that are connected to each other through the Internet. Web client computers run software called Web client software or Web browser software. Examples of popular Web browser software include Google Chrome, Microsoft Internet Explorer, and Mozilla Firefox. Web browser software sends requests for Web page files to other computers, which are called Web servers. A Web server computer runs software called Web server software, which receives requests from many different Web clients and responds by sending files back to those Web client computers. Each Web client computer's Web client software then renders those files into a Web page. Thus, the purpose of a Web server is to respond to requests for Web pages from Web clients. This combination of client computers running Web client software and server computers running Web server software is an example of a client/server architecture.
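To illustrate the client side of this exchange, here is a minimal Python sketch (using only the standard library and the placeholder host www.example.com) in which a Web client requests a page file from a Web server and receives the file a browser would render:

    from http.client import HTTPSConnection

    # Act as a very simple Web client: request one page from a Web server.
    conn = HTTPSConnection("www.example.com")   # the Web server computer (placeholder host)
    conn.request("GET", "/")                    # ask the server for a page file
    response = conn.getresponse()               # the server's reply

    print(response.status, response.reason)     # e.g., 200 OK
    page = response.read().decode("utf-8")      # the HTML file a browser would render
    print(page[:200])                           # show the beginning of the page
    conn.close()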
Provide examples of at least two situations in which an organization would use XML and two situations in which an organization would use HTML. Answer: Hypertext Markup Language (HTML) was derived from the more generic meta language SGML. HTML defines the structure and content of Web pages using markup symbols called tags. Over time, HTML has evolved to include a large number of tags that accommodate graphics, Cascading Style Sheets, and other Web page elements. Although Extensible Markup Language (XML) is also derived from SGML, it differs from HTML in two important respects. First, XML is not a markup language with defined tags. It is a framework within which individuals, companies, and other organizations can create their own sets of tags. Second, XML tags describe the meaning of the data they contain rather than how that data should be displayed. Thus, an organization would use HTML when it wants to present information to people in a browser, such as a marketing page or an online product catalog, and would use XML when it wants to exchange structured data with other organizations' computer systems, such as transmitting purchase orders to suppliers or publishing product data that partners' programs can process automatically.
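As an illustration of XML's user-defined tags, here is a short Python sketch that builds a small XML document; the tag names (PurchaseOrder, Item, SKU, and so on) are invented for the example, since XML supplies only the framework within which such tags are defined:

    import xml.etree.ElementTree as ET

    # Build a tiny XML document using tags we define ourselves.
    order = ET.Element("PurchaseOrder", attrib={"number": "1001"})
    item = ET.SubElement(order, "Item")
    ET.SubElement(item, "SKU").text = "A-204"
    ET.SubElement(item, "Quantity").text = "3"
    ET.SubElement(item, "UnitPrice").text = "19.95"

    # The tags convey the meaning of the data; how it is displayed is left to the receiving program.
    print(ET.tostring(order, encoding="unicode"))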
Write a paragraph in which you describe the changes that occurred in virtual communities when the bandwidths available to Internet users increased. Answer: In the mid-1990s, virtual communities formed in Web chat rooms and sites devoted to specific topics or the general exchange of information. As the bandwidths available to Internet users increased, photos and video became commonplace additions to the discussions in these communities.
Some companies use a social networking strategy in which they avoid making direct advertising or brand statements. In about 100 words, outline the advantages of that strategy. Answer: Starbucks does not use social media to broadcast information about its products or build its brand. Instead, Starbucks uses social media to learn from its customers and find new ways to engage them with the company's brand, products, and services. By intentionally avoiding direct advertising and brand statements in its own social media outreach, Starbucks can focus on listening to its customers' discussions with each other and learning from those discussions. By using social media to participate in the environment of an industry or product, companies can interact with their customers (or suppliers) in ways that are different from and more expansive than the roles traditionally taken in buyer-seller relationships.
In two or three paragraphs, describe the differences between a blog and a microblog. Be sure to discuss when a company might prefer to use a microblog in its social networking efforts rather than a blog. Answer: Web logs, or blogs, are Web sites that contain commentary on current events or specific issues written by individuals. Many blogs invite visitors to add comments, which the blog owner may or may not edit. The result is a continuing discussion of the topic with the possibility that many interested persons will contribute to that discussion. Because blog sites encourage interaction among people interested in a particular topic, they are a form of a social networking site.
Sites such as Twitter are considered to be microblogs because they function as a very informal blog site with entries (messages, or tweets) that are limited to 140 characters in length. A company might prefer a microblog over a blog when it wants to send frequent, short updates, such as announcements, promotions, or quick responses to customer questions, that followers can read and share immediately, rather than longer commentary intended to start extended discussions.
In two or three paragraphs, outline at least three different ways in which a social networking site might monetize its visitors. Answer: Small social networking sites that have a specialized appeal can draw enough visitors to generate significant amounts of advertising revenue, especially compared to the costs of running such a site. For example, software developer Eric Nakagawa posted a picture of a grinning fat cat on his Web site in 2007 with the caption "I can has cheezburger?" as a joke. He followed that with several more cat photos and funny captions over the next few weeks and added a blog so that people could post comments about the pictures. Within a few months, the site was getting more than 100,000 visitors a day. Nakagawa found that a site with that kind of traffic could charge between $100 and $600 per day for a single ad. Although most social networking sites use advertising to support their operations, some do charge a fee for certain services. For example, the Yahoo! Web portal offers most of its services free (supported by advertising), but it does sell some of its social networking features, such as its All-Star Games package. Yahoo! also sells other features, such as more space to store messages and attached files, as part of its premium e-mail service. These fees help support the operation of the social networking elements of the site.
In about 100 words, outline at least three ways in which a mobile phone's GPS capabilities can be used to provide benefits to users of a social network. Answer: Virtually all mobile devices have global positioning satellite (GPS) service capabilities, which means that apps can combine the phone user's location with the availability of retail stores and services to create mobile business opportunities. For example, some apps can direct the user to specific business locations (such as restaurants, movie theaters, or auto repair facilities) based on the user's current location. GPS data can also let users check in at a location and share it with friends in the social network, and it can be used to deliver offers or coupons from businesses near the user's current position.
Write a paragraph in which you describe the main task(s) performed by a Web server. Answer: A Web server computer runs software called Web server software. Web server software receives requests from many different Web clients and responds by sending files back to those Web client computers. Each Web client computer's Web client software renders those files into a Web page. Thus, the purpose of a Web server is to respond to requests for Web pages from Web clients.
In a paragraph or two, describe the two basic approaches that can be used to create dynamic Web pages. Answer: Web site designers can incorporate dynamic content using two basic approaches. In the first approach, called client-side scripting, software operates on the Web client (the browser) to change what is displayed on the Web page in response to a user's actions (such as mouse clicks or keyboard text input). In client-side scripting, changes are generated within the browser using software such as JavaScript or Adobe Flash. The Web client retrieves a file from the Web server that includes code (JavaScript, for example). The code instructs the Web client to request specific page elements from the Web server and dictates how they will be displayed in the Web browser. In the second approach, called server-side scripting, a program running on a Web server creates a Web page in response to a request for specific information from a Web client. The content of the request can be determined by several things, including text that a user has entered into a Web form in the browser, extra text added to the end of a URL, the type of Web browser making the request, or simply the passage of time. For example, if you are logged into an online banking site and do not enter any text or click anywhere on the page for a few minutes, the Web server might end your connection and send a page to your browser indicating that your session has expired.
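To make the server-side approach concrete, here is a minimal Python sketch using the standard library's http.server module; the program running on the server builds the page for each request, and the greeting logic and port number are invented for the example:

    from http.server import BaseHTTPRequestHandler, HTTPServer
    from urllib.parse import urlparse, parse_qs

    class DynamicPageHandler(BaseHTTPRequestHandler):
        def do_GET(self):
            # Server-side scripting: the page is generated here, on the server,
            # using information contained in the client's request (a query-string value).
            query = parse_qs(urlparse(self.path).query)
            name = query.get("name", ["visitor"])[0]
            page = f"<html><body><h1>Hello, {name}!</h1></body></html>"
            self.send_response(200)
            self.send_header("Content-Type", "text/html")
            self.end_headers()
            self.wfile.write(page.encode("utf-8"))

    # Requesting http://localhost:8000/?name=Maria returns a page built just for that request.
    HTTPServer(("localhost", 8000), DynamicPageHandler).serve_forever()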
Write a paragraph in which you explain the purpose of a request message in a two-tier client-server architecture. Answer: The basic Web client/server model is a two-tier model because it has only one client and one server. All communication takes place on the Internet between the client and the server. Of course, other computers are involved in forwarding packets of information across the Internet, but the messages are created and read only by the client and the server computers in a two-tier client/server architecture. The message that a Web client sends to request a file or files from a Web server is called a request message (a sketch of such a message appears below).
In about 100 words, describe how an n-tier architecture might be used by an online business. Include in your answer an outline of the functions that would likely be performed by computers configured in this way. Answer: Architectures that have more than three tiers are often called n-tier architectures. N-tier systems can track customer purchases stored in shopping carts, look up sales tax rates, keep track of customer preferences, update in-stock inventory databases, and keep the product catalog current.
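Returning to the request message discussed above, here is a minimal Python sketch that sends one by hand; the placeholder host www.example.com stands in for any Web server, and the message itself is just a request line, a few headers, and a blank line marking the end of the message:

    import socket

    host = "www.example.com"   # placeholder Web server

    # The request message a Web client sends in a two-tier exchange.
    request_message = (
        "GET / HTTP/1.1\r\n"
        f"Host: {host}\r\n"
        "Connection: close\r\n"
        "\r\n"
    )

    with socket.create_connection((host, 80)) as sock:
        sock.sendall(request_message.encode("ascii"))
        response = b""
        while chunk := sock.recv(4096):
            response += chunk

    # The server's response message: a status line, headers, and the requested file.
    print(response.decode("utf-8", errors="replace")[:300])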
In a paragraph or two, explain why national laws designed to limit spam are largely ineffective. Answer: Legal solutions to the spam problem have achieved only limited success in reducing spam because it is expensive for governments to prosecute spammers. To become cost effective, prosecutors must be able to identify spammers easily (to reduce the cost of bringing an action against them) and must have a greater likelihood of winning the cases they file (or must see a greater social benefit to winning).
The best way to make spammers easier to find has been to make technical changes in the e-mail transport mechanism in the Internet's infrastructure.
In a paragraph, define throughput and response time. Explain why each is an important consideration in specifying a Web server hardware configuration.
Answer: Elements affecting overall server performance include hardware, operating system software, server software, connection bandwidth, user capacity, and type of Web pages being delivered. The number of users the server can handle is also important. This can be difficult to measure because both the bandwidth of the Internet connection and the sizes of the Web pages delivered can affect that number. Two factors to evaluate when measuring a server's Web page delivery capability are throughput and response time. Throughput is the number of HTTP requests that a particular hardware and software combination can process in a unit of time. Response time is the amount of time a server requires to process one request. Both measures are important considerations when specifying a Web server hardware configuration because they determine how many visitors the site can serve in a given period and how quickly each visitor's pages are delivered, which directly affects the visitor's experience of the site.
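As a rough sketch of how these two measures could be estimated, the following Python fragment times a batch of requests against a test server; the URL is a placeholder and the request count is arbitrary:

    import time
    import urllib.request

    test_url = "http://localhost:8000/"   # placeholder for the server being evaluated
    num_requests = 50

    start = time.perf_counter()
    response_times = []
    for _ in range(num_requests):
        t0 = time.perf_counter()
        urllib.request.urlopen(test_url).read()   # one complete HTTP request/response cycle
        response_times.append(time.perf_counter() - t0)
    elapsed = time.perf_counter() - start

    throughput = num_requests / elapsed                      # requests processed per unit of time
    avg_response_time = sum(response_times) / num_requests   # time to process one request

    print(f"Throughput: {throughput:.1f} requests/second")
    print(f"Average response time: {avg_response_time * 1000:.1f} ms")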
Identify the benefits and costs of using a decentralized instead of a centralized server architecture in an online business operation. Summarize your findings in about 100 words. Answer: Each approach (centralized and decentralized) has benefits and drawbacks. The decentralized architecture spreads risk over a large number of servers. If one server becomes inoperable, the site can continue to operate without much degradation in capability. The smaller servers used in the decentralized architecture are less expensive than the large servers used in the centralized approach. That is, the total cost of 100 small servers is usually less than the cost of one large server with the same capacity as the 100 small servers. The main costs of the decentralized approach are the additional hubs, switches, and load-balancing equipment needed to connect the many servers to each other and to the Internet, along with the greater administrative effort required to coordinate and maintain a large number of machines.