The Services Offered by a Web Design Agency: From Website Creation to SEO
The Process of Web Designing: Steps Taken by the Agency to Create Your Online Presence
What is Design?
Historical facts about Design
[Image captions] Braun ABW30 wall clock designed by Dieter Rams and Dietrich Lubs (early 1980s). A Victorinox Swiss Army knife. Cutlery designed by architect and designer Zaha Hadid (2007); the slightly oblique ends of the fork and spoons, as well as the knife handle, are examples of designing for both aesthetic form and practical function. Early concept sketches by the architect Erling Viksjø, exploring the relationships between existing and proposed new buildings. Béla Barényi, considered to be the father of automotive safety testing, preparing for safety development, a core part of the designing process.
A design is the concept of or proposal for an object, process, or system. The word design refers to something that is or has been intentionally created by a thinking agent, and is sometimes used to refer to the inherent nature of something – its design. The verb to design expresses the process of developing a design. In some cases, the direct construction of an object without an explicit prior plan may also be considered to be a design (such as in arts and crafts). A design is expected to have a purpose within a certain context, usually having to satisfy certain goals and constraints and to take into account aesthetic, functional, economic, environmental, or socio-political considerations. Traditional examples of designs include architectural and engineering drawings, circuit diagrams, sewing patterns, and less tangible artefacts such as business process models.[1][2]
Designing
People who produce designs are called designers. The term 'designer' usually refers to someone who works professionally in one of the design disciplines.
How to Choose the Right Web Design Agency for Your Business: Factors to Consider
Understanding Your Business Needs and Goals
Choosing the right web design agency for your company isn't as easy as one might think! It's a process that requires careful consideration of several factors. First off, understanding your business needs is crucial. What do you hope to achieve with your website? Are you looking to simply establish an online presence, or do you need advanced features like e-commerce or interactive elements? You need an agency that can align their services with these goals.
Evaluating Agency Expertise and Experience
Secondly, don't forget about the agency's expertise and experience. Make sure they've got a robust portfolio demonstrating their skills (don't just take their word for it!). Have they worked on projects similar to yours before? Does their design aesthetic match what you're envisioning for your brand? Also, remember that good communication is key in any partnership, so look for an agency that communicates clearly and responds promptly. In effect, this means choosing the right web design agency involves both a practical assessment of capabilities and a gut check on compatibility.
Case Studies and Success Stories from Top Performing Web Design Agencies
Realizing Client Vision
Top performing web design agencies often have a knack for understanding and delivering their clients' vision. They don't just create websites; they build online experiences that shape the way users interact with brands. A case study involving one such agency reveals how they revamped a client's outdated website, transforming it into an engaging, user-friendly platform. They not only took on board the client's requirements but also used their expertise to make strategic improvements.
Negotiating Design Challenges
Designing is never a walk in the park! It involves navigating through various challenges, be it technical glitches or creative hurdles. A success story from another high-ranking web design agency outlines how they tackled a particularly challenging project head-on. Despite numerous setbacks (including unresponsive elements and compatibility issues), the team persevered and ultimately delivered an exceptional final product.
Delivering Value
In essence, what sets top performing web design agencies apart is their commitment to delivering value to their clients. One case study highlights an agency that significantly boosted its client's online presence by designing a robust e-commerce site from scratch. The new site resulted in increased traffic and conversion rates for the client - clear proof of the agency's ability to deliver results!
Addressing Changing Trends
Changing trends are inevitable in this industry, aren't they? Top-performing agencies know this all too well and strive to stay ahead of these changes. A recent success story illustrates how one such agency successfully adapted its approach to accommodate emerging trends like mobile-first design and voice-activated search optimization.
Innovative Problem Solving
In short, problem-solving forms the crux of successful web design projects. Whether it's tweaking code or finding innovative layout solutions, top-performing agencies have shown time and again that they're not afraid to think outside the box. In one case study, an agency implemented a unique navigation system to enhance user experience on a client's site – demonstrating their ability to innovate within design constraints.
Future Trends in Web Designing: How Agencies are Adapting for Tomorrow's Digital Landscape
Choosing the best web design agency in Sydney can be a daunting task! The marketplace is crowded with agencies offering similar services, making it challenging to discern who will do the best job for your specific needs.
The Importance of Local SEO Sydney!
For any business, big or small, online presence is a crucial factor in today's digital era. Out of the many strategies that can help enhance this presence, one that stands out due to its effectiveness and efficiency is Search Engine Optimization (SEO), particularly local SEO.
The web design industry, a rapidly evolving field, is always buzzing with new trends and techniques. Sydney, known for its iconic Opera House and Harbour Bridge, has also become a hotbed of digital innovation, with its web design trends setting the pace for the rest of the world.
The Emergence of a Digital Marketing Agency in Sydney
With the rapid advancement of technology, the world is transforming into a digital global village. This rapid shift has also brought significant changes to business operations, and marketing strategies are no exception.
Posted on 2025-01-30
About Google Search
Search engine from Google
Google Search (also known simply as Google or Google.com) is a search engine operated by Google. It allows users to search for information on the Web by entering keywords or phrases. Google Search uses algorithms to analyze and rank websites based on their relevance to the search query. It is the most popular search engine worldwide.
The order of search results returned by Google is based, in part, on a priority rank system called "PageRank". Google Search also provides many different options for customized searches, using symbols to include, exclude, specify or require certain search behavior, and offers specialized interactive experiences, such as flight status and package tracking, weather forecasts, currency, unit, and time conversions, word definitions, and more.
Analysis of the frequency of search terms may indicate economic, social and health trends.[10] Data about the frequency of use of search terms on Google can be openly queried via Google Trends and has been shown to correlate with flu outbreaks and unemployment levels, providing the information faster than traditional reporting methods and surveys. As of mid-2016, Google's search engine had begun to rely on deep neural networks.[11]
In August 2024, a US judge in Virginia ruled that Google's search engine held an illegal monopoly over Internet search.[12][13] The court found that Google maintained its market dominance by paying large amounts to phone-makers and browser-developers to make Google its default search engine.[14]
Despite Google Search's immense index, sources generally assume that Google indexes less than 5% of the total Internet, with the rest belonging to the deep web, inaccessible through its search tools.[15][19][20]
In 2012, Google changed its search indexing tools to demote sites that had been accused of piracy.[21] In October 2016, Gary Illyes, a webmaster trends analyst with Google, announced that the search engine would be making a separate, primary web index dedicated for mobile devices, with a secondary, less up-to-date index for desktop use. The change was a response to the continued growth in mobile usage, and a push for web developers to adopt a mobile-friendly version of their websites.[22][23] In December 2017, Google began rolling out the change, having already done so for multiple websites.[24]
In August 2009, Google invited web developers to test a new search architecture, codenamed "Caffeine", and give their feedback. The new architecture provided no visual differences in the user interface, but added significant speed improvements and a new "under-the-hood" indexing infrastructure. The move was interpreted in some quarters as a response to Microsoft's recent release of an upgraded version of its own search service, renamed Bing, as well as the launch of Wolfram Alpha, a new search engine based on "computational knowledge".[25][26] Google announced completion of "Caffeine" on June 8, 2010, claiming 50% fresher results due to continuous updating of its index.[27]
With "Caffeine", Google moved its back-end indexing system away from MapReduce and onto Bigtable, the company's distributed database platform.[28][29]
In August 2018, Danny Sullivan from Google announced a broad core algorithm update. According to analyses by the industry publications Search Engine Watch and Search Engine Land, the update demoted medical and health-related websites that were not user-friendly and did not provide a good user experience, which is why industry experts nicknamed it "Medic".[30]
Google holds YMYL (Your Money or Your Life) pages to very high standards, because misinformation on them can affect users financially, physically, or emotionally. The update therefore particularly targeted YMYL pages with low-quality content and misinformation, which resulted in the algorithm affecting health and medical-related websites more than others. However, many websites from other industries were also negatively affected.[31]
By 2012, it handled more than 3.5 billion searches per day.[32] In 2013 the European Commission found that Google Search favored Google's own products, instead of the best result for consumers' needs.[33] In February 2015 Google announced a major change to its mobile search algorithm which would favor mobile-friendly websites over others. Nearly 60% of Google searches come from mobile phones. Google says it wants users to have access to premium-quality websites. Websites that lack a mobile-friendly interface would be ranked lower, and the update was expected to cause a shake-up of rankings. Businesses that fail to update their websites accordingly could see a dip in their regular website traffic.[34]
Google's rise was largely due to a patented algorithm called PageRank which helps rank web pages that match a given search string.[35] When Google was a Stanford research project, it was nicknamed BackRub because the technology checks backlinks to determine a site's importance. Other keyword-based methods to rank search results, used by many search engines that were once more popular than Google, would check how often the search terms occurred in a page, or how strongly associated the search terms were within each resulting page. The PageRank algorithm instead analyzes human-generated links assuming that web pages linked from many important pages are also important. The algorithm computes a recursive score for pages, based on the weighted sum of other pages linking to them. PageRank is thought to correlate well with human concepts of importance. In addition to PageRank, Google, over the years, has added many other secret criteria for determining the ranking of resulting pages. This is reported to comprise over 250 different indicators,[36][37] the specifics of which are kept secret to avoid difficulties created by scammers and help Google maintain an edge over its competitors globally.
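The recursive scoring idea can be sketched as a short power iteration. This is a minimal illustration of the published PageRank formulation, not Google's production system; the four-page link graph is invented for the example:

```python
# Minimal PageRank power-iteration sketch. links[p] lists the pages p links to.
links = {
    "A": ["B", "C"],
    "B": ["C"],
    "C": ["A"],
    "D": ["C"],
}
damping = 0.85  # probability of following a link rather than jumping randomly
pages = list(links)
rank = {p: 1.0 / len(pages) for p in pages}  # start with uniform scores

for _ in range(50):  # iterate until the scores stabilize
    new = {}
    for p in pages:
        # Each page q shares its rank equally among its outgoing links;
        # p collects the shares from every page that links to it.
        incoming = sum(rank[q] / len(links[q]) for q in pages if p in links[q])
        new[p] = (1 - damping) / len(pages) + damping * incoming
    rank = new

print(sorted(rank, key=rank.get, reverse=True))  # → ['C', 'A', 'B', 'D']
```

Page C ends up highest because three pages link to it, matching the intuition that pages linked from many places are important.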
PageRank was influenced by a similar page-ranking and site-scoring algorithm earlier used for RankDex, developed by Robin Li in 1996. Larry Page's patent for PageRank filed in 1998 includes a citation to Li's earlier patent. Li later went on to create the Chinese search engine Baidu in 2000.[38][39]
In a potential hint of Google's future direction of their Search algorithm, Google's then chief executive Eric Schmidt, said in a 2007 interview with the Financial Times: "The goal is to enable Google users to be able to ask the question such as 'What shall I do tomorrow?' and 'What job shall I take?'".[40] Schmidt reaffirmed this during a 2010 interview with The Wall Street Journal: "I actually think most people don't want Google to answer their questions, they want Google to tell them what they should be doing next."[41]
Because Google is the most popular search engine, many webmasters attempt to influence their website's Google rankings. An industry of consultants has arisen to help websites increase their rankings on Google and other search engines. This field, called search engine optimization, attempts to discern patterns in search engine listings and then develop a methodology for improving rankings to draw more searchers to clients' sites. Search engine optimization encompasses both on-page factors (like body copy, title elements, H1 heading elements and image alt attribute values) and off-page factors (like anchor text and PageRank). The general idea is to affect Google's relevance algorithm by incorporating the targeted keywords in various places on the page, in particular the title element and the body copy (the higher up in the page, presumably, the better the keyword prominence and thus the ranking). Too many occurrences of the keyword, however, make the page look suspect to Google's spam-checking algorithms. Google has published guidelines for website owners who would like to raise their rankings when using legitimate optimization consultants.[42] It has been hypothesized, and is allegedly the opinion of the owner of one business about which there have been numerous complaints, that negative publicity, such as numerous consumer complaints, may elevate a page's rank on Google Search as effectively as favorable comments.[43] The particular problem addressed in The New York Times article, which involved DecorMyEyes, was addressed shortly thereafter by an undisclosed fix in the Google algorithm. According to Google, it was not the frequently published consumer complaints about DecorMyEyes that resulted in the high ranking, but mentions on news websites of events that affected the firm, such as legal actions against it. Google Search Console helps to check for websites that use duplicate or copyrighted content.[44]
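As a toy illustration of the on-page factors mentioned above (title element, H1 heading, body copy), a short script can check whether a target keyword appears in each. This is a hypothetical helper built on the standard-library HTML parser, not a real SEO tool and certainly not Google's algorithm:

```python
# Illustrative on-page check: does a target keyword appear in the
# title, the H1 heading, and the body copy of a page?
from html.parser import HTMLParser

class OnPageChecker(HTMLParser):
    def __init__(self):
        super().__init__()
        self.path = []       # stack of currently open tags
        self.title = ""
        self.h1 = ""
        self.body_text = []

    def handle_starttag(self, tag, attrs):
        self.path.append(tag)

    def handle_endtag(self, tag):
        if tag in self.path:
            self.path.remove(tag)  # simplistic; fine for well-formed input

    def handle_data(self, data):
        if "title" in self.path:
            self.title += data
        if "h1" in self.path:
            self.h1 += data
        if "body" in self.path:
            self.body_text.append(data)

# A made-up page for the demo.
page = """<html><head><title>Marathon Training Guide</title></head>
<body><h1>Marathon Training</h1><p>A marathon takes months of preparation.</p>
</body></html>"""

checker = OnPageChecker()
checker.feed(page)
keyword = "marathon"
report = {
    "in_title": keyword in checker.title.lower(),
    "in_h1": keyword in checker.h1.lower(),
    "in_body": keyword in " ".join(checker.body_text).lower(),
}
print(report)  # → {'in_title': True, 'in_h1': True, 'in_body': True}
```

A real audit would also weigh keyword position, alt attributes, and off-page signals, which this sketch deliberately leaves out.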
In 2013, Google significantly upgraded its search algorithm with "Hummingbird". Its name was derived from the speed and accuracy of the hummingbird.[45] The change was announced on September 26, 2013, having already been in use for a month.[46] "Hummingbird" places greater emphasis on natural language queries, considering context and meaning over individual keywords.[45] It also looks deeper at content on individual pages of a website, with improved ability to lead users directly to the most appropriate page rather than just a website's homepage.[47] The upgrade marked the most significant change to Google search in years, with more "human" search interactions[48] and a much heavier focus on conversation and meaning.[45] Thus, web developers and writers were encouraged to optimize their sites with natural writing rather than forced keywords, and make effective use of technical web development for on-site navigation.[49]
In 2023, drawing on internal Google documents disclosed as part of the United States v. Google LLC (2020) antitrust case, technology reporters claimed that Google Search was "bloated and overmonetized"[50] and that the "semantic matching" of search queries put advertising profits before quality.[51] Wired withdrew Megan Gray's piece after Google complained about alleged inaccuracies, while the author reiterated that "As stated in court, 'A goal of Project Mercury was to increase commercial queries'".[52]
In March 2024, Google announced a significant update to its core search algorithm and spam targeting, which was expected to wipe out 40 percent of all spam results.[53] On March 20, Google confirmed that the rollout of the spam update was complete.[54]
On September 10, 2024, the EU Court of Justice found that Google held an illegal monopoly in the way the company showed favoritism to its shopping search, and could not avoid paying €2.4 billion.[55] The court referred to Google's treatment of rival shopping searches as "discriminatory" and in violation of the Digital Markets Act.[55]
At the top of the search page, the approximate result count and the response time (to two decimal places) are noted. Each search result shows a page title and URL, a date, and a preview text snippet. Along with web search results, sections with images, news, and videos may appear.[56] The length of the previewed text snippet was experimented with in 2015 and 2017.[57][58]
"Universal search" was launched by Google on May 16, 2007, as an idea that merged the results from different kinds of search types into one. Prior to Universal search, a standard Google search would consist of links only to websites. Universal search, however, incorporates a wide variety of sources, including websites, news, pictures, maps, blogs, videos, and more, all shown on the same search results page.[59][60] Marissa Mayer, then-vice president of search products and user experience, described the goal of Universal search: "we're attempting to break down the walls that traditionally separated our various search properties and integrate the vast amounts of information available into one simple set of search results."[61]
In June 2017, Google expanded its search results to cover available job listings. The data is aggregated from various major job boards and collected by analyzing company homepages. Initially only available in English, the feature aims to simplify finding jobs suitable for each user.[62][63]
In May 2009, Google announced that they would be parsing website microformats to populate search result pages with "Rich snippets". Such snippets include additional details about results, such as displaying reviews for restaurants and social media accounts for individuals.[64]
In May 2016, Google expanded on the "Rich snippets" format to offer "Rich cards", which, similarly to snippets, display more information about results, but shows them at the top of the mobile website in a swipeable carousel-like format.[65] Originally limited to movie and recipe websites in the United States only, the feature expanded to all countries globally in 2017.[66]
The Knowledge Graph is a knowledge base used by Google to enhance its search engine's results with information gathered from a variety of sources.[67] This information is presented to users in a box to the right of search results.[68] Knowledge Graph boxes were added to Google's search engine in May 2012,[67] starting in the United States, with international expansion by the end of the year.[69] The information covered by the Knowledge Graph grew significantly after launch, tripling its original size within seven months,[70] and being able to answer "roughly one-third" of the 100 billion monthly searches Google processed in May 2016.[71] The information is often used as a spoken answer in Google Assistant[72] and Google Home searches.[73] The Knowledge Graph has been criticized for providing answers without source attribution.[71]
A Google Knowledge Panel[74] is a feature integrated into Google search engine result pages, designed to present a structured overview of entities such as individuals, organizations, locations, or objects directly within the search interface. This feature leverages data from Google's Knowledge Graph,[75] a database that organizes and interconnects information about entities, enhancing the retrieval and presentation of relevant content to users.
The content within a Knowledge Panel[76] is derived from various sources, including Wikipedia and other structured databases, ensuring that the information displayed is both accurate and contextually relevant. For instance, querying a well-known public figure may trigger a Knowledge Panel displaying essential details such as biographical information, birthdate, and links to social media profiles or official websites.
The primary objective of the Google Knowledge Panel is to provide users with immediate, factual answers, reducing the need for extensive navigation across multiple web pages.
In May 2017, Google enabled a new "Personal" tab in Google Search, letting users search for content in their Google accounts' various services, including email messages from Gmail and photos from Google Photos.[77][78]
Google Discover, previously known as Google Feed, is a personalized stream of articles, videos, and other news-related content. The feed contains a "mix of cards" which show topics of interest based on users' interactions with Google, or topics they choose to follow directly.[79] Cards include, "links to news stories, YouTube videos, sports scores, recipes, and other content based on what [Google] determined you're most likely to be interested in at that particular moment."[79] Users can also tell Google they're not interested in certain topics to avoid seeing future updates.
Google Discover launched in December 2016[80] and received a major update in July 2017.[81] Another major update was released in September 2018, which renamed the app from Google Feed to Google Discover, updated the design, and added more features.[82]
Discover can be found on a tab in the Google app and by swiping left on the home screen of certain Android devices. As of 2019, Google does not allow political campaigns worldwide to target their advertisements at users in order to influence their vote.[83]
At the 2023 Google I/O event in May, Google unveiled Search Generative Experience (SGE), an experimental feature in Google Search available through Google Labs which produces AI-generated summaries in response to search prompts.[84] This was part of Google's wider efforts to counter the unprecedented rise of generative AI technology, ushered by OpenAI's launch of ChatGPT, which sent Google executives to a panic due to its potential threat to Google Search.[85] Google added the ability to generate images in October.[86] At I/O in 2024, the feature was upgraded and renamed AI Overviews.[87]
Early AI Overview response to the problem of "cheese not sticking to pizza"
AI Overviews was rolled out to users in the United States in May 2024.[87] The feature faced public criticism in the first weeks of its rollout after errors from the tool went viral online. These included results suggesting users add glue to pizza or eat rocks,[88] or incorrectly claiming Barack Obama is Muslim.[89] Google described these viral errors as "isolated examples", maintaining that most AI Overviews provide accurate information.[88][90] Two weeks after the rollout of AI Overviews, Google made technical changes and scaled back the feature, pausing its use for some health-related queries and limiting its reliance on social media posts.[91] Scientific American has criticised the system on environmental grounds, as such a search uses 30 times more energy than a conventional one.[92] It has also been criticized for condensing information from various sources, making it less likely for people to view full articles and websites. When it was announced in May 2024, Danielle Coffey, CEO of the News/Media Alliance, was quoted as saying "This will be catastrophic to our traffic, as marketed by Google to further satisfy user queries, leaving even less incentive to click through so that we can monetize our content."[93]
In August 2024, AI Overviews were rolled out in the UK, India, Japan, Indonesia, Mexico and Brazil, with local language support.[94] On October 28, 2024, AI Overviews was rolled out to 100 more countries, including Australia and New Zealand.[95]
In late June 2011, Google introduced a new look to the Google homepage in order to boost the use of the Google+ social tools.[96]
One of the major changes was replacing the classic navigation bar with a black one. Google's digital creative director Chris Wiggins explains: "We're working on a project to bring you a new and improved Google experience, and over the next few months, you'll continue to see more updates to our look and feel."[97] The new navigation bar has been negatively received by a vocal minority.[98]
In November 2013, Google started testing yellow labels for advertisements displayed in search results, to improve user experience. The new labels, highlighted in yellow and aligned to the left of each sponsored link, help users differentiate between organic and sponsored results.[99]
On December 15, 2016, Google rolled out a new desktop search interface that mimics their modular mobile user interface. The mobile design consists of a tabular layout that highlights search features in boxes and works by imitating the desktop Knowledge Graph real estate, which appears in the right-hand rail of the search engine result page. These featured elements frequently include Twitter carousels, People Also Search For, and Top Stories (vertical and horizontal design) modules. The Local Pack and Answer Box were two of the original features of the Google SERP that were primarily showcased in this manner, but this new layout creates a previously unseen level of design consistency for Google results.[100]
Google offers a "Google Search" mobile app for Android and iOS devices.[101] The mobile apps exclusively feature Google Discover and a "Collections" feature, in which the user can save for later perusal any type of search result like images, bookmarks or map locations into groups.[102] Android devices were introduced to a preview of the feed, perceived as related to Google Now, in December 2016,[103] while it was made official on both Android and iOS in July 2017.[104][105]
In April 2016, Google updated its Search app on Android to feature "Trends"; search queries gaining popularity appeared in the autocomplete box along with normal query autocompletion.[106] The update received significant backlash, due to encouraging search queries unrelated to users' interests or intentions, prompting the company to issue an update with an opt-out option.[107] In September 2017, the Google Search app on iOS was updated to feature the same functionality.[108]
In December 2017, Google released "Google Go", an app designed to enable use of Google Search on physically smaller and lower-spec devices in multiple languages. A Google blog post about designing "India-first" products and features explains that it is "tailor-made for the millions of people in [India and Indonesia] coming online for the first time".[109]
A definition link is provided for many search terms.
Google Search consists of a series of localized websites. The largest of those, the google.com site, is the most-visited website in the world.[110] Some of its features include a definition link for most searches including dictionary words, the number of results found for a search, links to other searches (e.g. for words that Google believes to be misspelled, it provides a link to the search results using its proposed spelling), the ability to filter results to a date range,[111] and many more.
Google search accepts queries as normal text, as well as individual keywords.[112] It automatically corrects apparent misspellings by default (while offering to use the original spelling as a selectable alternative), and provides the same results regardless of capitalization.[112] For more customized results, one can use a wide variety of operators, including, but not limited to:[113][114]
OR or | – Search for webpages containing either of two terms, such as marathon OR race
AND – Search for webpages containing both terms, such as marathon AND runner
- (minus sign) – Exclude a word or a phrase, so that "apple -tree" returns results where the word "tree" is not used
"" – Force an exact-match word or phrase, such as "tallest building"
* – Placeholder symbol allowing for any substitute words in the context of the query, such as "largest * in the world"
.. – Search within a range of numbers, such as "camera $50..$100"
site: – Search within a specific website, such as "site:youtube.com"
define: – Search for definitions for a word or phrase, such as "define:phrase"
stocks: – See the stock price of investments, such as "stocks:googl"
related: – Find web pages related to specific URL addresses, such as "related:www.wikipedia.org"
cache: – Highlights the search-words within the cached pages, so that "cache:www.google.com xxx" shows cached content with word "xxx" highlighted.
( ) – Group operators and searches, such as (marathon OR race) AND shoes
filetype: or ext: – Search for specific file types, such as filetype:gif
before: – Search for before a specific date, such as spacex before:2020-08-11
after: – Search for after a specific date, such as iphone after:2007-06-29
@ – Search for a specific word on social media networks, such as "@twitter"
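For illustration, a small helper can compose query strings from a few of the operators above. `build_query` is a hypothetical function, not part of any Google API; the resulting string would simply be typed into the search box:

```python
# Compose a Google-style query string from a handful of the documented
# operators. Purely illustrative; the operators themselves are interpreted
# by the search engine, not by this code.
def build_query(terms, site=None, exclude=None, filetype=None,
                after=None, before=None, exact=None):
    parts = list(terms)
    if exact:
        parts.append(f'"{exact}"')            # quotes force an exact phrase
    if exclude:
        parts.append(f"-{exclude}")           # minus sign excludes a term
    if site:
        parts.append(f"site:{site}")          # restrict to one website
    if filetype:
        parts.append(f"filetype:{filetype}")  # restrict to a file type
    if after:
        parts.append(f"after:{after}")        # only results after a date
    if before:
        parts.append(f"before:{before}")      # only results before a date
    return " ".join(parts)

q = build_query(["marathon", "training"], site="youtube.com",
                exclude="treadmill", exact="negative splits")
print(q)
# marathon training "negative splits" -treadmill site:youtube.com
```

The same approach extends to the other operators in the list, since each is just a token appended to the query text.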
Google also offers a Google Advanced Search page with a web interface to access the advanced features without needing to remember the special operators.[115]
Google applies query expansion to submitted search queries, using techniques to deliver results that it considers "smarter" than the query users actually submitted. This technique involves several steps, including:[116]
Word stemming – Certain words can be reduced so that other, similar terms are also found in results; a search for "translator" can also find "translation"
Acronyms – Searching for abbreviations can also return results about the name in its full length, so that "NATO" can show results for "North Atlantic Treaty Organization"
Misspellings – Google will often suggest correct spellings for misspelled words
Synonyms – In most cases where a word is incorrectly used in a phrase or sentence, Google search will show results based on the correct synonym
Translations – The search engine can, in some instances, suggest results for specific words in a different language
Ignoring words – In some search queries containing extraneous or insignificant words, Google search will simply drop those specific words from the query
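The word-stemming step above can be illustrated with a deliberately crude suffix-stripper. Real search engines use far more sophisticated morphological analysis; the suffix list here is invented for the demo:

```python
# Toy stemmer: strip common English suffixes so related word forms
# map to the same index term. Longest suffixes are tried first.
SUFFIXES = ("ations", "ation", "ators", "ator", "ing", "ers", "er", "s")

def crude_stem(word):
    word = word.lower()
    for suffix in SUFFIXES:
        # Only strip if a reasonably long stem remains.
        if word.endswith(suffix) and len(word) - len(suffix) >= 4:
            return word[: -len(suffix)]
    return word

# "translator" and "translation" reduce to the same stem, so a search
# for one can also surface documents containing the other.
print(crude_stem("translator"), crude_stem("translation"))  # → transl transl
```

Mapping both query and document words through the same stemmer is what lets a query for "translator" match pages that only say "translation".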
A screenshot of suggestions by Google Search when "wikip" is typed
In 2008, Google started to give users autocompleted search suggestions in a list below the search bar while typing, originally with the approximate result count previewed for each listed search suggestion.[117]
Google's homepage includes a button labeled "I'm Feeling Lucky". This feature originally allowed users to type in their search query, click the button and be taken directly to the first result, bypassing the search results page. Clicking it while leaving the search box empty opens Google's archive of Doodles.[118] With the 2010 announcement of Google Instant, an automatic feature that immediately displays relevant results as users type their query, the "I'm Feeling Lucky" button disappeared, requiring users to opt out of Instant results through search settings to keep using the "I'm Feeling Lucky" functionality.[119] In 2012, "I'm Feeling Lucky" was changed to serve as an advertisement for Google services: when users hover their mouse over the button, it spins and shows an emotion ("I'm Feeling Puzzled" or "I'm Feeling Trendy", for instance), and, when clicked, takes users to a Google service related to that emotion.[120]
Tom Chavez of "Rapt", a firm helping to determine a website's advertising worth, estimated in 2007 that Google lost $110 million in revenue per year due to use of the button, which bypasses the advertisements found on the search results page.[121]
Besides its main text-based search-engine function, Google Search offers multiple quick, interactive features. These include, but are not limited to:[122][123][124]
During Google's developer conference, Google I/O, in May 2013, the company announced that users on Google Chrome and ChromeOS would be able to have the browser initiate an audio-based search by saying "OK Google", with no button presses required. After having the answer presented, users can follow up with additional, contextual questions; an example includes initially asking "OK Google, will it be sunny in Santa Cruz this weekend?", hearing a spoken answer, and replying with "how far is it from here?"[125][126] An update to the Chrome browser with voice-search functionality rolled out a week later, though it required a button press on a microphone icon rather than "OK Google" voice activation.[127] Google released a browser extension for the Chrome browser, named with a "beta" tag for unfinished development, shortly thereafter.[128] In May 2014, the company officially added "OK Google" into the browser itself;[129] they removed it in October 2015, citing low usage, though the microphone icon for activation remained available.[130] In May 2016, 20% of search queries on mobile devices were done through voice.[131]
In addition to its tool for searching web pages, Google also provides services for searching images, Usenet newsgroups, news websites, videos (Google Videos), searching by locality, maps, and items for sale online. Google Videos allows searching the World Wide Web for video clips.[132] The service evolved from Google Video, Google's discontinued video hosting service that also allowed users to search the web for video clips.[132]
There are also products available from Google that are not directly search-related. Gmail, for example, is a webmail application, but still includes search features; Google Browser Sync does not offer any search facilities, although it aims to organize a user's browsing time.
In 2009, Google claimed that a search query requires altogether about 1 kJ or 0.0003 kW·h,[135] which is enough to raise the temperature of one liter of water by 0.24 °C. According to green search engine Ecosia, the industry standard for search engines is estimated to be about 0.2 grams of CO2 emission per search.[136] Google's 40,000 searches per second translate to 8 kg CO2 per second or over 252 million kilos of CO2 per year.[137]
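The figures quoted above are internally consistent, as a quick back-of-the-envelope check shows (values taken from the text; the specific heat of water is assumed to be about 4,186 J/(kg·°C)):

```python
# Sanity-checking the energy and CO2 figures quoted above.
energy_per_search_J = 1_000           # ~1 kJ per query (Google, 2009)
kWh = energy_per_search_J / 3.6e6     # 1 kWh = 3.6 MJ
print(round(kWh, 4))                  # ~0.0003 kWh

# Temperature rise of 1 L (~1 kg) of water: Q = m * c * dT
delta_T = energy_per_search_J / (1.0 * 4186)  # c ~ 4186 J/(kg.C)
print(round(delta_T, 2))              # ~0.24 C

# CO2: 40,000 searches/s at the ~0.2 g/search industry estimate
per_second_kg = 40_000 * 0.2 / 1000   # 8 kg CO2 per second
per_year_kg = per_second_kg * 60 * 60 * 24 * 365
print(round(per_year_kg / 1e6))       # ~252 million kg per year
```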
On certain occasions, the logo on Google's webpage will change to a special version, known as a "Google Doodle". This is a picture, drawing, animation, or interactive game that includes the logo. It is usually done for a special event or day although not all of them are well known.[138] Clicking on the Doodle links to a string of Google search results about the topic. The first was a reference to the Burning Man Festival in 1998,[139][140] and others have been produced for the birthdays of notable people like Albert Einstein, historical events like the interlocking Lego block's 50th anniversary and holidays like Valentine's Day.[141] Some Google Doodles have interactivity beyond a simple search, such as the famous "Google Pac-Man" version that appeared on May 21, 2010.
Google has been criticized for placing long-term cookies on users' machines to store preferences, a tactic which also enables them to track a user's search terms and retain the data for more than a year.[142]
Since 2012, Google Inc. has globally introduced encrypted connections for most of its clients, in order to bypass government blocking of its commercial and IT services.[143]
In 2003, The New York Times complained about Google's indexing, claiming that Google's caching of content on its site infringed its copyright for the content.[144] In both Field v. Google and Parker v. Google, the United States District Court of Nevada ruled in favor of Google.[145][146]
A 2019 New York Times article on Google Search showed that images of child sexual abuse had been found on Google and that the company had been reluctant at times to remove them.[147]
Google flags search results with the message "This site may harm your computer" if the site is known to install malicious software in the background or otherwise surreptitiously. For approximately 40 minutes on January 31, 2009, all search results were mistakenly classified as malware and could therefore not be clicked; instead a warning message was displayed and the user was required to enter the requested URL manually. The bug was caused by human error.[148][149][150][151] The URL of "/" (which expands to all URLs) was mistakenly added to the malware patterns file.[149][150]
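The failure mode described, in which the pattern "/" matches every URL, can be sketched with a hypothetical path-prefix matcher (the pattern file and matching logic here are illustrative, not Google's actual implementation):

```python
# A sketch of how a path-prefix blocklist can go wrong. The patterns
# and matcher here are hypothetical, not Google's actual system.
from urllib.parse import urlparse

malware_patterns = ["/badsite/", "/exploit-kit/"]

def flagged(url: str, patterns: list[str]) -> bool:
    """Flag a URL if its path starts with any blocklisted prefix."""
    path = urlparse(url).path or "/"
    return any(path.startswith(p) for p in patterns)

print(flagged("https://example.com/news", malware_patterns))  # False

# The 2009 incident: "/" was mistakenly added to the pattern file.
malware_patterns.append("/")
# Every URL path begins with "/", so every result is now flagged.
print(flagged("https://example.com/news", malware_patterns))  # True
```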
In 2007, a group of researchers observed a tendency for users to rely exclusively on Google Search for finding information, writing that "With the Google interface the user gets the impression that the search results imply a kind of totality. ... In fact, one only sees a small part of what one could see if one also integrates other research tools."[152]
In 2011, Google Search query results have been shown by Internet activist Eli Pariser to be tailored to users, effectively isolating users in what he defined as a filter bubble. Pariser holds algorithms used in search engines such as Google Search responsible for catering "a personal ecosystem of information".[153] Although contrasting views have mitigated the potential threat of "informational dystopia" and questioned the scientific nature of Pariser's claims,[154] filter bubbles have been mentioned to account for the surprising results of the U.S. presidential election in 2016 alongside fake news and echo chambers, suggesting that Facebook and Google have designed personalized online realities in which "we only see and hear what we like".[155]
In 2012, the US Federal Trade Commission fined Google US$22.5 million for violating their agreement not to violate the privacy of users of Apple's Safari web browser.[156] The FTC was also continuing to investigate if Google's favoring of their own services in their search results violated antitrust regulations.[157]
In a November 2023 disclosure, during the ongoing antitrust trial against Google, an economics professor at the University of Chicago revealed that Google pays Apple 36% of all search advertising revenue generated when users access Google through the Safari browser. This revelation reportedly caused Google's lead attorney to cringe visibly.[citation needed] The revenue generated from Safari users has been kept confidential, but the 36% figure suggests that it is likely in the tens of billions of dollars.
Both Apple and Google have argued that disclosing the specific terms of their search default agreement would harm their competitive positions. However, the court ruled that the information was relevant to the antitrust case and ordered its disclosure. This revelation has raised concerns about the dominance of Google in the search engine market and the potential anticompetitive effects of its agreements with Apple.[158]
Google search engine robots are programmed to use algorithms that understand and predict human behavior. The book, Race After Technology: Abolitionist Tools for the New Jim Code[159] by Ruha Benjamin talks about human bias as a behavior that the Google search engine can recognize. In 2016, some users searched Google for "three Black teenagers" and images of criminal mugshots of young African American teenagers came up. Then, the users searched "three White teenagers" and were presented with photos of smiling, happy teenagers. They also searched for "three Asian teenagers", and very revealing photos of Asian girls and women appeared. Benjamin concluded that these results reflect human prejudice and views on different ethnic groups. A group of analysts explained the concept of a racist computer program: "The idea here is that computers, unlike people, can't be racist but we're increasingly learning that they do in fact take after their makers ... Some experts believe that this problem might stem from the hidden biases in the massive piles of data that the algorithms process as they learn to recognize patterns ... reproducing our worst values".[159]
On August 5, 2024, Google lost a lawsuit, filed in 2020 in the U.S. District Court for the District of Columbia, with Judge Amit Mehta finding that the company had an illegal monopoly over Internet search.[160] This monopoly was held to be in violation of Section 2 of the Sherman Act.[161] Google has said it will appeal the ruling,[162] though it did propose loosening its search deals with Apple and others that required them to set Google as the default search engine.[163]
As people talk about "googling" rather than searching, the company has taken some steps to defend its trademark, in an effort to prevent it from becoming a generic trademark.[164][165] This has led to lawsuits, threats of lawsuits, and the use of euphemisms, such as calling Google Search a famous web search engine.[166]
Until May 2013, Google Search had offered a feature to translate search queries into other languages. A Google spokesperson told Search Engine Land that "Removing features is always tough, but we do think very hard about each decision and its implications for our users. Unfortunately, this feature never saw much pick up".[167]
Instant search was announced in September 2010 as a feature that displayed suggested results while the user typed in their search query, initially only in select countries or to registered users.[168] The primary advantage of the new system was its ability to save time, with Marissa Mayer, then-vice president of search products and user experience, proclaiming that the feature would save 2–5 seconds per search, elaborating that "That may not seem like a lot at first, but it adds up. With Google Instant, we estimate that we'll save our users 11 hours with each passing second!"[169] Matt Van Wagner of Search Engine Land wrote that "Personally, I kind of like Google Instant and I think it represents a natural evolution in the way search works", and also praised Google's efforts in public relations, writing that "With just a press conference and a few well-placed interviews, Google has parlayed this relatively minor speed improvement into an attention-grabbing front-page news story".[170] The upgrade also became notable for the company switching Google Search's underlying technology from HTML to AJAX.[171]
Instant Search could be disabled via Google's "preferences" menu for those who didn't want its functionality.[172]
The publication 2600: The Hacker Quarterly compiled a list of words that Google Instant did not show suggested results for, with a Google spokesperson giving the following statement to Mashable:[173]
There are several reasons you may not be seeing search queries for a particular topic. Among other things, we apply a narrow set of removal policies for pornography, violence, and hate speech. It's important to note that removing queries from Autocomplete is a hard problem, and not as simple as blacklisting particular terms and phrases.
In search, we get more than one billion searches each day. Because of this, we take an algorithmic approach to removals, and just like our search algorithms, these are imperfect. We will continue to work to improve our approach to removals in Autocomplete, and are listening carefully to feedback from our users.
Our algorithms look not only at specific words, but compound queries based on those words, and across all languages. So, for example, if there's a bad word in Russian, we may remove a compound word including the transliteration of the Russian word into English. We also look at the search results themselves for given queries. So, for example, if the results for a particular query seem pornographic, our algorithms may remove that query from Autocomplete, even if the query itself wouldn't otherwise violate our policies. This system is neither perfect nor instantaneous, and we will continue to work to make it better.
PC Magazine discussed the inconsistency in how some forms of the same topic are allowed; for instance, "lesbian" was blocked, while "gay" was not, and "cocaine" was blocked, while "crack" and "heroin" were not. The report further stated that seemingly normal words were also blocked due to pornographic innuendos, most notably "scat", likely due to having two completely separate contextual meanings, one for music and one for a sexual practice.[174]
On July 26, 2017, Google removed Instant results, due to a growing number of searches on mobile devices, where interaction with search, as well as screen sizes, differ significantly from a computer.[175][176]
"Instant previews" allowed previewing screenshots of search results' web pages without having to open them. The feature was introduced in November 2010 to the desktop website and removed in April 2013 citing low usage.[177][178]
Various search engines provide encrypted Web search facilities. In May 2010 Google rolled out SSL-encrypted web search.[179] The encrypted search was accessed at encrypted.google.com.[180] However, the web search is encrypted via Transport Layer Security (TLS) by default today, thus every search request should be automatically encrypted if TLS is supported by the web browser.[181] On its support website, Google announced that the address encrypted.google.com would be turned off April 30, 2018, stating that all Google products and most new browsers use HTTPS connections as the reason for the discontinuation.[182]
Google Real-Time Search was a feature of Google Search in which search results also sometimes included real-time information from sources such as Twitter, Facebook, blogs, and news websites.[183] The feature was introduced on December 7, 2009,[184] and went offline on July 2, 2011, after the deal with Twitter expired.[185] Real-Time Search included Facebook status updates beginning on February 24, 2010.[186] A feature similar to Real-Time Search was already available on Microsoft's Bing search engine, which showed results from Twitter and Facebook.[187] The interface for the engine showed a live, descending "river" of posts in the main region (which could be paused or resumed), while a bar chart metric of the frequency of posts containing a certain search term or hashtag was located on the right hand corner of the page above a list of most frequently reposted posts and outgoing links. Hashtag search links were also supported, as were "promoted" tweets hosted by Twitter (located persistently on top of the river) and thumbnails of retweeted image or video links.
In January 2011, geolocation links of posts were made available alongside results in Real-Time Search. In addition, posts containing syndicated or attached shortened links were made searchable by the link: query option. In July 2011, Real-Time Search became inaccessible, with the Real-Time link in the Google sidebar disappearing and a custom 404 error page generated by Google returned at its former URL. Google originally suggested that the interruption was temporary and related to the launch of Google+;[188] they subsequently announced that it was due to the expiry of a commercial arrangement with Twitter to provide access to tweets.[189]
List of search engines by popularity – software systems for finding relevant information on the Web
^Sherman, Chris; Price, Gary (May 22, 2008). "The Invisible Web: Uncovering Sources Search Engines Can't See". Illinois Digital Environment for Access to Learning and Scholarship. University of Illinois at Urbana–Champaign. hdl:2142/8528.
^Megan Gray (October 2, 2023). "How Google Alters Search Queries to Get at Your Wallet". Archived from the original on October 2, 2023. This onscreen Google slide had to do with a "semantic matching" overhaul to its SERP algorithm. When you enter a query, you might expect a search engine to incorporate synonyms into the algorithm as well as text phrase pairings in natural language processing. But this overhaul went further, actually altering queries to generate more commercial results.
^Parramore, Lynn (October 10, 2010). "The Filter Bubble". The Atlantic. Archived from the original on August 22, 2017. Retrieved April 20, 2011. Since Dec. 4, 2009, Google has been personalized for everyone. So when I had two friends this spring Google 'BP,' one of them got a set of links that was about investment opportunities in BP. The other one got information about the oil spill
^Mostafa M. El-Bermawy (November 18, 2016). "Your Filter Bubble is Destroying Democracy". Wired. Retrieved March 3, 2017. The global village that was once the internet ... digital islands of isolation that are drifting further apart each day ... your experience online grows increasingly personalized
^"Google Instant Search: The Complete User's Guide". Search Engine Land. September 8, 2010. Archived from the original on October 20, 2021. Retrieved October 5, 2021. Google Instant only works for searchers in the US or who are logged in to a Google account in selected countries outside the US
This article is about domain names in the Internet. For other uses, see Domain (disambiguation).
An annotated example of a domain name
In the Internet, a domain name is a string that identifies a realm of administrative autonomy, authority or control. Domain names are often used to identify services provided through the Internet, such as websites, email services and more. Domain names are used in various networking contexts and for application-specific naming and addressing purposes. In general, a domain name identifies a network domain or an Internet Protocol (IP) resource, such as a personal computer used to access the Internet, or a server computer.
Domain names are formed by the rules and procedures of the Domain Name System (DNS). Any name registered in the DNS is a domain name. Domain names are organized in subordinate levels (subdomains) of the DNS root domain, which is nameless. The first-level set of domain names are the top-level domains (TLDs), including the generic top-level domains (gTLDs), such as the prominent domains com, info, net, edu, and org, and the country code top-level domains (ccTLDs). Below these top-level domains in the DNS hierarchy are the second-level and third-level domain names that are typically open for reservation by end-users who wish to connect local area networks to the Internet, create other publicly accessible Internet resources or run websites, such as "wikipedia.org". The registration of a second- or third-level domain name is usually administered by a domain name registrar, which sells its services to the public.
A fully qualified domain name (FQDN) is a domain name that is completely specified with all labels in the hierarchy of the DNS, having no parts omitted. Traditionally, an FQDN ends in a dot (.) to denote the top of the DNS tree.[1] Labels in the Domain Name System are case-insensitive and may therefore be written with any desired capitalization, but domain names are most commonly written in lowercase in technical contexts.[2] A hostname is a domain name that has at least one associated IP address.
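A minimal sketch of comparing names under these rules, lowercasing the labels and ignoring the optional root dot (for internationalized names, real software should compare the IDNA/ASCII forms instead):

```python
# Compare domain names per the rules above: labels are case-insensitive,
# and a trailing dot marks a fully qualified name rooted at the DNS root.
def normalize(name: str) -> str:
    """Lowercase and strip the optional root dot for comparison."""
    return name.rstrip(".").lower()

# All of these refer to the same domain:
assert normalize("WWW.Example.COM.") == normalize("www.example.com")
print(normalize("En.Wikipedia.ORG."))  # en.wikipedia.org
```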
Domain names serve to identify Internet resources, such as computers, networks, and services, with a text-based label that is easier to memorize than the numerical addresses used in the Internet protocols. A domain name may represent entire collections of such resources or individual instances. Individual Internet host computers use domain names as host identifiers, also called hostnames. The term hostname is also used for the leaf labels in the domain name system, usually without further subordinate domain name space. Hostnames appear as a component in Uniform Resource Locators (URLs) for Internet resources such as websites (e.g., en.wikipedia.org).
Domain names are also used as simple identification labels to indicate ownership or control of a resource. Such examples are the realm identifiers used in the Session Initiation Protocol (SIP), the Domain Keys used to verify DNS domains in e-mail systems, and in many other Uniform Resource Identifiers (URIs).
An important function of domain names is to provide easily recognizable and memorizable names to numerically addressed Internet resources. This abstraction allows any resource to be moved to a different physical location in the address topology of the network, globally or locally in an intranet. Such a move usually requires changing the IP address of a resource and the corresponding translation of this IP address to and from its domain name.
Domain names are used to establish a unique identity. Organizations can choose a domain name that corresponds to their name, helping Internet users to reach them easily.
A generic domain is a name that defines a general category, rather than a specific or personal instance, for example, the name of an industry, rather than a company name. Some examples of generic names are books.com, music.com, and travel.info. Companies have created brands based on generic names, and such generic domain names may be valuable.[3]
Domain names are often simply referred to as domains and domain name registrants are frequently referred to as domain owners, although domain name registration with a registrar does not confer any legal ownership of the domain name, only an exclusive right of use for a particular duration of time. The use of domain names in commerce may subject them to trademark law.
The practice of using a simple memorable abstraction of a host's numerical address on a computer network dates back to the ARPANET era, before the advent of today's commercial Internet. In the early network, each computer on the network retrieved the hosts file (HOSTS.TXT) from a computer at SRI (now SRI International),[4][5] which mapped computer hostnames to numerical addresses. The rapid growth of the network made it impossible to maintain a centrally organized hostname registry and in 1983 the Domain Name System was introduced on the ARPANET and published by the Internet Engineering Task Force as RFC 882 and RFC 883.
The following are the first five .com domains, with the dates of their registration:[6]

symbolics.com – 15 March 1985
bbn.com – 24 April 1985
think.com – 24 May 1985
mcc.com – 11 July 1985
dec.com – 30 September 1985
The hierarchy of labels in a fully qualified domain name
The domain name space consists of a tree of domain names. Each node in the tree holds information associated with the domain name. The tree sub-divides into zones beginning at the DNS root zone.
A domain name consists of one or more parts, technically called labels, that are conventionally concatenated, and delimited by dots, such as example.com.
The right-most label conveys the top-level domain; for example, the domain name www.example.com belongs to the top-level domain com.
The hierarchy of domains descends from the right to the left label in the name; each label to the left specifies a subdivision, or subdomain of the domain to the right. For example: the label example specifies a node example.com as a subdomain of the com domain, and www is a label to create www.example.com, a subdomain of example.com. Each label may contain from 1 to 63 octets. The empty label is reserved for the root node and when fully qualified is expressed as the empty label terminated by a dot. The full domain name may not exceed a total length of 253 ASCII characters in its textual representation.[8]
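These length limits can be checked mechanically; the sketch below (treating characters as a stand-in for octets, which holds for ASCII names) validates the 63-octet label limit and the 253-character total:

```python
# Check the structural limits described above: 1-63 octets per label,
# and at most 253 characters for the full textual name (ignoring the
# optional trailing root dot). Characters equal octets for ASCII names.
def check_lengths(domain: str) -> bool:
    name = domain.rstrip(".")  # ignore the optional root dot
    if not name or len(name) > 253:
        return False
    labels = name.split(".")
    return all(1 <= len(label) <= 63 for label in labels)

print(check_lengths("www.example.com"))   # True
print(check_lengths("a" * 64 + ".com"))   # False: label exceeds 63 octets
print(check_lengths("www..example.com"))  # False: empty label is reserved
```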
A hostname is a domain name that has at least one associated IP address. For example, the domain names www.example.com and example.com are also hostnames, whereas the com domain is not. However, other top-level domains, particularly country code top-level domains, may indeed have an IP address, and if so, they are also hostnames.
Hostnames impose restrictions on the characters allowed in the corresponding domain name. A valid hostname is also a valid domain name, but a valid domain name may not necessarily be valid as a hostname.
When the Domain Name System was devised in the 1980s, the domain name space was divided into two main groups of domains.[9] The country code top-level domains (ccTLD) were primarily based on the two-character territory codes of ISO-3166 country abbreviations. In addition, a group of seven generic top-level domains (gTLD) was implemented which represented a set of categories of names and multi-organizations.[10] These were the domains gov, edu, com, mil, org, net, and int. These two types of top-level domains (TLDs) are the highest level of domain names of the Internet. Top-level domains form the DNS root zone of the hierarchical Domain Name System. Every domain name ends with a top-level domain label.
During the growth of the Internet, it became desirable to create additional generic top-level domains. As of October 2009, 21 generic top-level domains and 250 two-letter country-code top-level domains existed.[11] In addition, the ARPA domain serves technical purposes in the infrastructure of the Domain Name System.
During the 32nd International Public ICANN Meeting in Paris in 2008,[12] ICANN started a new process of TLD naming policy to take a "significant step forward on the introduction of new generic top-level domains." This program envisions the availability of many new or already proposed domains, as well as a new application and implementation process.[13] Observers believed that the new rules could result in hundreds of new top-level domains being registered.[14] In 2012, the program commenced and received 1,930 applications.[15] By 2016, the milestone of 1,000 live gTLDs had been reached.
For special purposes, such as network testing, documentation, and other applications, IANA also reserves a set of special-use domain names.[17] This list contains domain names such as example, local, localhost, and test. Other top-level domain names containing trade marks are registered for corporate use. Cases include brands such as BMW, Google, and Canon.[18]
Below the top-level domains in the domain name hierarchy are the second-level domain (SLD) names. These are the names directly to the left of .com, .net, and the other top-level domains. As an example, in the domain example.co.uk, co is the second-level domain.
Next are third-level domains, which are written immediately to the left of a second-level domain. There can be fourth- and fifth-level domains, and so on, with virtually no limitation. Each label is separated by a full stop (dot). An example of an operational domain name with four levels of domain labels is sos.state.oh.us. 'sos' is said to be a sub-domain of 'state.oh.us', and 'state' a sub-domain of 'oh.us', etc. In general, subdomains are domains subordinate to their parent domain. An example of very deep levels of subdomain ordering are the IPv6 reverse resolution DNS zones, e.g., 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa, which is the reverse DNS resolution domain name for the IP address of a loopback interface, or the localhost name.
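The ip6.arpa name quoted above need not be constructed by hand; Python's standard ipaddress module derives it directly from an address object via the reverse_pointer attribute:

```python
# Derive the reverse-resolution domain name for an IP address using
# the standard library's ipaddress module.
import ipaddress

loopback = ipaddress.ip_address("::1")
print(loopback.reverse_pointer)
# 1.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.0.ip6.arpa

# The same attribute works for IPv4 (in-addr.arpa), where the octets
# of the address are reversed:
print(ipaddress.ip_address("127.0.0.1").reverse_pointer)
# 1.0.0.127.in-addr.arpa
```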
Second-level (or lower-level, depending on the established parent hierarchy) domain names are often created based on the name of a company (e.g., bbc.co.uk), product or service (e.g. hotmail.com). Below these levels, the next domain name component has been used to designate a particular host server. Therefore, ftp.example.com might be an FTP server, www.example.com would be a World Wide Web server, and mail.example.com could be an email server, each intended to perform only the implied function. Modern technology allows multiple physical servers with either different (cf. load balancing) or even identical addresses (cf. anycast) to serve a single hostname or domain name, or multiple domain names to be served by a single computer. The latter is very popular in Web hosting service centers, where service providers host the websites of many organizations on just a few servers.
The hierarchical DNS labels or components of domain names are separated in a fully qualified name by the full stop (dot, .).
The character set allowed in the Domain Name System is based on ASCII and does not allow the representation of names and words of many languages in their native scripts or alphabets. ICANN approved the Internationalized domain name (IDNA) system, which maps Unicode strings used in application user interfaces into the valid DNS character set by an encoding called Punycode. For example, københavn.eu is mapped to xn--kbenhavn-54a.eu. Many registries have adopted IDNA.
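The Punycode mapping shown above can be reproduced with Python's built-in "idna" codec (which implements the older IDNA 2003 rules; production software often uses the third-party idna package for IDNA 2008):

```python
# Map a Unicode domain name to its ASCII-compatible (Punycode) form
# and back, using Python's built-in "idna" codec.
ascii_form = "københavn.eu".encode("idna")
print(ascii_form)  # b'xn--kbenhavn-54a.eu'

# Decoding maps the ASCII-compatible form back to Unicode:
print(ascii_form.decode("idna"))  # københavn.eu
```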
The first commercial Internet domain name, in the TLD com, was registered on 15 March 1985 in the name symbolics.com by Symbolics Inc., a computer systems firm in Cambridge, Massachusetts.
By 1992, fewer than 15,000 com domains had been registered.
In the first quarter of 2015, 294 million domain names had been registered.[19] A large fraction of them are in the com TLD, which as of December 21, 2014, had 115.6 million domain names,[20] including 11.9 million online business and e-commerce sites, 4.3 million entertainment sites, 3.1 million finance related sites, and 1.8 million sports sites.[21] As of July 15, 2012, the com TLD had more registrations than all of the ccTLDs combined.[22]
As of December 31, 2023,[update] 359.8 million domain names had been registered.[23]
The right to use a domain name is delegated by domain name registrars, which are accredited by the Internet Corporation for Assigned Names and Numbers (ICANN), the organization charged with overseeing the name and number systems of the Internet. In addition to ICANN, each top-level domain (TLD) is maintained and serviced technically by an administrative organization operating a registry. A registry is responsible for maintaining the database of names registered within the TLD it administers. The registry receives registration information from each domain name registrar authorized to assign names in the corresponding TLD and publishes the information using a special service, the WHOIS protocol.
Registries and registrars usually charge an annual fee for the service of delegating a domain name to a user and providing a default set of name servers. Often, this transaction is termed a sale or lease of the domain name, and the registrant may sometimes be called an "owner", but no such legal relationship is actually associated with the transaction, only the exclusive right to use the domain name. More correctly, authorized users are known as "registrants" or as "domain holders".
ICANN publishes the complete list of TLD registries and domain name registrars. Registrant information associated with domain names is maintained in an online database accessible with the WHOIS protocol. For most of the 250 country code top-level domains (ccTLDs), the domain registries maintain the WHOIS (Registrant, name servers, expiration dates, etc.) information.
Some domain name registries, often called network information centers (NIC), also function as registrars to end-users. The major generic top-level domain registries, such as for the com, net, org, info domains and others, use a registry-registrar model consisting of hundreds of domain name registrars (see lists at ICANN[24] or VeriSign).[25] In this method of management, the registry only manages the domain name database and the relationship with the registrars. The registrants (users of a domain name) are customers of the registrar, in some cases through additional layers of resellers.
There are also a few alternative DNS root providers that try to compete with or complement ICANN's role in domain name administration; however, most of them have failed to receive wide recognition, and thus domain names offered by those alternative roots cannot be used universally on most internet-connected machines without additional dedicated configuration.
In the process of registering a domain name and maintaining authority over the new name space created, registrars use several key pieces of information connected with a domain:
Administrative contact. A registrant usually designates an administrative contact to manage the domain name. The administrative contact usually has the highest level of control over a domain. Management functions delegated to the administrative contact may include managing all business information, such as the name of record, postal address, and contact information of the official registrant, and ensuring conformance with the requirements of the domain registry in order to retain the right to use the domain name. The administrative contact also supplies the contact information for the technical and billing functions.
Technical contact. The technical contact manages the name servers of a domain name. The functions of a technical contact include assuring conformance of the configurations of the domain name with the requirements of the domain registry, maintaining the domain zone records, and providing continuous functionality of the name servers (that leads to the accessibility of the domain name).
Billing contact. The party responsible for receiving billing invoices from the domain name registrar and paying applicable fees.
Name servers. Most registrars provide two or more name servers as part of the registration service. However, a registrant may specify its own authoritative name servers to host a domain's resource records. The registrar's policies govern the number of servers and the type of server information required. Some providers require a hostname and the corresponding IP address or just the hostname, which must be resolvable either in the new domain, or exist elsewhere. Based on traditional requirements (RFC 1034), typically a minimum of two servers is required.
A domain name consists of one or more labels, each of which is formed from the set of ASCII letters, digits, and hyphens (a–z, A–Z, 0–9, -), but not starting or ending with a hyphen. The labels are case-insensitive; for example, 'label' is equivalent to 'Label' or 'LABEL'. In the textual representation of a domain name, the labels are separated by a full stop (period).
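The label rules above (the "LDH" rule: letters, digits, hyphen, no leading or trailing hyphen, with the traditional limits of 63 octets per label and 253 characters for the whole name) can be checked mechanically. A small illustrative validator, with function and pattern names of my own choosing:

```python
import re

# One label: 1-63 LDH characters, not starting or ending with a hyphen.
LABEL_RE = re.compile(r"^(?!-)[A-Za-z0-9-]{1,63}(?<!-)$")

def is_valid_hostname(name: str) -> bool:
    """Check a domain name against the classic LDH syntax rules.
    Labels are case-insensitive, so normalize to lowercase first."""
    if not name or len(name) > 253:
        return False
    return all(LABEL_RE.match(label) for label in name.lower().split("."))
```

Note that this checks the conservative hostname syntax only; internationalized names must first be converted to their ASCII (Punycode) form before such a check applies.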
Domain names are often compared to real estate: they are the foundation on which a website can be built, and the highest-quality domain names, like sought-after real estate, tend to carry significant value, usually because of their online brand-building potential, their use in advertising, their search engine optimization benefits, and other criteria.
A few companies have offered low-cost, below-cost or even free domain registration with a variety of models adopted to recoup the costs to the provider. These usually require that domains be hosted on their website within a framework or portal that includes advertising wrapped around the domain holder's content, revenue from which allows the provider to recoup the costs. Domain registrations were free of charge when the DNS was new. A domain holder may provide a practically unlimited number of subdomains in their domain. For example, the owner of example.org could provide subdomains such as foo.example.org and foo.bar.example.org to interested parties.
Many desirable domain names are already assigned and users must search for other acceptable names, using Web-based search features, or WHOIS and dig operating system tools. Many registrars have implemented domain name suggestion tools which search domain name databases and suggest available alternative domain names related to keywords provided by the user.
The business of reselling registered domain names is known as the domain aftermarket. Various factors influence the perceived or market value of a domain name. Most high-priced domain sales are carried out privately,[26] a practice also called confidential or anonymous domain acquisition.[27]
Intercapping is often used to emphasize the meaning of a domain name, because DNS names are not case-sensitive. Some names may be misinterpreted in certain uses of capitalization. For example: Who Represents, a database of artists and agents, chose whorepresents.com,[28] which can be misread. In such situations, the proper meaning may be clarified by placement of hyphens when registering a domain name. For instance, Experts Exchange, a programmers' discussion site, used expertsexchange.com, but changed its domain name to experts-exchange.com.[29]
A domain name may point to multiple IP addresses to provide server redundancy for the services offered, a feature that is used to manage the traffic of large, popular websites.
Web hosting services, on the other hand, run servers that are typically assigned only one or a few addresses while serving websites for many domains, a technique referred to as virtual web hosting. Such IP address overloading requires that each request identify the domain name being referenced, for instance by using the HTTP request header field Host: or Server Name Indication.
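The mechanism is visible in the request itself: HTTP/1.1 made the Host header mandatory precisely so that one IP address can serve many domains. A minimal sketch (the site names and dispatch function are illustrative, not any particular server's API):

```python
def build_request(host: str, path: str = "/") -> bytes:
    """A minimal HTTP/1.1 request; the Host header names the domain."""
    return (f"GET {path} HTTP/1.1\r\n"
            f"Host: {host}\r\n"
            "Connection: close\r\n\r\n").encode("ascii")

def dispatch(request: bytes, sites: dict[str, str]) -> str:
    """Virtual-host dispatch in miniature: select the site to serve by
    reading the Host header, falling back to a default site."""
    for line in request.decode("ascii").split("\r\n"):
        if line.lower().startswith("host:"):
            host = line.split(":", 1)[1].strip()
            return sites.get(host, "default site")
    return "default site"
```

For HTTPS the same problem arises one layer down: the TLS handshake happens before any HTTP header is sent, which is why Server Name Indication carries the domain name in the handshake itself.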
Critics often claim abuse of administrative power over domain names. Particularly noteworthy was the VeriSign Site Finder system which redirected all unregistered .com and .net domains to a VeriSign webpage. For example, at a public meeting with VeriSign to air technical concerns about Site Finder,[30] numerous people, active in the IETF and other technical bodies, explained how they were surprised by VeriSign's changing the fundamental behavior of a major component of Internet infrastructure, not having obtained the customary consensus. Site Finder, at first, assumed every Internet query was for a website, and it monetized queries for incorrect domain names, taking the user to VeriSign's search site. Other applications, such as many implementations of email, treat a lack of response to a domain name query as an indication that the domain does not exist, and that the message can be treated as undeliverable. The original VeriSign implementation broke this assumption for mail, because it would always resolve an erroneous domain name to that of Site Finder. While VeriSign later changed Site Finder's behavior with regard to email, there was still widespread protest about VeriSign's action being more in its financial interest than in the interest of the Internet infrastructure component for which VeriSign was the steward.
Despite widespread criticism, VeriSign removed Site Finder only reluctantly, after the Internet Corporation for Assigned Names and Numbers (ICANN) threatened to revoke its contract to administer the root name servers. ICANN published the extensive set of letters exchanged, committee reports, and ICANN decisions.[31]
There is also significant disquiet regarding the United States Government's political influence over ICANN. This was a significant issue in the attempt to create a .xxx top-level domain and sparked greater interest in alternative DNS roots that would be beyond the control of any single country.[32]
Additionally, there are numerous accusations of domain name front running, whereby registrars, when given whois queries, automatically register the domain name for themselves. Network Solutions has been accused of this.[33]
In the early 21st century, the US Department of Justice (DOJ) pursued the seizure of domain names, based on the legal theory that domain names constitute property used to engage in criminal activity and are thus subject to forfeiture. For example, in the seizure of the domain name of a gambling website, the DOJ referenced 18 U.S.C. § 981 and 18 U.S.C. § 1955(d).[34][1] In 2013 the US government seized Liberty Reserve, citing 18 U.S.C. § 982(a)(1).[35]
The U.S. Congress passed the Combating Online Infringement and Counterfeits Act in 2010. Consumer Electronics Association vice president Michael Petricone was worried that seizure was a blunt instrument that could harm legitimate businesses.[36][37] After a joint operation on February 15, 2011, the DOJ and the Department of Homeland Security claimed to have seized ten domains of websites involved in advertising and distributing child pornography, but also mistakenly seized the domain name of a large DNS provider, temporarily replacing 84,000 websites with seizure notices.[38]
The Police Intellectual Property Crime Unit (PIPCU) and other UK law enforcement organisations make domain suspension requests to Nominet, which processes them on the basis of breach of terms and conditions. Around 16,000 domains are suspended annually, and about 80% of the requests originate from PIPCU.[40]
The ICANN Business Constituency (BC) has spent decades trying to make IDN variants work at the second level and, in the last several years, at the top level. Domain name variants are domain names recognized in different character encodings, such as a single domain presented in both traditional and simplified Chinese; this is an internationalization and localization problem. Under domain name variants, the different encodings of the domain name (in simplified and traditional Chinese) would resolve to the same host.[42][43]
According to John Levine, an expert on Internet related topics, "Unfortunately, variants don't work. The problem isn't putting them in the DNS, it's that once they're in the DNS, they don't work anywhere else."[42]
A fictitious domain name is a domain name used in a work of fiction or popular culture to refer to a domain that does not actually exist, often with invalid or unofficial top-level domains such as ".web", a usage exactly analogous to the dummy 555 telephone number prefix used in film and other media. The canonical fictitious domain name is "example.com", specifically set aside by IANA in RFC 2606 for such use, along with the .example TLD.
Domain names used in works of fiction have often been registered in the DNS, either by their creators or by cybersquatters attempting to profit from it. This phenomenon prompted NBC to purchase the domain name Hornymanatee.com after talk-show host Conan O'Brien spoke the name while ad-libbing on his show. O'Brien subsequently created a website based on the concept and used it as a running gag on the show.[44] Companies whose works have used fictitious domain names have also employed firms such as MarkMonitor to park fictional domain names in order to prevent misuse by third parties.[45]
Misspelled domain names, also known as typosquatting or URL hijacking, are domain names that are intentionally or unintentionally misspelled versions of popular or well-known domain names. The goal of misspelled domain names is to capitalize on internet users who accidentally type in a misspelled domain name, and are then redirected to a different website.
Misspelled domain names are often used for malicious purposes, such as phishing scams or distributing malware. In some cases, the owners of misspelled domain names may also attempt to sell the domain names to the owners of the legitimate domain names, or to individuals or organizations who are interested in capitalizing on the traffic generated by internet users who accidentally type in the misspelled domain names.
To avoid being caught by a misspelled domain name, internet users should be careful to type in domain names correctly, and should avoid clicking on links that appear suspicious or unfamiliar. Additionally, individuals and organizations who own popular or well-known domain names should consider registering common misspellings of their domain names in order to prevent others from using them for malicious purposes.
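Defensive registration of common misspellings, as suggested above, typically starts from mechanically generated single-error variants of the name. A hypothetical sketch (the function name and the two error classes chosen, adjacent swaps and omissions, are illustrative; real typosquat checkers also consider keyboard-adjacent substitutions, doubled letters, and alternative TLDs):

```python
def typo_variants(domain: str) -> set[str]:
    """Generate simple single-error variants of a domain's first label:
    adjacent-character swaps (exmaple.com) and single-character
    omissions (exmple.com)."""
    label, _, rest = domain.partition(".")
    variants = set()
    for i in range(len(label) - 1):  # swap neighbouring characters
        swapped = label[:i] + label[i + 1] + label[i] + label[i + 2:]
        variants.add(f"{swapped}.{rest}")
    for i in range(len(label)):      # drop one character
        variants.add(f"{label[:i]}{label[i + 1:]}.{rest}")
    variants.discard(domain)         # the correct spelling is not a typo
    return variants
```

A brand owner might feed such a list into availability checks and register the highest-risk entries before a squatter does.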
The term domain name spoofing (or simply, though less accurately, domain spoofing) is used generically to describe one or more of a class of phishing attacks that depend on falsifying or misrepresenting an internet domain name.[46][47] These attacks are designed to persuade unsuspecting users into visiting a web site other than the one intended, or into opening an email that is not in reality from the address shown (or apparently shown).[48] Although website and email spoofing attacks are more widely known, any service that relies on domain name resolution may be compromised.
There are a number of better-known types of domain spoofing:
Typosquatting, also called "URL hijacking", a "sting site", or a "fake URL", is a form of cybersquatting, and possibly brandjacking which relies on mistakes such as typos made by Internet users when inputting a website address into a web browser or composing an email address. Should a user accidentally enter an incorrect domain name, they may be led to any URL (including an alternative website owned by a cybersquatter).[49]
The typosquatter's URL will usually be one of five kinds, all similar to the victim site address:
A common misspelling, or foreign language spelling, of the intended site
IDN homograph attack. This type of attack depends on registering a domain name similar to the 'target' domain, differing from it only in that its spelling includes one or more characters from a different alphabet that look the same to the naked eye. For example, the Cyrillic, Latin, and Greek alphabets each have their own letter A, each with its own binary code point, and Turkish has a dotless letter i (ı) that may not be perceived as different from the ASCII letter i. Most web browsers warn of 'mixed alphabet' domain names,[50][51][52][53] but other services, such as email applications, may not provide the same protection. Reputable top-level domain and country code domain registrars will not accept applications to register a deceptive name, but this policy cannot be presumed to be infallible.
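The homograph trick is easy to demonstrate: two strings can render near-identically while being entirely different domains, and the difference becomes visible once the name is converted to its ASCII wire form (the Punycode "A-label" with the xn-- prefix). A small sketch, using a hypothetical spoof of apple.com and Python's built-in idna codec (which implements the older IDNA 2003 rules):

```python
# The first character below is the Cyrillic 'а' (U+0430), not the
# Latin 'a' (U+0061): visually similar, but a different code point.
spoof = "\u0430pple.com"
assert spoof != "apple.com"  # distinct strings despite similar glyphs

# In the DNS, the non-ASCII label travels as a Punycode A-label,
# so the deception is obvious in the encoded form.
ascii_form = spoof.encode("idna")
assert ascii_form.startswith(b"xn--")
assert ascii_form.endswith(b".com")
```

This is why browsers that detect mixed-script labels often display the raw xn-- form in the address bar rather than the decoded Unicode.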
User Friendly was a webcomic written by J. D. Frazer, also known by his pen name Illiad. Starting in 1997, the strip was one of the earliest webcomics to make its creator a living. The comic is set in a fictional internet service provider and draws humor from dealing with clueless users and geeky subjects. The comic ran seven days a week until 2009, when updates became sporadic, and since 2010 it had been in re-runs only. The webcomic was shut down in late February 2022, after an announcement from Frazer.[1]
User Friendly is set inside a fictional ISP, Columbia Internet.[2] According to reviewer Eric Burns, the strip is set in a world where "[u]sers were dumbasses who asked about cupholders that slid out of their computers, marketing executives were perverse and stupid and deserved humiliation, bosses were clueless and often naively cruel, and I.T. workers were somewhat shortsighted and misguided, but the last bastion of human reason... Every time we see Greg working, it's to deal with yet another annoying, self-important clueless user who hasn't gotten his brain around the digital world".[3] Although mostly gag-a-day, the comic often had ongoing running arcs and occasionally continuing character through-lines.
A.J., Illiad's alter ego,[4] represents "the creative guy" in the strip, maintaining and designing websites. As a web designer, he's uncomfortably crammed in that tiny crevice between the techies and the marketing people. This means he's not disliked by anyone, but they all look at him funny from time to time. A.J. is shy and sensitive, loves most computer games and nifty art, and has a big-brother relationship with the Dust Puppy. A.J. is terrified of grues and attempts to avoid them.[# 2] He was released from the company on two separate occasions but returned shortly thereafter.
In the strip as of September 16, 2005, he and Miranda (another character) are dating. They also have previously dated, but split up over a misunderstanding.
The Chief is Columbia Internet's CEO. He is the leader of the techies and salespeople.
Illiad based the character on a former boss, saying, "The Chief is based on my business mentor. He was the vice president that I reported to back in the day. The Chief, like my mentor, is tall (!) and thin and sports a bushy ring around a bald crown, plus a very thick moustache." The Chief bears a superficial resemblance to the Pointy-Haired Boss of Dilbert fame. However, Illiad says that The Chief was not inspired by the Dilbert character.[# 3] His personality is very different from the PHB, as well: he manages in the laissez-faire style, as opposed to the Marketing-based, micro-managing stance of the PHB. He has encouraged the office to standardise on Linux (much to Stef's chagrin).[# 4]
Born in a server from a combination of dust, lint, and quantum events, the Dust Puppy looks similar to a ball of dust and lint, with eyes, feet and an occasional big toothy smile. He was briefly absent from the strip after accidentally being blown with compressed air while sleeping inside a dusty server.
Although the Dust Puppy is very innocent and unworldly, he plays a superb game of Quake. He also created an artificial intelligence named Erwin, with whom he has been known to do occasional song performances (or filks).
Dust Puppy is liked by most of the other characters, with the exceptions of Stef and the Dust Puppy's evil nemesis, the Crud Puppy.
Crud Puppy (Lord Ignatius Crud)[# 6] is the evil twin, born from the crud in Stef's keyboard; he is the nemesis of the Dust Puppy and sometimes takes the role of "bad guy" in the series. Examples include being the attorney/legal advisor of both Microsoft and then AOL or controlling a "Thing" suit in the Antarctic. He is most often seen in later strips in an Armani suit, usually sitting at the local bar with Cthulhu. The Crud Puppy first appeared in the strip on February 24, 1998.[# 7]
Erwin first appeared in the January 25, 1998 strip. Erwin is a highly advanced artificial intelligence (AI) created overnight during experimentation in artificial intelligence by the Dust Puppy, who was feeling kind of bored. Erwin is written in COBOL[# 8] because Dust Puppy "lost a bet".[# 9] Erwin passes the Turing test with flying colours, and has a dry sense of humour. He is an expert on any subject that is covered on the World Wide Web, such as Elvis sightings and alien conspiracies. Erwin is rather self-centered, and he is fond of mischievous pranks.
Greg is in charge of Technical Support in the strip. In other words, he's the guy that customers whine to when something goes wrong, which drives him nuts. He blows off steam by playing visceral games and doing bad things to the salespeople. He's not a bad sort, but his grip on his sanity hovers somewhere between weak and non-existent, and he once worked for Microsoft Quality Assurance.
Mike is the System Administrator of the strip and is responsible for the smooth running of the network at the office. He's bright but prone to fits of anxiety. His worst nightmare is being locked in a room with a sweaty Windows 95 programmer and no hacking weapons in sight.[5] He loves hot ramen straight out of a styrofoam cup.
Miranda is a trained systems technologist, an experienced UNIX sysadmin, and very, very female. Her technical abilities unnerve the other techs, but her obvious physical charms compel them to stare at her, except for Pitr, who is convinced she is evil. Although she has few character flaws, she does express sadistic tendencies, especially towards marketers and lusers. Miranda finds Dust Puppy adorable.
She and A.J. are dating as of September 16, 2005, although she was previously frustrated by his inability to express himself and his love for her. This comes after years of missed opportunities and misunderstandings, such as when A.J. poured his feelings into an email and Miranda mistook it for the ILOVEYOU email worm and deleted it unread.[6]
Pitr is the administrator of the Columbia Internet server and a self-proclaimed Linux guru. He suddenly began to speak with a fake Slavic accent as part of his program to "Become an Evil Genius." He has almost succeeded in taking over the planet several times. His sworn enemy is Sid, who seems to outdo him at every turn. Pitr's achievements include making the world's (second) strongest coffee, merging Coca-Cola and Pepsi into Pitr-Cola, making Columbia Internet millions with a nuclear weapon purchased from Russia, and creating the infamous Vigor text editor. He briefly worked for Google, nearly succeeding in world domination, but was released from there and returned to Columbia Internet. Despite his vast efforts to become the ultimate evil character, his lack of genuine ill-heartedness prevents him from reaching that goal.
Sid is the oldest of the geeks and very knowledgeable. His advanced age gives him the upper hand against Pitr, whom he has outdone on several occasions, including in a coffee-brewing competition and in a round of Jeopardy! that he hacked in his own favor. Unlike Pitr, he has no ambitions for world domination per se, but he is a friend of Hastur and Cthulhu (based on the H. P. Lovecraft Mythos characters). He was hired in September 2000, having formerly worked for Hewlett-Packard with ten years' experience.[# 11] It is his habit, unlike the other techs, to dress to a somewhat professional degree; when he first came to work, Smiling Man, the head accountant, expressed shock at the fact that Sid was wearing his usual blue business suit.[# 12] He is also a fan of old technology, having grown up in the age of TECO, PDP-6es, the original VT100, FORTRAN, the IBM 3270, and the IBM 5150; one could, except for the decent taste in clothing, categorise him as a Real Programmer. He was once a cannabis smoker,[# 13] in contrast with the rest of the technological staff, who prefer caffeine (Greg in the form of cola, Miranda in the form of espresso). This had the unfortunate effect of causing lung cancer, and he was treated by an oncologist.[# 14] He has since recovered from the cancer and was told he has another 20 years or so to live.
Pearl is Sid Dabster's beautiful daughter. The character appeared for the first time in the strip of Aug. 30, 2001.[# 15] Pearl is often seen getting the better of the boys. She is the antagonist of Miranda, and occasionally the object of Pitr's affections, much to the chagrin of Sid. Some people (both in the strip and in the real world) wrongly assume that the character was named after the scripting language Perl; while this may have been the author's intention, within the strip's timeline the name is shown to be an error based on wordplay.[# 16]
The Smiling Man is the company comptroller. He is in charge of accounts, finances, and expenditures. He smiles all day for no reason. This in itself is enough to terrify most normal human beings (even via phone). However, the Dust Puppy, the "Evilphish", a delirious Stef, and a consultant in a purple suit have managed to get him to stop smiling first. His favourite wallpaper is a large, complex, and utterly meaningless spreadsheet.
Stef is the strip's Corporate Sales Manager. He runs most of the marketing efforts within the firm, often selling things before they exist. He is a stereotypical marketer, with an enormous ego and a condescending attitude toward the techies; they detest him and frequently retaliate with pranks. He sucks at Quake, even once managing to die at the startup screen in Quake III Arena;[# 17] in addition, he manages to die by falling into lava in any game that contains it, including games where it is normally impossible to step in said lava.[# 18] Although he admires Microsoft and frequently defends their marketing tactics, infuriating the techies, he has a real problem with Microsoft salesmen, probably because they make much more money than he does. His attitude towards women is decidedly chauvinist; he lusts after Miranda who will not have anything to do with him. Stef is definitely gormless, as demonstrated on January 14, 2005.[# 19]
In a 2008 article, reviewer Eric Burns said that as best he could tell, Frazer had produced strips seven days a week, without missing an update for, at that time, almost 11 years.[3] Frazer would draw several days' worth of comics in advance, but the Sunday comic – based on current events and in color – was always drawn for immediate release and did not relate to the regular storyline.
The website for User Friendly included other features such as Link of the Day and Iambe Intimate & Interactive, a weekly editorial written under the pseudonym "Iambe".[7]
Frazer started writing User Friendly in 1997.[2] According to Frazer, he started cartooning at age 12. He had tried to get into cartooning through syndicates with a strip called Dust Puppies, but it was rejected by six syndicates. Later, while working at an ISP, he drew some cartoons which his co-workers enjoyed. He then drew a month's worth of cartoons and posted them online. After that, he quit his job to work on the comic.[15]
Writer Xavier Xerxes said that in the very early days of webcomics, Frazer was probably one of the bigger success stories and was one of the first to make a living from a webcomic.[16] Eric Burns attributed initial success of the comic to the makeup of the early internet, saying, "In 1997, a disproportionate number of internet users... were in the I.T. Industry. When User Friendly began gathering momentum, there wasn't just little to nothing like it on the web -- it appealed and spoke to a much larger percentage of the internet reading audience than mainstream society would support outside of that filter.... in the waning years of the 20th Century, it was a safe bet that if someone had an internet connection in the first place, they'd find User Friendly funny."[3]
On April Fools' Day 1999, the site appeared to be shut down permanently after a third party sued.[17][18] In future years, the April 1st cartoon referenced back to the disruption that was caused.[19][20]
In a 2001 interview, Frazer said that he was not handling fame well, and pretended not to be famous in order to keep his life normal. He said that his income came from sponsorship, advertising, and sales of printed collections.[15] These compilations have been published by O'Reilly Media.[21]
Since 2000, User Friendly had been published in a variety of newspapers, including The National Post in Canada and the Linux Journal magazine.[22]
In a 2001 interview, Frazer estimated that about 40% of strip ideas came from reader submissions, and occasionally he would get submissions that he would use "unmodified".[23] He also said that he educated himself on the operating system BSD in order to make informed jokes about it.[15]
In 2009, Frazer was found to be copying punchlines found in the MetaFilter community. After one poster found a comment on MetaFilter that was similar to a User Friendly comic, users searched and found several other examples.[24] Initially, Frazer posted on MetaFilter saying "I get a flurry of submissions and one-liners every week, and I haven't checked many of them at all, because I rarely had to in the past" but later admitted that he had taken quotes directly from the site.[25][24] On his website, Frazer said, "I offered no attribution or asked for permission [for these punchlines], over the last couple of years I've infringed on the expression of ideas of some (who I think are) clever people. Plagiarized. My hypocrisy seems to know no bounds, as an infamous gunman was once heard saying. I sincerely apologize to my readers and to the original authors. I offer no excuses and accept full blame and responsibility. As a result, I'll be modifying the cartoons in question. No, it won't happen again. Yes, I've immersed myself in mild acid."[26]
While published books still contain at least one cartoon with a punchline taken from MetaFilter, Frazer has removed these cartoons from the website or updated them to quote and credit the source of the punchlines, and fans searched through the archives to check that none of the other punchlines had been plagiarized.[27]
The strip went on hiatus from June 1, 2009[# 20] to August 2009 for personal reasons.[# 21] In this period, previous strips were re-posted.
A second hiatus lasted from December 1, 2009 until August 1, 2010, again for personal reasons. New cartoons, supplied by the community as part of a competition, started to appear as of August 2, 2010.[# 22]
From November 1, 2010 through November 21, 2010, Illiad published a special "Remembrance Day story arc", and stated that it is "vague and at this point random" what will happen to the strip afterwards, that "going daily again is highly unlikely", but that "there are still many stories that I want to tell through UF, over time".[# 23] Since then, previous comics have been re-posted on a daily basis.
After the de facto end of new content, three one-off comics commemorating special occasions were published:
On 24 February 2022, Illiad announced that the website would be shut down soon, "at the end of this month. If not, it won't be much later than that."[28]
At approximately midnight PST on the evening of 28 February 2022, the website was shut down.
User Friendly has received mixed reviews over the years.
In a 2008 review, Eric Burns of Websnark called it a "damned good comic strip", but felt it had several problems. Burns felt that the strip had not evolved in several years, saying "his strip is exactly the same today as it was in 1998... the same characters, the same humor, the same punchlines, the same punching bags as before." Burns said that characters learn no lessons, and that "[i]f Frazer uses copy and paste to put his characters in, he's been using the same clip art for the entire 21st century." Burns also criticised the stereotypical depiction of idiotic computer users as outdated. But fundamentally, Burns found the strip funny, saying anyone who had worked IT would likely find it funny, and even those who had not will find something in it amusing. Burns felt that some criticism of User Friendly came from seeing it as general webcomic, rather than one targeted at a specific audience of old-school IT geeks, and he considered that the targeted approach was a good business model.[3]
Writer T Campbell declared JD Frazer's work as "ow[ing] a heavy debt to [Scott] Adams, but his 'nerdcore' was a purer sort: the jokes were often for nerds ONLY-- NO NON-TECHIES ALLOWD [sic]." He continued "He wasn't the first webtoonist to target his audience so precisely, but he was the first to do it on a daily schedule, and that kind of single-minded dedication is something most techies could appreciate. User Friendly set the tone for nerdcore strips to follow."[29] Time magazine called User Friendly "a strip in the wry, verbal vein of Doonesbury...the humor is a combination of pop culture references and inside jokes straight outta the IT department."[30] The strip was among the most notable of a wave of similar strips, including Help Desk by Christopher B. Wright,[31] General Protection Fault by Jeffrey T. Darlington,[32] The PC Weenies by Krishna Sadasivam,[33] Geek & Poke by Oliver Widder,[34] Working Daze by John Zakour, and The Joy of Tech by Liza Schmalcel and Bruce Evans.
Comic writer and artist Joe Zabel said that User Friendly "may be one of the earliest webcomics manifestation of the use of templates... renderings of the characters that are cut and pasted directly into the comic strip... I think the main significance of User Friendly is that in 1997 it was really, really crude in every respect. Horrible artwork, terrible storyline, zilch characterization, and extremely dull, obvious jokes. And yet it was a smash hit! I think this demonstrates that the public will embrace just about anything if it's free and the circumstances are right. And it indicates that new internet users of the time were really hungry, downright starving, for entertainment.... his current work [speaking in 2005] is comparatively slick and professional. But I suspect that his early work had enormous influence, because it encouraged thousands of people with few skills and little talent to jump into the webcomics field." Zabel also credited User Friendly's success in part to its "series mascot", Dust Puppy, saying that "the popular gag-a-day cartoons almost always have some kind of mascot."[29]
The webcomic Penny Arcade produced a strip in 1999 just to criticise Frazer, saying "people will pass up steak once a week for crap every day."[35] They also criticized the commercialism of the enterprise.[36] By contrast, CNET included it in a 2007 list of "sidesplitting tech comics",[33] Mashable included it in a 2009 list of the 20 best webcomics,[2] and Polygon listed it as one of the most influential webcomics of all time in 2018.[37] It has also been noted by FromDev,[38] Brainz,[39] RiskOptics,[40] DondeQ2,[41] and Pingdom.[31] CBR.com concluded the comic had aged poorly in a 2023 rundown.[42]
Lawrence I. Charters appreciated the nature of the titles used for the published books.[43] Francis Glassborow cited the specificity of the humour,[44] which also led Retro Activity to find the strip "difficult to recommend", along with the limited art style.[45] Mike Kaltschnee also mentioned the weakness of the art, but was impressed at Illiad maintaining publication of a strip every day.[46] "Webcomics: The Influence and Continuation of the Comix Revolution" described how the strip represented the counter-cultural aspects of the open-source software movement.[47] Dustin Puryear observed how the strip represents the conflicts between the computer literate and newer, less informed users.[48] Christine Moellenberndt wrote about the online community spawned around the comic strip.[49]
User Friendly – Die Deutsche Dialekt-Ausgabe (translated into several German dialects) ISBN 3-89721-380-X
Ten Years of UserFriendly.Org, Manning, Dec 2008, ISBN 978-1-935182-12-2: a 1000-page hardback collection of every strip, with some comments by the author.