image optimization strategies: Image optimization strategies outline best practices for reducing file sizes, enhancing quality, and improving metadata. Following these strategies results in faster load times, improved user experience, and better search rankings.
image optimization testing tools: Image optimization testing tools measure file sizes, load times, and display quality across devices. Using these tools helps identify areas for improvement, ensuring that your images perform well and enhance overall site performance.
image optimization tools: Using image optimization tools, such as compression software and format converters, streamlines the process of reducing file sizes and improving quality. These tools help ensure that images load quickly and look great across all devices.
image optimization tutorials: Tutorials provide step-by-step guidance for compressing, resizing, and enhancing images. Following these tutorials ensures that your images are fully optimized, resulting in faster load times and better search rankings.
image optimization workflow: Establishing a clear image optimization workflow streamlines the process of compressing, resizing, and adding metadata. A well-defined workflow helps maintain consistency, improves efficiency, and ensures better image performance.
image optimization workflow automation: Workflow automation streamlines image compression, resizing, and metadata updates. By automating these tasks, you save time, maintain consistent quality, and ensure that your images remain optimized at all times.
image performance benchmarks: Establishing performance benchmarks for images helps measure how well they load and render on various devices. Benchmarks provide a reference point for identifying issues, refining optimization efforts, and improving overall site performance.
image performance monitoring: Image performance monitoring tracks how well images load and render on different devices. By analyzing performance data, you can identify bottlenecks, improve load speeds, and ensure a smooth user experience.
image quality settings: Adjusting image quality settings allows you to balance clarity and file size. Finding the right balance keeps images sharp while minimizing load times.
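As a minimal illustration of that trade-off, the sketch below uses the Pillow imaging library (an assumption; any library with quality controls works similarly) to re-encode an image at several JPEG quality settings and compare the resulting sizes; photo.jpg is a placeholder filename:

```python
# Sketch: compare JPEG file sizes at different quality settings using Pillow.
# Assumes Pillow is installed (pip install Pillow); "photo.jpg" is a placeholder input.
from io import BytesIO

from PIL import Image

image = Image.open("photo.jpg")

for quality in (95, 85, 75, 60):
    buffer = BytesIO()
    # optimize=True lets the encoder spend extra effort shrinking the file.
    image.save(buffer, format="JPEG", quality=quality, optimize=True)
    print(f"quality={quality}: {buffer.tell() / 1024:.1f} KiB")
```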
image scaling: Image scaling involves adjusting the size of images to match their intended display dimensions. By scaling images correctly, you prevent oversized files from slowing down your website and ensure a smooth user experience.
image scaling for retina displays: Scaling images for retina displays ensures that they appear sharp and clear on high-resolution screens. By preparing images specifically for retina-quality displays, you improve visual quality and user satisfaction on modern devices.
image SEO: Image SEO involves optimizing image filenames, alt text, captions, and metadata to improve search engine rankings. Effective image SEO increases visibility in image searches and drives more organic traffic to your website.
image SEO best practices: Image SEO best practices include adding descriptive alt text, optimizing filenames, using appropriate dimensions, and compressing files. Following these guidelines improves search visibility and helps attract more organic traffic to your site.
image sitemaps: An image sitemap is a file that lists the images on a website, helping search engines discover and index them. By submitting an image sitemap, you increase the visibility of your images in search results, driving more traffic to your site.
image size reduction: Image size reduction involves scaling down image dimensions to fit the intended display area. Smaller image dimensions result in faster load times, better user experience, and improved search rankings.
image usability: Image usability focuses on selecting images that are relevant, high-quality, and aligned with the content they accompany. By ensuring that images enhance rather than detract from the user experience, you increase engagement and improve search visibility.
Industry directories for links: Industry directories for links are specialized platforms that list businesses within a particular field. Submitting your site to relevant industry directories helps establish authority, improve local search visibility, and earn quality backlinks.
industry-specific keywords: Industry-specific keywords focus on terms that are unique to your niche. By targeting these phrases, you can attract a highly relevant audience and build authority within your field.
Influencer link building: Influencer link building involves partnering with industry influencers who can share your content and link to your site. Their endorsements not only improve your backlink profile but also increase your brand's credibility and reach.
Influencer outreach for links: Influencer outreach for links involves building relationships with well-known figures in your industry who can share your content and provide backlinks. By leveraging their authority, you can increase your site's credibility and reach a larger audience.
Infographic link building: Infographic link building uses visually engaging, data-driven graphics to earn backlinks. When other websites feature your infographic and link back to your site as the source, you gain valuable backlinks and boost your content's reach.
A Web crawler, sometimes called a spider or spiderbot and often shortened to crawler, is an Internet bot that systematically browses the World Wide Web and that is typically operated by search engines for the purpose of Web indexing (web spidering).[1]
Web search engines and some other websites use Web crawling or spidering software to update their web content or indices of other sites' web content. Web crawlers copy pages for processing by a search engine, which indexes the downloaded pages so that users can search more efficiently.
Crawlers consume resources on visited systems and often visit sites unprompted. Issues of schedule, load, and "politeness" come into play when large collections of pages are accessed. Mechanisms exist for public sites not wishing to be crawled to make this known to the crawling agent. For example, including a robots.txt file can request bots to index only parts of a website, or nothing at all.
The number of Internet pages is extremely large; even the largest crawlers fall short of making a complete index. For this reason, search engines struggled to give relevant search results in the early years of the World Wide Web, before 2000. Today, relevant results are given almost instantly.
Crawlers can validate hyperlinks and HTML code. They can also be used for web scraping and data-driven programming.
A web crawler is also known as a spider,[2] an ant, an automatic indexer,[3] or (in the FOAF software context) a Web scutter.[4]
A Web crawler starts with a list of URLs to visit. Those first URLs are called the seeds. As the crawler visits these URLs, by communicating with web servers that respond to those URLs, it identifies all the hyperlinks in the retrieved web pages and adds them to the list of URLs to visit, called the crawl frontier. URLs from the frontier are recursively visited according to a set of policies. If the crawler is performing archiving of websites (or web archiving), it copies and saves the information as it goes. The archives are usually stored in such a way they can be viewed, read and navigated as if they were on the live web, but are preserved as 'snapshots'.[5]
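A minimal sketch of this seed/frontier cycle is shown below, assuming the third-party requests library and deliberately ignoring the politeness, robots.txt handling, and persistence a real crawler needs:

```python
# Minimal breadth-first crawl sketch: seeds -> frontier -> fetch -> extract links.
# Illustrative only; omits politeness, robots.txt, error handling and storage.
from collections import deque
from html.parser import HTMLParser
from urllib.parse import urljoin

import requests


class LinkExtractor(HTMLParser):
    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(urljoin(self.base_url, value))


def crawl(seeds, max_pages=50):
    frontier = deque(seeds)   # the crawl frontier, seeded with the starting URLs
    visited = set()
    while frontier and len(visited) < max_pages:
        url = frontier.popleft()
        if url in visited:
            continue
        visited.add(url)
        try:
            response = requests.get(url, timeout=10)
        except requests.RequestException:
            continue
        extractor = LinkExtractor(url)
        extractor.feed(response.text)
        frontier.extend(link for link in extractor.links if link not in visited)
    return visited


if __name__ == "__main__":
    print(crawl(["https://example.com/"]))
```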
The archive is known as the repository and is designed to store and manage the collection of web pages. The repository only stores HTML pages and these pages are stored as distinct files. A repository is similar to any other system that stores data, like a modern-day database. The only difference is that a repository does not need all the functionality offered by a database system. The repository stores the most recent version of the web page retrieved by the crawler.[citation needed]
The large volume implies the crawler can only download a limited number of the Web pages within a given time, so it needs to prioritize its downloads. The high rate of change can imply the pages might have already been updated or even deleted.
The number of possible URLs crawled being generated by server-side software has also made it difficult for web crawlers to avoid retrieving duplicate content. Endless combinations of HTTP GET (URL-based) parameters exist, of which only a small selection will actually return unique content. For example, a simple online photo gallery may offer three options to users, as specified through HTTP GET parameters in the URL. If there exist four ways to sort images, three choices of thumbnail size, two file formats, and an option to disable user-provided content, then the same set of content can be accessed with 48 different URLs, all of which may be linked on the site. This mathematical combination creates a problem for crawlers, as they must sort through endless combinations of relatively minor scripted changes in order to retrieve unique content.
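A quick, hypothetical enumeration of the gallery example (the parameter names are invented for illustration) makes the 4 × 3 × 2 × 2 = 48 combinations explicit:

```python
# Sketch: enumerate the 48 URL variants implied by the gallery example above.
from itertools import product
from urllib.parse import urlencode

options = {
    "sort": ["name", "date", "size", "rating"],   # 4 ways to sort images
    "thumb": ["small", "medium", "large"],        # 3 thumbnail sizes
    "format": ["jpg", "png"],                     # 2 file formats
    "user_content": ["on", "off"],                # toggle user-provided content
}

urls = [
    "http://example.com/gallery?" + urlencode(dict(zip(options, combo)))
    for combo in product(*options.values())
]
print(len(urls))  # 48 distinct URLs, all pointing at essentially the same content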
As Edwards et al. noted, "Given that the bandwidth for conducting crawls is neither infinite nor free, it is becoming essential to crawl the Web in not only a scalable, but efficient way, if some reasonable measure of quality or freshness is to be maintained."[6] A crawler must carefully choose at each step which pages to visit next.
The behavior of a Web crawler is the outcome of a combination of policies:[7] a selection policy that states which pages to download, a re-visit policy that states when to check for changes to the pages, a politeness policy that states how to avoid overloading Web sites, and a parallelization policy that states how to coordinate distributed web crawlers.
Given the current size of the Web, even large search engines cover only a portion of the publicly available part. A 2009 study showed even large-scale search engines index no more than 40–70% of the indexable Web;[8] a previous study by Steve Lawrence and Lee Giles showed that no search engine indexed more than 16% of the Web in 1999.[9] As a crawler always downloads just a fraction of the Web pages, it is highly desirable for the downloaded fraction to contain the most relevant pages and not just a random sample of the Web.
This requires a metric of importance for prioritizing Web pages. The importance of a page is a function of its intrinsic quality, its popularity in terms of links or visits, and even of its URL (the latter is the case of vertical search engines restricted to a single top-level domain, or search engines restricted to a fixed Web site). Designing a good selection policy has an added difficulty: it must work with partial information, as the complete set of Web pages is not known during crawling.
Junghoo Cho et al. made the first study on policies for crawling scheduling. Their data set was a 180,000-page crawl from the stanford.edu domain, in which a crawling simulation was done with different strategies.[10] The ordering metrics tested were breadth-first, backlink count, and partial PageRank calculations. One of the conclusions was that if the crawler wants to download pages with high PageRank early during the crawling process, then the partial PageRank strategy is better, followed by breadth-first and backlink count. However, these results are for just a single domain. Cho also wrote his PhD dissertation at Stanford on web crawling.[11]
Najork and Wiener performed an actual crawl on 328 million pages, using breadth-first ordering.[12] They found that a breadth-first crawl captures pages with high Pagerank early in the crawl (but they did not compare this strategy against other strategies). The explanation given by the authors for this result is that "the most important pages have many links to them from numerous hosts, and those links will be found early, regardless of on which host or page the crawl originates."
Abiteboul designed a crawling strategy based on an algorithm called OPIC (On-line Page Importance Computation).[13] In OPIC, each page is given an initial sum of "cash" that is distributed equally among the pages it points to. It is similar to a PageRank computation, but it is faster and is only done in one step. An OPIC-driven crawler downloads first the pages in the crawling frontier with higher amounts of "cash". Experiments were carried out on a 100,000-page synthetic graph with a power-law distribution of in-links. However, there was no comparison with other strategies nor experiments in the real Web.
Boldi et al. used simulation on subsets of the Web of 40 million pages from the .it domain and 100 million pages from the WebBase crawl, testing breadth-first against depth-first, random ordering and an omniscient strategy. The comparison was based on how well PageRank computed on a partial crawl approximates the true PageRank value. Some visits that accumulate PageRank very quickly (most notably, breadth-first and the omniscient visit) provide very poor progressive approximations.[14][15]
Baeza-Yates et al. used simulation on two subsets of the Web of 3 million pages from the .gr and .cl domains, testing several crawling strategies.[16] They showed that both the OPIC strategy and a strategy that uses the length of the per-site queues are better than breadth-first crawling, and that it is also very effective to use a previous crawl, when it is available, to guide the current one.
Daneshpajouh et al. designed a community-based algorithm for discovering good seeds.[17] Their method crawls web pages with high PageRank from different communities in fewer iterations than a crawl starting from random seeds. Good seeds can be extracted from a previously crawled Web graph using this method, and a new crawl using these seeds can be very effective.
A crawler may only want to seek out HTML pages and avoid all other MIME types. In order to request only HTML resources, a crawler may make an HTTP HEAD request to determine a Web resource's MIME type before requesting the entire resource with a GET request. To avoid making numerous HEAD requests, a crawler may examine the URL and only request a resource if the URL ends with certain characters such as .html, .htm, .asp, .aspx, .php, .jsp, .jspx or a slash. This strategy may cause numerous HTML Web resources to be unintentionally skipped.
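A sketch of this filtering strategy is shown below, assuming the third-party requests library; the suffix list mirrors the one in the paragraph above and the URLs are placeholders:

```python
# Sketch: decide whether to fetch a resource, using a URL-suffix heuristic first
# and an HTTP HEAD request as a fallback to check the reported MIME type.
from urllib.parse import urlparse

import requests

HTML_SUFFIXES = (".html", ".htm", ".asp", ".aspx", ".php", ".jsp", ".jspx", "/")


def looks_like_html(url):
    path = urlparse(url).path
    return path == "" or path.lower().endswith(HTML_SUFFIXES)


def is_html(url):
    if looks_like_html(url):
        return True
    try:
        head = requests.head(url, allow_redirects=True, timeout=10)
    except requests.RequestException:
        return False
    return head.headers.get("Content-Type", "").startswith("text/html")


print(is_html("https://example.com/articles/index.html"))  # suffix match, no HEAD needed
print(is_html("https://example.com/downloads/report.pdf"))  # falls back to a HEAD request
```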
Some crawlers may also avoid requesting any resources that have a "?" in them (are dynamically produced) in order to avoid spider traps that may cause the crawler to download an infinite number of URLs from a Web site. This strategy is unreliable if the site uses URL rewriting to simplify its URLs.
Crawlers usually perform some type of URL normalization in order to avoid crawling the same resource more than once. The term URL normalization, also called URL canonicalization, refers to the process of modifying and standardizing a URL in a consistent manner. There are several types of normalization that may be performed including conversion of URLs to lowercase, removal of "." and ".." segments, and adding trailing slashes to the non-empty path component.[18]
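A sketch of these normalization steps using only the Python standard library (the exact rule set varies between crawlers):

```python
# Sketch of URL normalization: lowercase the scheme and host, resolve "." and
# ".." path segments, and preserve the trailing slash on directory-style paths.
import posixpath
from urllib.parse import urlsplit, urlunsplit


def normalize(url):
    parts = urlsplit(url)
    scheme = parts.scheme.lower()
    netloc = parts.netloc.lower()
    path = posixpath.normpath(parts.path) if parts.path else "/"
    if parts.path.endswith("/") and not path.endswith("/"):
        path += "/"   # normpath strips the trailing slash; restore it
    return urlunsplit((scheme, netloc, path, parts.query, parts.fragment))


print(normalize("HTTP://Example.COM/a/b/../c/./d/"))
# -> http://example.com/a/c/d/
```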
Some crawlers intend to download/upload as many resources as possible from a particular web site. So path-ascending crawler was introduced that would ascend to every path in each URL that it intends to crawl.[19] For example, when given a seed URL of http://llama.org/hamster/monkey/page.html, it will attempt to crawl /hamster/monkey/, /hamster/, and /. Cothey found that a path-ascending crawler was very effective in finding isolated resources, or resources for which no inbound link would have been found in regular crawling.
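A sketch of how a path-ascending crawler could derive those ancestor paths from the seed URL in the example (illustrative only):

```python
# Sketch: enumerate the ancestor paths a path-ascending crawler would visit
# for the seed URL used in the example above.
from urllib.parse import urlsplit, urlunsplit


def ascending_paths(url):
    parts = urlsplit(url)
    segments = [s for s in parts.path.split("/") if s]
    paths = []
    # Drop the last segment (the page itself), then ascend one level at a time.
    for depth in range(len(segments) - 1, -1, -1):
        path = "/" + "/".join(segments[:depth])
        if not path.endswith("/"):
            path += "/"
        paths.append(urlunsplit((parts.scheme, parts.netloc, path, "", "")))
    return paths


print(ascending_paths("http://llama.org/hamster/monkey/page.html"))
# ['http://llama.org/hamster/monkey/', 'http://llama.org/hamster/', 'http://llama.org/']
```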
The importance of a page for a crawler can also be expressed as a function of the similarity of a page to a given query. Web crawlers that attempt to download pages that are similar to each other are called focused crawlers or topical crawlers. The concepts of topical and focused crawling were first introduced by Filippo Menczer[20][21] and by Soumen Chakrabarti et al.[22]
The main problem in focused crawling is that in the context of a Web crawler, we would like to be able to predict the similarity of the text of a given page to the query before actually downloading the page. A possible predictor is the anchor text of links; this was the approach taken by Pinkerton[23] in the first web crawler of the early days of the Web. Diligenti et al.[24] propose using the complete content of the pages already visited to infer the similarity between the driving query and the pages that have not been visited yet. The performance of focused crawling depends mostly on the richness of links in the specific topic being searched, and focused crawling usually relies on a general Web search engine for providing starting points.
An example of focused crawlers are academic crawlers, which crawl free-access academic-related documents, such as citeseerxbot, the crawler of the CiteSeerX search engine. Other academic search engines include Google Scholar and Microsoft Academic Search. Because most academic papers are published in PDF format, such crawlers are particularly interested in crawling PDF, PostScript, and Microsoft Word files, including their zipped formats. Because of this, general open-source crawlers, such as Heritrix, must be customized to filter out other MIME types, or a middleware is used to extract these documents and import them into the focused crawl database and repository.[25] Identifying whether these documents are academic or not is challenging and can add significant overhead to the crawling process, so this is performed as a post-crawling process using machine learning or regular expression algorithms. These academic documents are usually obtained from the home pages of faculties and students or from the publication pages of research institutes. Because academic documents make up only a small fraction of all web pages, good seed selection is important in boosting the efficiency of these web crawlers.[26] Other academic crawlers may download plain text and HTML files that contain metadata of academic papers, such as titles, papers, and abstracts. This increases the overall number of papers, but a significant fraction may not provide free PDF downloads.
Another type of focused crawler is the semantic focused crawler, which makes use of domain ontologies to represent topical maps and link Web pages with relevant ontological concepts for selection and categorization purposes.[27] In addition, ontologies can be automatically updated in the crawling process. Dong et al.[28] introduced such an ontology-learning-based crawler using a support-vector machine to update the content of ontological concepts when crawling Web pages.
The Web has a very dynamic nature, and crawling a fraction of the Web can take weeks or months. By the time a Web crawler has finished its crawl, many events could have happened, including creations, updates, and deletions.
From the search engine's point of view, there is a cost associated with not detecting an event, and thus having an outdated copy of a resource. The most-used cost functions are freshness and age.[29]
Freshness: This is a binary measure that indicates whether the local copy is accurate or not. The freshness of a page p in the repository at time t is defined as:
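A standard formulation (following Cho and Garcia-Molina's definitions; the symbol $F_p(t)$ is notation introduced here for illustration) is:

$$F_p(t) = \begin{cases} 1 & \text{if } p \text{ is equal to the local copy at time } t,\\ 0 & \text{otherwise.} \end{cases}$$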
Age: This is a measure that indicates how outdated the local copy is. The age of a page p in the repository, at time t is defined as:
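Correspondingly, with $A_p(t)$ denoting the age of page $p$ at time $t$ (again, notation introduced here for illustration):

$$A_p(t) = \begin{cases} 0 & \text{if } p \text{ is not modified at time } t,\\ t - \text{modification time of } p & \text{otherwise.} \end{cases}$$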
Coffman et al. worked with a definition of the objective of a Web crawler that is equivalent to freshness, but use a different wording: they propose that a crawler must minimize the fraction of time pages remain outdated. They also noted that the problem of Web crawling can be modeled as a multiple-queue, single-server polling system, on which the Web crawler is the server and the Web sites are the queues. Page modifications are the arrival of the customers, and switch-over times are the interval between page accesses to a single Web site. Under this model, mean waiting time for a customer in the polling system is equivalent to the average age for the Web crawler.[30]
The objective of the crawler is to keep the average freshness of pages in its collection as high as possible, or to keep the average age of pages as low as possible. These objectives are not equivalent: in the first case, the crawler is just concerned with how many pages are outdated, while in the second case, the crawler is concerned with how old the local copies of pages are.
Two simple re-visiting policies were studied by Cho and Garcia-Molina:[31] a uniform policy, in which all pages in the collection are re-visited with the same frequency regardless of their rates of change, and a proportional policy, in which pages that change more frequently are re-visited more often.
In both cases, the repeated crawling order of pages can be done either in a random or a fixed order.
Cho and Garcia-Molina proved the surprising result that, in terms of average freshness, the uniform policy outperforms the proportional policy in both a simulated Web and a real Web crawl. Intuitively, the reasoning is that, as web crawlers have a limit to how many pages they can crawl in a given time frame, (1) they will allocate too many new crawls to rapidly changing pages at the expense of less frequently updating pages, and (2) the freshness of rapidly changing pages lasts for a shorter period than that of less frequently changing pages. In other words, a proportional policy allocates more resources to crawling frequently updating pages, but experiences less overall freshness time from them.
To improve freshness, the crawler should penalize the elements that change too often.[32] The optimal re-visiting policy is neither the uniform policy nor the proportional policy. The optimal method for keeping average freshness high includes ignoring the pages that change too often, and the optimal for keeping average age low is to use access frequencies that monotonically (and sub-linearly) increase with the rate of change of each page. In both cases, the optimal is closer to the uniform policy than to the proportional policy: as Coffman et al. note, "in order to minimize the expected obsolescence time, the accesses to any particular page should be kept as evenly spaced as possible".[30] Explicit formulas for the re-visit policy are not attainable in general, but they are obtained numerically, as they depend on the distribution of page changes. Cho and Garcia-Molina show that the exponential distribution is a good fit for describing page changes,[32] while Ipeirotis et al. show how to use statistical tools to discover parameters that affect this distribution.[33] The re-visiting policies considered here regard all pages as homogeneous in terms of quality ("all pages on the Web are worth the same"), something that is not a realistic scenario, so further information about the Web page quality should be included to achieve a better crawling policy.
Crawlers can retrieve data much quicker and in greater depth than human searchers, so they can have a crippling impact on the performance of a site. If a single crawler is performing multiple requests per second and/or downloading large files, a server can have a hard time keeping up with requests from multiple crawlers.
As noted by Koster, the use of Web crawlers is useful for a number of tasks, but comes with a price for the general community.[34] The costs of using Web crawlers include: network resources, as crawlers require considerable bandwidth and operate with a high degree of parallelism over long periods; server overload, especially if the frequency of accesses to a given server is too high; poorly written crawlers, which can crash servers or routers or download pages they cannot handle; and personal crawlers that, if deployed by too many users, can disrupt networks and Web servers.
A partial solution to these problems is the robots exclusion protocol, also known as the robots.txt protocol, which is a standard for administrators to indicate which parts of their Web servers should not be accessed by crawlers.[35] This standard does not include a suggestion for the interval of visits to the same server, even though this interval is the most effective way of avoiding server overload. Recently, commercial search engines like Google, Ask Jeeves, MSN and Yahoo! Search have become able to use an extra "Crawl-delay:" parameter in the robots.txt file to indicate the number of seconds to delay between requests.
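Python's standard library ships a robots.txt parser that also reads the non-standard Crawl-delay directive mentioned above; a brief sketch, with example.com as a placeholder host:

```python
# Sketch: honour robots.txt rules and the optional Crawl-delay directive.
# example.com is a placeholder host; a real crawler would cache the parsed file.
from urllib.robotparser import RobotFileParser

robots = RobotFileParser()
robots.set_url("https://example.com/robots.txt")
robots.read()                                    # fetch and parse the file

user_agent = "MyResearchCrawler"
if robots.can_fetch(user_agent, "https://example.com/private/report.html"):
    delay = robots.crawl_delay(user_agent) or 1  # fall back to 1 second if unset
    print(f"allowed; waiting {delay}s between requests")
else:
    print("disallowed by robots.txt")
```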
The first proposed interval between successive pageloads was 60 seconds.[36] However, if pages were downloaded at this rate from a website with more than 100,000 pages over a perfect connection with zero latency and infinite bandwidth, it would take more than 2 months to download that entire Web site alone; also, only a fraction of the resources from that Web server would be used.
Cho uses 10 seconds as an interval for accesses,[31] and the WIRE crawler uses 15 seconds as the default.[37] The MercatorWeb crawler follows an adaptive politeness policy: if it took t seconds to download a document from a given server, the crawler waits for 10t seconds before downloading the next page.[38] Dill et al. use 1 second.[39]
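A sketch of the MercatorWeb-style adaptive policy described above, assuming the third-party requests library:

```python
# Sketch of adaptive politeness: if a page took t seconds to download, wait
# 10 * t seconds before issuing the next request to the same server.
import time

import requests


def polite_fetch(urls, factor=10):
    for url in urls:
        start = time.monotonic()
        response = requests.get(url, timeout=30)
        elapsed = time.monotonic() - start
        yield url, response.status_code
        time.sleep(factor * elapsed)   # back off in proportion to server effort


for url, status in polite_fetch(["https://example.com/", "https://example.com/about"]):
    print(url, status)
```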
For those using Web crawlers for research purposes, a more detailed cost-benefit analysis is needed and ethical considerations should be taken into account when deciding where to crawl and how fast to crawl.[40]
Anecdotal evidence from access logs shows that access intervals from known crawlers vary between 20 seconds and 3–4 minutes. It is worth noticing that even when being very polite, and taking all the safeguards to avoid overloading Web servers, some complaints from Web server administrators are received. Sergey Brin and Larry Page noted in 1998, "... running a crawler which connects to more than half a million servers ... generates a fair amount of e-mail and phone calls. Because of the vast number of people coming on line, there are always those who do not know what a crawler is, because this is the first one they have seen."[41]
A parallel crawler is a crawler that runs multiple processes in parallel. The goal is to maximize the download rate while minimizing the overhead from parallelization and to avoid repeated downloads of the same page. To avoid downloading the same page more than once, the crawling system requires a policy for assigning the new URLs discovered during the crawling process, as the same URL can be found by two different crawling processes.
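One common assignment policy (not necessarily the one used by any particular crawler discussed here) is to hash the host name so that each host maps to exactly one crawling process; a sketch:

```python
# Sketch: assign discovered URLs to crawler processes by hashing the host name,
# so each host is handled by exactly one process and the same URL is never
# downloaded by two different processes. The worker count is arbitrary.
import hashlib
from urllib.parse import urlparse

NUM_WORKERS = 4


def assign_worker(url):
    host = urlparse(url).netloc.lower()
    digest = hashlib.sha1(host.encode("utf-8")).hexdigest()
    return int(digest, 16) % NUM_WORKERS


for u in ["http://a.example/page1", "http://a.example/page2", "http://b.example/"]:
    print(u, "-> worker", assign_worker(u))
```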
A crawler must not only have a good crawling strategy, as noted in the previous sections, but it should also have a highly optimized architecture.
Shkapenyuk and Suel noted that:[42]
While it is fairly easy to build a slow crawler that downloads a few pages per second for a short period of time, building a high-performance system that can download hundreds of millions of pages over several weeks presents a number of challenges in system design, I/O and network efficiency, and robustness and manageability.
Web crawlers are a central part of search engines, and details on their algorithms and architecture are kept as business secrets. When crawler designs are published, there is often an important lack of detail that prevents others from reproducing the work. There are also emerging concerns about "search engine spamming", which prevent major search engines from publishing their ranking algorithms.
While most website owners are keen to have their pages indexed as broadly as possible to have a strong presence in search engines, web crawling can also have unintended consequences and lead to a compromise or data breach if a search engine indexes resources that should not be publicly available, or pages revealing potentially vulnerable versions of software.
Apart from standard web application security recommendations, website owners can reduce their exposure to opportunistic hacking by only allowing search engines to index the public parts of their websites (with robots.txt) and explicitly blocking them from indexing transactional parts (login pages, private pages, etc.).
Web crawlers typically identify themselves to a Web server by using the User-agent field of an HTTP request. Web site administrators typically examine their Web servers' logs and use the user agent field to determine which crawlers have visited the web server and how often. The user agent field may include a URL where the Web site administrator may find out more information about the crawler. Examining Web server logs is a tedious task, and therefore some administrators use tools to identify, track and verify Web crawlers. Spambots and other malicious Web crawlers are unlikely to place identifying information in the user agent field, or they may mask their identity as a browser or other well-known crawler.
Web site administrators prefer Web crawlers to identify themselves so that they can contact the owner if needed. In some cases, crawlers may be accidentally trapped in a crawler trap or they may be overloading a Web server with requests, and the owner needs to stop the crawler. Identification is also useful for administrators that are interested in knowing when they may expect their Web pages to be indexed by a particular search engine.
A vast amount of web pages lie in the deep or invisible web.[43] These pages are typically only accessible by submitting queries to a database, and regular crawlers are unable to find these pages if there are no links that point to them. Google's Sitemaps protocol and mod oai[44] are intended to allow discovery of these deep-Web resources.
Deep web crawling also multiplies the number of web links to be crawled. Some crawlers only take some of the URLs in <a href="URL"> form. In some cases, such as the Googlebot, Web crawling is done on all text contained inside hypertext content, tags, or text.
Strategic approaches may be taken to target deep Web content. With a technique called screen scraping, specialized software may be customized to automatically and repeatedly query a given Web form with the intention of aggregating the resulting data. Such software can be used to span multiple Web forms across multiple Websites. Data extracted from the results of one Web form submission can be taken and applied as input to another Web form thus establishing continuity across the Deep Web in a way not possible with traditional web crawlers.[45]
Pages built on AJAX are among those causing problems to web crawlers. Google has proposed a format of AJAX calls that their bot can recognize and index.[46]
There are a number of "visual web scraper/crawler" products available on the web which will crawl pages and structure data into columns and rows based on the user's requirements. One of the main differences between a classic and a visual crawler is the level of programming ability required to set up a crawler. The latest generation of "visual scrapers" removes most of the programming skill needed to be able to program and start a crawl to scrape web data.
The visual scraping/crawling method relies on the user "teaching" a piece of crawler technology, which then follows patterns in semi-structured data sources. The dominant method for teaching a visual crawler is by highlighting data in a browser and training columns and rows. While the technology is not new (for example, it was the basis of Needlebase, which was bought by Google as part of a larger acquisition of ITA Labs[47]), there is continued growth and investment in this area by investors and end-users.[citation needed]
The following is a list of published crawler architectures for general-purpose crawlers (excluding focused web crawlers), with a brief description that includes the names given to the different components and outstanding features:
The following web crawlers are available for a price:
[Screenshot of Google Maps in a web browser]

| Type of site | Web mapping |
|---|---|
| Available in | 74 languages: Afrikaans, Azerbaijani, Indonesian, Malay, Bosnian, Catalan, Czech, Danish, German (Germany), Estonian, English (United States), Spanish (Spain), Spanish (Latin America), Basque, Filipino, French (France), Galician, Croatian, Zulu, Icelandic, Italian, Swahili, Latvian, Lithuanian, Hungarian, Dutch, Norwegian, Uzbek, Polish, Portuguese (Brazil), Portuguese (Portugal), Romanian, Albanian, Slovak, Slovenian, Finnish, Swedish, Vietnamese, Turkish, Greek, Bulgarian, Kyrgyz, Kazakh, Macedonian, Mongolian, Russian, Serbian, Ukrainian, Georgian, Armenian, Hebrew, Urdu, Arabic, Persian, Amharic, Nepali, Hindi, Marathi, Bengali, Punjabi, Gujarati, Tamil, Telugu, Kannada, Malayalam, Sinhala, Thai, Lao, Burmese, Khmer, Korean, Japanese, Simplified Chinese, Traditional Chinese |
| Owner | Google |
| URL | google |
| Commercial | Yes |
| Registration | Optional, included with a Google Account |
| Launched | February 8, 2005 |
| Current status | Active |
| Written in | C++ (back-end), JavaScript, XML, Ajax (UI) |
Google Maps is a web mapping platform and consumer application offered by Google. It offers satellite imagery, aerial photography, street maps, 360° interactive panoramic views of streets (Street View), real-time traffic conditions, and route planning for traveling by foot, car, bike, air (in beta) and public transportation. As of 2020, Google Maps was being used by over one billion people every month around the world.[1]
Google Maps began as a C++ desktop program developed by brothers Lars and Jens Rasmussen in Australia at Where 2 Technologies. In October 2004, the company was acquired by Google, which converted it into a web application. After additional acquisitions of a geospatial data visualization company and a real-time traffic analyzer, Google Maps was launched in February 2005.[2] The service's front end utilizes JavaScript, XML, and Ajax. Google Maps offers an API that allows maps to be embedded on third-party websites,[3] and offers a locator for businesses and other organizations in numerous countries around the world. Google Map Maker allowed users to collaboratively expand and update the service's mapping worldwide but was discontinued from March 2017. However, crowdsourced contributions to Google Maps were not discontinued as the company announced those features would be transferred to the Google Local Guides program,[4] although users that are not Local Guides can still contribute.
Google Maps' satellite view is a "top-down" or bird's-eye view; most of the high-resolution imagery of cities is aerial photography taken from aircraft flying at 800 to 1,500 feet (240 to 460 m), while most other imagery is from satellites.[5] Much of the available satellite imagery is no more than three years old and is updated on a regular basis, according to a 2011 report.[6] Google Maps previously used a variant of the Mercator projection, and therefore could not accurately show areas around the poles.[7] In August 2018, the desktop version of Google Maps was updated to show a 3D globe. It is still possible to switch back to the 2D map in the settings.
Google Maps for mobile devices was first released in 2006; the latest versions feature GPS turn-by-turn navigation along with dedicated parking assistance features. By 2013, it was found to be the world's most popular smartphone app, with over 54% of global smartphone owners using it.[8] In 2017, the app was reported to have two billion users on Android, along with several other Google services including YouTube, Chrome, Gmail, Search, and Google Play.
Google Maps first started as a C++ program designed by two Danish brothers, Lars and Jens Eilstrup Rasmussen, and Noel Gordon and Stephen Ma, at the Sydney-based company Where 2 Technologies, which was founded in early 2003. The program was initially designed to be separately downloaded by users, but the company later pitched the idea for a purely Web-based product to Google management, changing the method of distribution.[9] In October 2004, the company was acquired by Google Inc.[10] where it transformed into the web application Google Maps. The Rasmussen brothers, Gordon and Ma joined Google at that time.
In the same month, Google acquired Keyhole, a geospatial data visualization company (with investment from the CIA), whose marquee application suite, Earth Viewer, emerged as the Google Earth application in 2005 while other aspects of its core technology were integrated into Google Maps.[11] In September 2004, Google acquired ZipDash, a company that provided real-time traffic analysis.[12]
The launch of Google Maps was first announced on the Google Blog on February 8, 2005.[13]
In September 2005, in the aftermath of Hurricane Katrina, Google Maps quickly updated its satellite imagery of New Orleans to allow users to view the extent of the flooding in various parts of that city.[14][15]
As of 2007, Google Maps was equipped with a miniature view with a draggable rectangle that denotes the area shown in the main viewport, and "Info windows" for previewing details about locations on maps.[16] As of 2024, this feature had been removed (likely several years prior).
On November 28, 2007, Google Maps for Mobile 2.0 was released.[17][18][19] It featured a beta version of a "My Location" feature, which uses the GPS / Assisted GPS location of the mobile device, if available, supplemented by determining the nearest wireless networks and cell sites.[18][19] The software looks up the location of the cell site using a database of known wireless networks and sites.[20][21] By triangulating the different signal strengths from cell transmitters and then using their location property (retrieved from the database), My Location determines the user's current location.[22]
On September 23, 2008, coinciding with the announcement of the first commercial Android device, Google announced that a Google Maps app had been released for its Android operating system.[23][24]
In October 2009, Google replaced Tele Atlas as their primary supplier of geospatial data in the US version of Maps and used their own data.[25]
On April 19, 2011, Map Maker was added to the American version of Google Maps, allowing any viewer to edit and add changes to Google Maps. This provides Google with local map updates almost in real-time instead of waiting for digital map data companies to release more infrequent updates.
On January 31, 2012, Google, due to offering its Maps for free, was found guilty of abusing the dominant position of its Google Maps application and ordered by a court to pay a fine and damages to Bottin Cartographer, a French mapping company.[26] This ruling was overturned on appeal.[27]
In June 2012, Google started mapping the UK's rivers and canals in partnership with the Canal and River Trust. The company has stated that "it would update the program during the year to allow users to plan trips which include locks, bridges and towpaths along the 2,000 miles of river paths in the UK."[28]
In December 2012, the Google Maps application was separately made available in the App Store, after Apple removed it from its default installation of the mobile operating system version iOS 6 in September 2012.[29]
On January 29, 2013, Google Maps was updated to include a map of North Korea.[30] As of May 3, 2013, Google Maps recognizes Palestine as a country, instead of redirecting to the Palestinian territories.[31]
In August 2013, Google Maps removed the Wikipedia Layer, which provided links to Wikipedia content about locations shown in Google Maps using Wikipedia geocodes.[32]
On April 12, 2014, Google Maps was updated to reflect the annexation of Ukrainian Crimea by Russia. Crimea is shown as the Republic of Crimea in Russia and as the Autonomous Republic of Crimea in Ukraine. All other versions show a dotted disputed border.[33]
In April 2015, on a map near the Pakistani city of Rawalpindi, the imagery of the Android logo urinating on the Apple logo was added via Map Maker and appeared on Google Maps. The vandalism was soon removed and Google publicly apologized.[34] However, as a result, Google disabled user moderation on Map Maker, and on May 12, disabled editing worldwide until it could devise a new policy for approving edits and avoiding vandalism.[35]
On April 29, 2015, users of the classic Google Maps were forwarded to the new Google Maps with the option to be removed from the interface.[36]
On July 14, 2015, the Chinese name for Scarborough Shoal was removed after a petition from the Philippines was posted on Change.org.[37]
On June 27, 2016, Google rolled out new satellite imagery worldwide sourced from Landsat 8, comprising over 700 trillion pixels of new data.[38] In September 2016, Google Maps acquired mapping analytics startup Urban Engines.[39]
In 2016, the Government of South Korea offered Google conditional access to the country's geographic database – access that already allows indigenous Korean mapping providers high-detail maps. Google declined the offer, as it was unwilling to accept restrictions on reducing the quality around locations the South Korean Government felt were sensitive (see restrictions on geographic data in South Korea).[40]
On October 16, 2017, Google Maps was updated with accessible imagery of several planets and moons such as Titan, Mercury, and Venus, as well as direct access to imagery of the Moon and Mars.[41][42]
In May 2018, Google announced major changes to the API structure starting June 11, 2018. This change consolidated the 18 different endpoints into three services and merged the basic and premium plans into one pay-as-you-go plan.[43] This meant a 1400% price increase for users on the basic plan, with only six weeks of notice, which caused a harsh reaction within the developer community.[44] In June, Google postponed the change date to July 16, 2018.
In August 2018, Google Maps designed its overall view (when zoomed out completely) into a 3D globe dropping the Mercator projection that projected the planet onto a flat surface.[45]
In January 2019, Google Maps added speed trap and speed camera alerts as reported by other users.[46][47]
On October 17, 2019, Google Maps was updated to include incident reporting, resembling a functionality in Waze which was acquired by Google in 2013.[48]
In December 2019, Incognito mode was added, allowing users to enter destinations without saving entries to their Google accounts.[49]
In February 2020, Maps received a 15th anniversary redesign.[50] It notably added a brand-new app icon, which now resembles the original icon in 2005.
On September 23, 2020, Google announced a COVID-19 Layer update for Google Maps, which is designed to offer seven-day average data of the total COVID-19-positive cases per 100,000 people in the area selected on the map. It also features a label indicating the rise and fall in the number of cases.[51]
In January 2021, Google announced that it would be launching a new feature displaying COVID-19 vaccination sites.[52]
In January 2021, Google announced updates to the route planner that would accommodate drivers of electric vehicles. Routing would take into account the type of vehicle, vehicle status including current charge, and the locations of charging stations.[53]
In June 2022, Google Maps added a layer displaying air quality for certain countries.[54]
In September 2022, Google removed the COVID-19 Layer from Google Maps due to lack of usage of the feature.[55]
Google Maps provides a route planner,[56] allowing users to find available directions through driving, public transportation, walking, or biking.[57] Google has partnered globally with over 800 public transportation providers to adopt GTFS (General Transit Feed Specification), making the data available to third parties.[58][59] The app can indicate users' transit routes, thanks to an October 2019 update; the incognito mode and eyes-free walking navigation features were released earlier.[60] A July 2020 update provided bike share routes.[61]
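GTFS feeds are ZIP archives of CSV tables; a sketch of reading the stops table from such a feed, assuming the standard columns are present and feed.zip is a placeholder path:

```python
# Sketch: read stop names and coordinates from the stops.txt table inside a
# GTFS feed (a ZIP archive of CSV files). "feed.zip" is a placeholder path.
import csv
import io
import zipfile

with zipfile.ZipFile("feed.zip") as feed:
    with feed.open("stops.txt") as stops_file:
        reader = csv.DictReader(io.TextIOWrapper(stops_file, encoding="utf-8-sig"))
        for row in reader:
            print(row.get("stop_id"), row.get("stop_name"),
                  row.get("stop_lat"), row.get("stop_lon"))
```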
In February 2024, Google Maps started rolling out glanceable directions for its Android and iOS apps. The feature allows users to track their journey from their device's lock screen.[62][63]
In 2007, Google began offering traffic data as a colored overlay on top of roads and motorways to represent the speed of vehicles on particular roads. Crowdsourcing is used to obtain the GPS-determined locations of a large number of cellphone users, from which live traffic maps are produced.[64][65][66]
Google has stated that the speed and location information it collects to calculate traffic conditions is anonymous.[67] Options available in each phone's settings allow users not to share information about their location with Google Maps.[68] Google stated, "Once you disable or opt out of My Location, Maps will not continue to send radio information back to Google servers to determine your handset's approximate location".[69][failed verification]
On May 25, 2007, Google released Google Street View, a feature of Google Maps providing 360° panoramic street-level views of various locations. On the date of release, the feature only included five cities in the U.S. It has since expanded to thousands of locations around the world. In July 2009, Google began mapping college campuses and surrounding paths and trails.
Street View garnered much controversy after its release because of privacy concerns about the uncensored nature of the panoramic photographs, although the views are only taken on public streets.[70][71] Since then, Google has blurred faces and license plates through automated facial recognition.[72][73][74]
In late 2014, Google launched Google Underwater Street View, including 2,300 kilometres (1,400 mi) of the Australian Great Barrier Reef in 3D. The images are taken by special cameras which turn 360 degrees and take shots every 3 seconds.[75]
In 2017, in both Google Maps and Google Earth, Street View navigation of the International Space Station interior spaces became available.
Google Maps has incorporated[when?] 3D models of hundreds of cities in over 40 countries from Google Earth into its satellite view. The models were developed using aerial photogrammetry techniques.[76][77]
At the I/O 2022 event, Google announced Immersive View, a feature of Google Maps which would involve composite 3D images generated from Street View and aerial images of locations using AI, complete with synchronous information. It was to be initially in five cities worldwide, with plans to add it to other cities later on.[78] The feature was previewed in September 2022 with 250 photorealistic aerial 3D images of landmarks,[79] and was fully launched in February 2023.[80] An expansion of Immersive View to routes was announced at Google I/O 2023,[81] and was launched in October 2023 for 15 cities globally.[82]
The feature uses predictive modelling and neural radiance fields to scan Street View and aerial images to generate composite 3D imagery of locations, including both exteriors and interiors, and routes, including driving, walking or cycling, as well as generate synchronous information and forecasts up to a month ahead from historical and environmental data about both such as weather, traffic and busyness.
Immersive View has been available in the following locations:[citation needed]
Google added icons of city attractions, in a similar style to Apple Maps, on October 3, 2019. In the first stage, such icons were added to 9 cities.[83]
In December 2009, Google introduced a new view consisting of 45° angle aerial imagery, offering a "bird's-eye view" of cities. The first cities available were San Jose and San Diego. This feature was initially available only to developers via the Google Maps API.[84] In February 2010, it was introduced as an experimental feature in Google Maps Labs.[85] In July 2010, 45° imagery was made available in Google Maps in select cities in South Africa, the United States, Germany and Italy.[86]
In February 2024, Google Maps incorporated a small weather icon on the top left corner of the Android and iOS mobile apps, giving access to weather and air quality index details.[87]
Previously called Search with Live View, Lens In Maps identifies shops, restaurants, transit stations and other street features with a phone's camera and places relevant information and a category pin on top, like closing/opening times, current busyness, pricing and reviews using AI and augmented reality. The feature, if available on the device, can be accessed through tapping the Lens icon in the search bar. It was expanded to 50 new cities in October 2023 in its biggest expansion yet, after initially being released in late 2022 in Los Angeles, San Francisco, New York, London, and Paris.[88][89] Lens in Maps shares features with Live View, which also displays information relating to street features while guiding a user to a selected destination with virtual arrows, signs and guidance.[90]
Google collates business listings from multiple on-line and off-line sources. To reduce duplication in the index, Google's algorithm combines listings automatically based on address, phone number, or geocode,[91] but sometimes information for separate businesses will be inadvertently merged with each other, resulting in listings inaccurately incorporating elements from multiple businesses.[92] Google allows business owners to create and verify their own business data through Google Business Profile (GBP), formerly Google My Business (GMB).[93] Owners are encouraged to provide Google with business information including address, phone number, business category, and photos.[94] Google has staff in India who check and correct listings remotely as well as support businesses with issues.[95] Google also has teams on the ground in most countries that validate physical addresses in person.[96] In May 2024, Google announced it would discontinue the chat feature in Google Business Profile. Starting July 15, 2024, new chat conversations would be disabled, and by July 31, 2024, all chat functionalities would end.[97]
Google Maps can be manipulated by businesses that are not physically located in the area in which they record a listing. There are cases of people abusing Google Maps to overtake their competition by placing unverified listings on online directory sites, knowing the information will roll across to Google (duplicate sites). The people who update these listings do not use a registered business name. They place keywords and location details on their Google Maps business title, which can overtake credible business listings. In Australia in particular, genuine companies and businesses are noticing a trend of fake business listings in a variety of industries.[98]
Genuine business owners can also optimize their business listings to gain greater visibility in Google Maps, through a type of search engine marketing called local search engine optimization.[99]
In March 2011, indoor maps were added to Google Maps, giving users the ability to navigate themselves within buildings such as airports, museums, shopping malls, big-box stores, universities, transit stations, and other public spaces (including underground facilities). Google encourages owners of public facilities to submit floor plans of their buildings in order to add them to the service.[100] Map users can view different floors of a building or subway station by clicking on a level selector that is displayed near any structures which are mapped on multiple levels.
My Maps is a feature in Google Maps launched in April 2007 that enables users to create custom maps for personal use or sharing. Users can add points, lines, shapes, notes and images on top of Google Maps using a WYSIWYG editor.[101] An Android app for My Maps, initially released in March 2013 under the name Google Maps Engine Lite, was available until its removal from the Play Store in October 2021.[102][103][104]
Google Local Guides is a volunteer program launched by Google Maps[105] to enable users to contribute to Google Maps when registered. It sometimes provides them additional perks and benefits for their collaboration. Users can achieve Level 1 to 10, and be awarded with badges. The program is partially a successor to Google Map Maker as features from the former program became integrated into the website and app.[106]
The program consists of adding reviews, photos, basic information, and videos; and correcting information such as wheelchair accessibility.[107][108] Adding reviews, photos, videos, new places, new roads or providing useful information gives points to the users.[109] The level of users is upgraded when they get a certain amount of points.[110][111] Starting with Level 4, a star is shown near the avatar of the user.[111]
Earth Timelapse, released in April 2021, is a program in which users can see how the Earth has changed over the last 37 years. Google combined 15 million satellite images (roughly ten quadrillion pixels) to create 35 global, cloud-free images for the program.[112]
If a user shares their location with Google, Timeline summarises this location for each day on a Timeline map.[113] Timeline estimates the mode of travel used to move between places and will also show photos taken at that location. In June 2024, Google started progressively removing access to the timeline on web browsers, with the information instead being stored on a local device.[114][115]
As the user drags the map, the grid squares are downloaded from the server and inserted into the page. When a user searches for a business, the results are downloaded in the background for insertion into the side panel and map; the page is not reloaded. A hidden iframe with form submission is used because it preserves browser history. Like many other Google web applications, Google Maps uses JavaScript extensively.[116] The site also uses protocol buffers for data transfer rather than JSON, for performance reasons.
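The article does not specify Google's internal tiling scheme, but "slippy" web maps of this kind conventionally address their grid squares by zoom level and x/y tile indices in a Web Mercator projection; a sketch of that convention:

```python
# Sketch: convert a latitude/longitude to Web Mercator tile indices at a given
# zoom level (the tile-addressing convention used by most slippy web maps).
import math


def latlon_to_tile(lat_deg, lon_deg, zoom):
    lat_rad = math.radians(lat_deg)
    n = 2 ** zoom                                  # tiles per axis at this zoom
    x = int((lon_deg + 180.0) / 360.0 * n)
    y = int((1.0 - math.asinh(math.tan(lat_rad)) / math.pi) / 2.0 * n)
    return x, y


# Example: the tile covering central Sydney at zoom level 12.
print(latlon_to_tile(-33.8688, 151.2093, 12))
```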
The version of Google Street View for classic Google Maps required Adobe Flash.[117] In October 2011, Google announced MapsGL, a WebGL version of Maps with better renderings and smoother transitions.[118] Indoor maps use .JPG, .PNG, .PDF, .BMP, or .GIF files for floor plans.[119]
Users who are logged into a Google Account can save locations so that they are overlaid on the map with various colored "pins" whenever they browse the application. These "Saved places" can be organized into default groups or user named groups and shared with other users. "Starred places" is one default group example. It previously automatically created a record within the now-discontinued product Google Bookmarks.
The Google Maps terms and conditions[120] state that usage of material from Google Maps is regulated by Google Terms of Service[121] and some additional restrictions. Google has either purchased local map data from established companies, or has entered into lease agreements to use copyrighted map data.[122] The owner of the copyright is listed at the bottom of zoomed maps. For example, street maps in Japan are leased from Zenrin. Street maps in China are leased from AutoNavi.[123] Russian street maps are leased from Geocentre Consulting and Tele Atlas. Data for North Korea is sourced from the companion project Google Map Maker.
Street map overlays, in some areas, may not match up precisely with the corresponding satellite images. The street data may be entirely erroneous, or simply out of date: "The biggest challenge is the currency of data, the authenticity of data," said Google Earth representative Brian McClendon. As a result, in March 2008 Google added a feature to edit the locations of houses and businesses.[124][125]
Restrictions have been placed on Google Maps through the apparent censoring of locations deemed potential security threats. In some cases the area of redaction is for specific buildings, but in other cases, such as Washington, D.C.,[126] the restriction is to use outdated imagery.
Google Maps API, now called Google Maps Platform, hosts about 17 different APIs, which are themed under the following categories: Maps, Places and Routes.[127]
After the success of reverse-engineered mashups such as chicagocrime.org and housingmaps.com, Google launched the Google Maps API in June 2005[128] to allow developers to integrate Google Maps into their websites. It was a free service that did not require an API key until June 2018 (changes went into effect on July 16), when it was announced that an API key linked to a Google Cloud account with billing enabled would be required to access the API.[129] The API currently does not contain ads, but Google states in their terms of use that they reserve the right to display ads in the future.[130]
By using the Google Maps API, it is possible to embed Google Maps into an external website, onto which site-specific data can be overlaid.[131] Although initially only a JavaScript API, the Maps API was expanded to include an API for Adobe Flash applications (but this has been deprecated), a service for retrieving static map images, and web services for performing geocoding, generating driving directions, and obtaining elevation profiles. Over 1,000,000[132] web sites use the Google Maps API, making it the most heavily used web application development API.[133] In September 2011, Google announced it would deprecate the Google Maps API for Flash.[134]
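As an illustration of the static map image service mentioned above, the sketch below builds a request URL following Google's publicly documented Static Maps service; the endpoint and parameter names are taken from that documentation and may change, and YOUR_API_KEY is a placeholder:

```python
# Sketch: build a request URL for a static map image. The endpoint and
# parameter names follow Google's publicly documented Static Maps service
# and may change; YOUR_API_KEY is a placeholder.
from urllib.parse import urlencode

params = {
    "center": "Sydney Opera House, Sydney NSW",
    "zoom": 15,
    "size": "640x400",
    "maptype": "roadmap",
    "key": "YOUR_API_KEY",
}
url = "https://maps.googleapis.com/maps/api/staticmap?" + urlencode(params)
print(url)   # fetching this URL (with a valid key) returns a map image
```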
The Google Maps API was free for commercial use, provided that the site on which it was being used was publicly accessible, did not charge for access, and was not generating more than 25,000 map accesses a day.[135][136] Sites that did not meet these requirements could purchase the Google Maps API for Business.[137]
As of June 21, 2018, Google increased the prices of the Maps API and began requiring a billing profile.[138]
Due to restrictions on geographic data in China, Google Maps must partner with a Chinese digital map provider in order to legally show Chinese map data. Since 2006, this partner has been AutoNavi.[123]
Within China, the State Council mandates that all maps of China use the GCJ-02 coordinate system, which is offset from the WGS-84 system used in most of the world. google.cn/maps (formerly Google Ditu) uses the GCJ-02 system for both its street maps[139] and satellite imagery.[140] google.com/maps also uses GCJ-02 data for the street map, but uses WGS-84 coordinates for satellite imagery,[141] causing the so-called China GPS shift problem.
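The offset is not a constant shift. As an illustration only, the sketch below follows the widely circulated, community reverse-engineered approximation of the WGS-84 to GCJ-02 conversion (as seen in open-source "eviltransform"-style ports); the constants and polynomial terms come from those unofficial ports, not from any published specification, so treat the numbers as indicative rather than exact.

```typescript
// Unofficial, community-derived approximation of the WGS-84 -> GCJ-02 offset
// (as found in open-source "eviltransform"-style ports). Not an official algorithm.
const A = 6378245.0;                 // semi-major axis used by these ports
const EE = 0.00669342162296594323;   // eccentricity squared used by these ports

function transformLat(x: number, y: number): number {
  let ret = -100.0 + 2.0 * x + 3.0 * y + 0.2 * y * y + 0.1 * x * y + 0.2 * Math.sqrt(Math.abs(x));
  ret += (20.0 * Math.sin(6.0 * x * Math.PI) + 20.0 * Math.sin(2.0 * x * Math.PI)) * 2.0 / 3.0;
  ret += (20.0 * Math.sin(y * Math.PI) + 40.0 * Math.sin((y / 3.0) * Math.PI)) * 2.0 / 3.0;
  ret += (160.0 * Math.sin((y / 12.0) * Math.PI) + 320.0 * Math.sin((y * Math.PI) / 30.0)) * 2.0 / 3.0;
  return ret;
}

function transformLng(x: number, y: number): number {
  let ret = 300.0 + x + 2.0 * y + 0.1 * x * x + 0.1 * x * y + 0.1 * Math.sqrt(Math.abs(x));
  ret += (20.0 * Math.sin(6.0 * x * Math.PI) + 20.0 * Math.sin(2.0 * x * Math.PI)) * 2.0 / 3.0;
  ret += (20.0 * Math.sin(x * Math.PI) + 40.0 * Math.sin((x / 3.0) * Math.PI)) * 2.0 / 3.0;
  ret += (150.0 * Math.sin((x / 12.0) * Math.PI) + 300.0 * Math.sin((x / 30.0) * Math.PI)) * 2.0 / 3.0;
  return ret;
}

// Convert WGS-84 coordinates to the GCJ-02 grid used by Chinese street maps.
function wgs84ToGcj02(lat: number, lng: number): { lat: number; lng: number } {
  const dLat0 = transformLat(lng - 105.0, lat - 35.0);
  const dLng0 = transformLng(lng - 105.0, lat - 35.0);
  const radLat = (lat / 180.0) * Math.PI;
  let magic = Math.sin(radLat);
  magic = 1 - EE * magic * magic;
  const sqrtMagic = Math.sqrt(magic);
  const dLat = (dLat0 * 180.0) / (((A * (1 - EE)) / (magic * sqrtMagic)) * Math.PI);
  const dLng = (dLng0 * 180.0) / ((A / sqrtMagic) * Math.cos(radLat) * Math.PI);
  return { lat: lat + dLat, lng: lng + dLng };
}

// Example: at a point in Beijing the apparent shift is typically a few hundred metres.
console.log(wgs84ToGcj02(39.9042, 116.4074));
```

Because satellite imagery on google.com/maps stays in WGS-84 while the street layer uses GCJ-02, applying (or failing to apply) this kind of offset is what produces the visible misalignment described above.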
Frontier alignments also present some differences between google.cn/maps and google.com/maps. On the latter, sections of the Chinese border with India and Pakistan are shown with dotted lines, indicating areas or frontiers in dispute. However, google.cn shows the Chinese frontier strictly according to Chinese claims with no dotted lines indicating the border with India and Pakistan. For example, the South Tibet region claimed by China but administered by India as a large part of Arunachal Pradesh is shown inside the Chinese frontier by google.cn, with Indian highways ending abruptly at the Chinese claim line. Google.cn also shows Taiwan and the South China Sea Islands as part of China. Google Ditu's street map coverage of Taiwan no longer omits major state organs, such as the Presidential Palace, the five Yuans, and the Supreme Court.[142][additional citation(s) needed]
google.cn/maps lacks the My Maps feature. While google.cn displays virtually all text in Chinese, google.com/maps displays most text (both user-selectable text and map labels) in English.[citation needed] This behavior is intermittent rather than consistent: sometimes text appears in English, sometimes in Chinese. The criteria for choosing which language is displayed are not publicly known.[citation needed]
In some cases Google Maps has added out-of-date or mistaken neighborhood names. In Los Angeles, the name "Brooklyn Heights" was revived from its 1870s usage[143] and "Silver Lake Heights" from its 1920s usage,[144] while in Detroit the neighborhood "Fiskhorn" became "Fishkorn".[145] Because many companies utilize Google Maps data, these previously obscure or incorrect names then gain traction; the names are often used by realtors, hotels, food delivery sites, dating sites, and news organizations.
Google has said it created its maps from third-party data, public sources, satellites, and users, but many names used have not been connected to any official record.[143][145] According to a former Google Maps employee (who was not authorized to speak publicly), users can submit changes to Google Maps, but some submissions are ruled upon by people with little local knowledge of a place, such as contractors in India. Critics maintain that names like "BoCoCa" (for the area in Brooklyn between Boerum Hill, Cobble Hill and Carroll Gardens) are "just plain puzzling" or simply made up.[145] Some names used by Google have been traced to non-professionally made maps with typographical errors that survived on Google Maps.[145]
In 2005 the Australian Nuclear Science and Technology Organisation (ANSTO) complained about the potential for terrorists to use the satellite images in planning attacks, with specific reference to the Lucas Heights nuclear reactor; however, the Australian Federal government did not support the organization's concern. At the time of the ANSTO complaint, Google had colored over some areas for security (mostly in the U.S.), such as the rooftop of the White House and several other Washington, D.C. buildings.[146][147][148]
In October 2010, Nicaraguan military commander Edén Pastora stationed Nicaraguan troops on the Isla Calero (in the delta of the San Juan River), justifying his action on the border delineation given by Google Maps. Google has since updated its data which it found to be incorrect.[149]
On January 27, 2014, documents leaked by Edward Snowden revealed that the NSA and the GCHQ intercepted Google Maps queries made on smartphones, and used them to locate the users making these queries. One leaked document, dating to 2008, stated that "[i]t effectively means that anyone using Google Maps on a smartphone is working in support of a GCHQ system."[150]
In May 2015, searches on Google Maps for offensive racial epithets for African Americans such as "nigger", "nigger king", and "nigger house" pointed the user to the White House; Google apologized for the incident.[151][152]
In December 2015, three Japanese netizens were charged with vandalism after they were found to have added an unrelated law firm's name and indecent labels to locations, such as "Nuclear test site" to the Atomic Bomb Dome and "Izumo Satya" to the Izumo Taisha.[153][154]
In February 2020, the artist Simon Weckert[155] used 99 cell phones to fake a Google Maps traffic jam.[156]
In September 2024, several schools in Taiwan and Hong Kong were altered to incorrect labels, such as "psychiatric hospitals" or "prisons". Initially, it was believed to be the result of hacker attacks. However, police later revealed that local students had carried out the prank. Google quickly corrected the mislabeled entries. Education officials in Taiwan and Hong Kong expressed concern over the incident.[157][158][159]
In August 2023, a woman driving from Alice Springs to the Harts Range Racecourse was stranded in the Central Australian desert for a night after following directions provided by Google Maps.[160][161] She later discovered that Google Maps was providing directions for the actual Harts Range instead of the rodeo. Google said it was looking into the naming of the two locations and consulting with "local and authoritative sources" to solve the issue.[160]
In February 2024, two German tourists were stranded for a week after Google Maps directed them to follow a dirt track through Oyala Thumotang National Park and their vehicle became trapped in mud.[162][163] Queensland Parks and Wildlife Service ranger Roger James said, "People should not trust Google Maps when they're travelling in remote regions of Queensland, and they need to follow the signs, use official maps or other navigational devices."[162]
In June 2019, after a crash on Peña Boulevard, Google Maps offered nearly 100 Colorado drivers an alternative route that led onto a dirt road. Rain had turned the road to mud, and nearly 100 vehicles became trapped.[164][161] Google said in a statement, "While we always work to provide the best directions, issues can arise due to unforeseen circumstances such as weather. We encourage all drivers to follow local laws, stay attentive, and use their best judgment while driving."[164]
In September 2023, Google was sued by a North Carolina resident who alleged that Google Maps had directed her husband over the Snow Creek Bridge in Hickory the year prior, resulting in him drowning. According to the lawsuit, multiple people had notified Google about the state of the bridge, which collapsed in 2013, but Google had not updated the route information and continued to direct users over the bridge.[165][166][161] At the time of the man's death, the barriers placed to block access to the bridge had been vandalized.[167][168]
In November 2023, a hiker was rescued by helicopter on the backside of Mount Fromme in Vancouver. North Shore Rescue stated on its Facebook page that the hiker had followed a non-existent hiking trail on Google Maps. This was also the second hiker in two months to require rescuing after following the same trail. The fake trail has since been removed from the app.[169][170]
Also in November 2023, Google apologized after users were directed through desert roads after parts of Interstate 15 were closed due to a dust storm.[171] Drivers became stranded after following the suggested detour route, which was a "bumpy dirt trail".[172] Following the incident, Google stated that Google Maps would "no longer route drivers traveling between Las Vegas and Barstow down through those roads."[171]
In 2020, a teenage motorist froze to death and his passenger survived with severe frostbite after Google Maps led them onto a shorter but abandoned section of the R504 Kolyma Highway, where their Toyota Chaser became disabled.[173]
In 2024, three men from Uttar Pradesh died when their car fell from an under-construction bridge into the Ramganga River; they had been following Google Maps driving directions, which misdirected them onto the bridge.[174][175]
In February 2025, as a response to Donald Trump's Executive Order 14172, the Gulf of Mexico was renamed to "Gulf of America" for US users and "Gulf of Mexico (Gulf of America)" elsewhere, except for Mexico itself where it remained the Gulf of Mexico. The decision received criticism, with Mexican president Claudia Sheinbaum asking Google to reconsider its decision.[176] Google subsequently blocked and deleted negative reviews of the gulf after the name change occurred.[177][178]
Google Latitude was a feature that let users share their physical locations with other people. This service was based on Google Maps, specifically on mobile devices. There was an iGoogle widget for desktops and laptops as well.[179] Some concerns were expressed about the privacy issues raised by the use of the service.[180] On August 9, 2013, this service was discontinued,[181] and on March 22, 2017, Google incorporated the features from Latitude into the Google Maps app.[182]
In areas where Google Map Maker was available, for example, much of Asia, Africa, Latin America and Europe as well as the United States and Canada, anyone who logged into their Google account could directly improve the map by fixing incorrect driving directions, adding biking trails, or adding a missing building or road. General map errors in Australia, Austria, Belgium, Denmark, France, Liechtenstein, Netherlands, New Zealand, Norway, South Africa, Switzerland, and the United States could be reported using the Report a Problem link in Google Maps and would be updated by Google.[183] For areas where Google used Tele Atlas data, map errors could be reported using Tele Atlas map insight.[184]
If imagery was missing, outdated, misaligned, or generally incorrect, one could notify Google through their contact request form.[185]
In November 2016, Google announced the discontinuation of Google Map Maker as of March 2017.[186]
| Platform | Version | Release date |
|---|---|---|
| Android | 25.10.04 (Build 732665141) | 7 March 2025[187][188] |
| Wear OS | 25.09.00 (Build 730474011) | 25 February 2025[187][189] |
| iOS | 25.10.02 | 7 March 2025[190] |
| Android Go[a] (discontinued) | 161.1 | 13 October 2023[191][192] |
| Android (Beta) | 11.143.0303 | 20 August 2024[193] |

Initial release: 2006. Formerly also available for Java ME, Symbian, and Windows Mobile.
Google Maps is available as a mobile app for the Android and iOS mobile operating systems. The first mobile version of Google Maps (then known as Google Local for Mobile) was launched in beta in November 2005 for mobile platforms supporting J2ME.[194][195][196] It was released as Google Maps for Mobile in 2006.[197] In 2007 it came preloaded on the first iPhone in a deal with Apple.[198] A version specifically for Windows Mobile was released in February 2007[199] and the Symbian app was released in November 2007.[200]
Version 2.0 of Google Maps Mobile was announced at the end of 2007, with a stand-out My Location feature that found the user's location using cell towers, without needing GPS.[201][202][203] In September 2008, Google Maps was released for and preloaded on Google's own new platform, Android.[204][205]
Up until iOS 6, the built-in maps application on the iOS operating system was powered by Google Maps. However, with the announcement of iOS 6 in June 2012, Apple announced that they had created their own Apple Maps mapping service,[206] which officially replaced Google Maps when iOS 6 was released on September 19, 2012.[207] However, at launch, Apple Maps received significant criticism from users due to inaccuracies, errors and bugs.[208][209] One day later, The Guardian reported that Google was preparing its own Google Maps app,[210] which was released on December 12, 2012.[211][212] Within two days, the application had been downloaded over ten million times.[213]
The Google Maps apps for iOS and Android have many of the same features, including turn-by-turn navigation, street view, and public transit information.[214][215] Turn-by-turn navigation was originally announced by Google as a separate beta testing app exclusive to Android 2.0 devices in October 2009.[216][217] The original standalone iOS version did not support the iPad,[215] but tablet support was added with version 2.0 in July 2013.[218] An update in June 2012 for Android devices added support for offline access to downloaded maps of certain regions,[219][220] a feature that was eventually released for iOS devices, and made more robust on Android, in May 2014.[221][222]
At the end of 2015, Google Maps announced its new offline functionality,[223] but with various limitations: a downloaded area cannot exceed 120,000 square kilometers[224][225] and requires a considerable amount of storage space.[226] In January 2017, Google added a feature, exclusive to Android, that in some U.S. cities indicates the level of difficulty in finding available parking spots,[227] and on both Android and iOS, the app can, as of an April 2017 update, remember where users parked.[228][229] In August 2017, Google Maps for Android was updated with new functionality to actively help the user find parking lots and garages close to a destination.[230] In December 2017, Google added a new two-wheeler mode to its Android app, designed for users in India, allowing for more accessibility in traffic conditions.[231][232] In 2019, the Android version introduced a feature called Live View that overlays directions onto the camera's view of the road using augmented reality.[233] Google Maps won the 2020 Webby Award for Best User Interface in the category Apps, Mobile & Voice.[234] In March 2021, Google added a feature in which users can draw missing roads.[235] In June 2022, Google implemented support for toll calculation: both the iOS and Android apps report how much the user has to pay in tolls when a route that includes toll roads is entered. The feature is available for roads in the US, India, Japan and Indonesia, with further expansion planned; according to reports, around 2,000 toll roads are covered in this initial phase.[236]
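To put the 120,000-square-kilometer offline limit mentioned above in perspective, the sketch below estimates the area of a latitude/longitude bounding box with a simple spherical-Earth formula. It is an illustrative back-of-the-envelope check, not part of any Google API, and the limit itself may have changed since it was reported.

```typescript
// Rough spherical-Earth estimate of the area covered by a lat/lng bounding box,
// used here only to illustrate the reported 120,000 km^2 offline-download limit.
const EARTH_RADIUS_KM = 6371;

function boundingBoxAreaKm2(
  latSouth: number, latNorth: number,
  lngWest: number, lngEast: number,
): number {
  const toRad = (deg: number) => (deg * Math.PI) / 180;
  // Area of a spherical "rectangle": R^2 * deltaLambda * (sin(phiNorth) - sin(phiSouth))
  const deltaLng = toRad(lngEast - lngWest);
  const band = Math.sin(toRad(latNorth)) - Math.sin(toRad(latSouth));
  return EARTH_RADIUS_KM * EARTH_RADIUS_KM * deltaLng * band;
}

const MAX_OFFLINE_AREA_KM2 = 120_000; // limit as reported around 2015

// Example: a box roughly 390 km tall and 460 km wide exceeds the limit.
const area = boundingBoxAreaKm2(-35.0, -31.5, 148.5, 153.5);
console.log(area.toFixed(0), "km^2", area > MAX_OFFLINE_AREA_KM2 ? "(too large)" : "(ok)");
```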
USA Today welcomed the application back to iOS, saying: "The reemergence in the middle of the night of a Google Maps app for the iPhone is like the return of an old friend. Only your friend, who'd gone missing for three months, comes back looking better than ever."[237] Jason Parker of CNET, calling it "the king of maps", said, "With its iOS Maps app, Google sets the standard for what mobile navigation should be and more."[238] Bree Fowler of the Associated Press compared Google's and Apple's map applications, saying: "The one clear advantage that Apple has is style. Like Apple devices, the maps are clean and clear and have a fun, pretty element to them, especially in 3-D. But when it comes down to depth and information, Google still reigns superior and will no doubt be welcomed back by its fans."[239] Gizmodo gave it a ranking of 4.5 stars, stating: "Maps Done Right".[240] According to The New York Times, Google "admits that [the iOS app is] even better than Google Maps for Android phones, which has accommodated its evolving feature set mainly by piling on menus".[241]
Google Maps' location tracking is regarded by some as a threat to users' privacy, with Dylan Tweney of VentureBeat writing in August 2014 that "Google is probably logging your location, step by step, via Google Maps", and linked users to Google's location history map, which "lets you see the path you've traced for any given day that your smartphone has been running Google Maps". Tweney then provided instructions on how to disable location history.[242] The history tracking was also noticed, and recommended disabled, by editors at CNET[243] and TechCrunch.[244] Additionally, Quartz reported in April 2014 that a "sneaky new privacy change" would have an effect on the majority of iOS users. The privacy change, an update to the Gmail iOS app that "now supports sign-in across Google iOS apps, including Maps, Drive, YouTube and Chrome", meant that Google would be able to identify users' actions across its different apps.[245]
The Android version of the app surpassed five billion installations in March 2019.[246] By November 2021, the Android app had surpassed 10 billion installations.[247]
Google Maps Go, a version of the app designed for lower-end devices, was released in beta in January 2018.[248] By September 2018, the app had over 10 million installations.[249]
The German "geo-novel" Senghor on the Rocks (2008) presents its story as a series of spreads showing a Google Maps location on the left and the story's text on the right. Annika Richterich explains that the "satellite pictures in Senghor on the Rocks illustrate the main character's travel through the West-African state of Senegal".[250]
Artists have used Google Street View in a range of ways. Emilio Vavarella's The Google Trilogy includes glitchy images and unintended portraits of the drivers of the Street View cars.[251] The Japanese band group_inou used Google Street View backgrounds to make a music video for their song "EYE".[252] The Canadian band Arcade Fire made a customized music video that used Street View to show the viewer their own childhood home.[253][254]
Google employs automatic face and license plate blurring technology to protect people's privacy in Street View, and users can request additional blurring. Aerial imagery provides much less detail and resolution.
A web directory or link directory is an online list or catalog of websites. That is, it is a directory on the World Wide Web of (all or part of) the World Wide Web. Historically, directories typically listed entries on people or businesses, and their contact information; such directories are still in use today. A web directory includes entries about websites, including links to those websites, organized into categories and subcategories.[1][2][3] Besides a link, each entry may include the title of the website, and a description of its contents. In most web directories, the entries are about whole websites, rather than individual pages within them (called "deep links"). Websites are often limited to inclusion in only a few categories.
There are two ways to find information on the Web: by searching or browsing. Web directories provide links in a structured list to make browsing easier. Many web directories combine searching and browsing by providing a search engine to search the directory. Unlike search engines, which base results on a database of entries gathered automatically by web crawler, most web directories are built manually by human editors. Many web directories allow site owners to submit their site for inclusion, and have editors review submissions for fitness.
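As an informal illustration of this structure, the hypothetical TypeScript types below model a directory as a browsable tree of categories, subcategories, and entries, with a simple keyword search layered over the same data. The names and fields are invented for the example and do not reflect any particular directory's implementation.

```typescript
// Hypothetical model of a web directory: a browsable category tree whose entries
// describe whole websites, plus a simple search over the same data.
interface DirectoryEntry {
  title: string;        // title of the website
  url: string;          // link to the website (usually the home page, not a deep link)
  description: string;  // short description of its contents
}

interface Category {
  name: string;
  entries: DirectoryEntry[];
  subcategories: Category[];
}

// Browsing walks the tree; searching flattens it and filters by keyword.
function searchDirectory(root: Category, keyword: string): DirectoryEntry[] {
  const needle = keyword.toLowerCase();
  const results: DirectoryEntry[] = [];
  const walk = (cat: Category): void => {
    for (const entry of cat.entries) {
      if (
        entry.title.toLowerCase().includes(needle) ||
        entry.description.toLowerCase().includes(needle)
      ) {
        results.push(entry);
      }
    }
    cat.subcategories.forEach(walk);
  };
  walk(root);
  return results;
}
```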
Web directories may be general in scope, or limited to particular subjects or fields. Entries may be listed for free, or by paid submission (meaning the site owner must pay to have his or her website listed).
RSS directories are similar to web directories, but contain collections of RSS feeds, instead of links to websites.
During the early development of the web, there was a list of web servers edited by Tim Berners-Lee and hosted on the CERN webserver. One historical snapshot from 1992 remains.[4] He also created the World Wide Web Virtual Library, which is the oldest web directory.[5]
Most directories are general in scope and list websites across a wide range of categories, regions and languages. But some niche directories focus on restricted regions, single languages, or specialist sectors. For example, there are shopping directories that specialize in the listing of retail e-commerce sites.
Examples of well-known general web directories are Yahoo! Directory (shut down at the end of 2014) and DMOZ (shut down on March 14, 2017). DMOZ was significant due to its extensive categorization and large number of listings and its free availability for use by other directories and search engines.[6]
However, debate over the quality of directories and databases continues, as search engines use DMOZ's content without real integration, and some experiment with clustering.
There have been many attempts to make building web directories easier, such as using automated submission of related links by script, or any number of available PHP portals and programs. More recently, social software techniques have spawned new efforts at categorization, with Amazon.com adding tagging to its product pages.
Directories have various features in their listings, often depending upon the price paid for inclusion. For example, a listing's link may carry a rel="nofollow" attribute, meaning search engines will give no weight to the link.

A human-edited directory is created and maintained by editors who add links based on the policies particular to that directory. Human-edited directories are often targeted by SEOs on the basis that links from reputable sources will improve rankings in the major search engines. Some directories may prevent search engines from rating a displayed link by using redirects, nofollow attributes, or other techniques. Many human-edited directories, including DMOZ, World Wide Web Virtual Library, Business.com and Jasmine Directory, are edited by volunteers, who are often experts in particular categories. These directories are sometimes criticized for long delays in approving submissions, or for rigid organizational structures and disputes among volunteer editors.
In response to these criticisms, some volunteer-edited directories have adopted wiki technology, to allow broader community participation in editing the directory (at the risk of introducing lower-quality, less objective entries).
Another direction taken by some web directories is the paid-for-inclusion model. This method enables the directory to offer timely inclusion for submissions and generally results in fewer listings because of the paid model. Such directories often offer additional options to further enhance listings, including featured listings and additional links to inner pages of the listed website. These options typically carry an additional fee but offer significant help and visibility to sites and/or their inside pages.
Today submission of websites to web directories is considered a common SEO (search engine optimization) technique to get back-links for the submitted website. One distinctive feature of 'directory submission' is that it cannot be fully automated like search engine submissions. Manual directory submission is a tedious and time-consuming job and is often outsourced by webmasters.
Bid for Position directories, also known as bidding web directories, are paid-for-inclusion web directories where the listings of websites in the directory are ordered according to their bid amount. They are special in that the more a person pays, the higher up the list of websites in the directory they go. With the higher listing, the website becomes more visible and increases the chances that visitors who browse the directory will click on the listing.
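The ordering rule itself is simple to model: listings are sorted by bid amount, highest first, as in the hypothetical sketch below (the field names are invented for illustration).

```typescript
// Hypothetical bid-for-position ordering: higher bids appear higher in the category.
interface BidListing {
  site: string;
  bidAmount: number; // amount paid, per period or per click, depending on the directory
}

function orderByBid(listings: BidListing[]): BidListing[] {
  return [...listings].sort((a, b) => b.bidAmount - a.bidAmount);
}

// Example: the $50 bidder outranks the $20 and $5 bidders.
console.log(orderByBid([
  { site: "example-a.com", bidAmount: 20 },
  { site: "example-b.com", bidAmount: 50 },
  { site: "example-c.com", bidAmount: 5 },
]));
```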
Web directories often make themselves accessible through more and more URLs by acquiring the domain registrations of defunct websites as soon as they expire, a practice known as domain drop catching.
Reciprocal links may not help with competitive keyword rankings, but that does not mean you should avoid them when they make sound business sense. What you should definitely avoid are manipulative reciprocal linking schemes like automated link trading programs and three-way links or four-way links.
Local SEO services in Sydney focus on optimizing a business's online presence to attract local customers. This includes claiming local business listings, optimizing Google My Business profiles, using location-specific keywords, and ensuring consistent NAP (Name, Address, Phone) information across the web.
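One common way to keep NAP details consistent and machine-readable is schema.org LocalBusiness structured data. The sketch below generates such a JSON-LD block from a single record so that every page embeds identical details; the business data is a placeholder, and this is one illustrative approach rather than a mandatory part of local SEO.

```typescript
// Illustrative: generate a schema.org LocalBusiness JSON-LD block from one
// canonical NAP (Name, Address, Phone) record, so every page embeds the same data.
const nap = {
  name: "Example Plumbing Sydney",     // placeholder business name
  streetAddress: "123 Example Street", // placeholder address
  addressLocality: "Sydney",
  addressRegion: "NSW",
  postalCode: "2000",
  addressCountry: "AU",
  telephone: "+61 2 0000 0000",        // placeholder phone number
};

const localBusinessJsonLd = {
  "@context": "https://schema.org",
  "@type": "LocalBusiness",
  name: nap.name,
  telephone: nap.telephone,
  address: {
    "@type": "PostalAddress",
    streetAddress: nap.streetAddress,
    addressLocality: nap.addressLocality,
    addressRegion: nap.addressRegion,
    postalCode: nap.postalCode,
    addressCountry: nap.addressCountry,
  },
};

// Embed on the page as <script type="application/ld+json">…</script>
const script = document.createElement("script");
script.type = "application/ld+json";
script.text = JSON.stringify(localBusinessJsonLd);
document.head.appendChild(script);
```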
SEO agencies in Sydney typically offer comprehensive services such as keyword research, technical audits, on-page and off-page optimization, content creation, and performance tracking. Their goal is to increase your site's search engine rankings and drive more targeted traffic to your website.
A local SEO agency specializes in improving a business's visibility within a specific geographic area. They focus on optimizing local citations, managing Google My Business profiles, and targeting location-based keywords to attract nearby customers.
SEO packages in Australia typically bundle essential optimization services such as keyword research, technical audits, content creation, and link building at a set price. They are designed to simplify the process, provide consistent results, and help businesses of all sizes improve their online visibility.
Content marketing and SEO work hand in hand. High-quality, relevant content attracts readers, earns backlinks, and encourages visitors to spend more time on your site, factors that all contribute to better search engine rankings. Engaging, well-optimized content also improves user experience and helps convert visitors into customers.