
BERT-Based Classification Model for Google Local Listings


This repository contains a BERT-based classification model developed with the Hugging Face library and trained on a dataset gathered by SerpApi's Google Local API. The model is designed to classify different texts extracted from Google Local Listings.

You may check out the blog post explaining the model's use case with an example: Real World Example of AI Powered Parsing.

You may also check out the open-source GitHub repository that contains the source code of a Ruby gem called `google-local-results-ai-parser`.


Usage and Classification for Parsing

The example code below shows how to query the model in Python through the Inference API for prototyping. You may use other programming languages to call the API, and you may parallelize your requests. The prototyping endpoint allows only a limited number of calls. For production purposes or large prototyping activities, consider setting up an Inference API Endpoint on Hugging Face, or a private API server for serving the model.

import requests

API_URL = "https://api-inference.huggingface.co/models/serpapi/bert-base-local-results"
headers = {"Authorization": "Bearer xxxxx"}

def query(payload):
    response = requests.post(API_URL, headers=headers, json=payload)
    return response.json()

output = query({
    "inputs": "5540 N Lamar Blvd #12, Austin, TX 78756, United States",
})
Output: address
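
If you want to classify several fields from a listing in one call, the same endpoint also accepts a list of inputs. The snippet below is a minimal sketch of that pattern, reusing the query helper above; the field values are made-up examples, and the exact shape of the batched response should be verified against the Inference API documentation.

# Hypothetical field values taken from a single listing
fields = [
    "5540 N Lamar Blvd #12, Austin, TX 78756, United States",
    "4.7",
    "Open ⋅ Closes 5 pm",
]

# One request carrying several inputs; the response is expected to be
# one list of {label, score} candidates per input text.
results = query({"inputs": fields})
for text, candidates in zip(fields, results):
    best = max(candidates, key=lambda c: c["score"])
    print(text, "->", best["label"], round(best["score"], 3))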

Strong Features

The BERT-based model excels in the following areas:

  • Differentiating difficult semantic similarities with ease
    • "No Reviews" → reviews
    • "(5K+)" → reviews
  • Handling partial texts that can be combined later
    • "Open ⋅ Closes 5 pm"
      • "Open" → hours
      • "Closes 5 pm" → hours
  • Handling vocabulary from diverse areas with ease
    • "Doctor" → type
    • "Restaurant" → type
  • Returning an assurance score for after-correction (see the sketch after this list)
    • "4.7" → rating(0.999)
  • Strong against grammatical mistakes
    • "Krebside Pickup" → service options

Parts Covered and Corresponding Keys in SerpApi Parsers

  • Type of Place: type
  • Number of Reviews: reviews
  • Phone Number: phone
  • Rating: rating
  • Address: address
  • Operating Hours: hours
  • Description or Descriptive Review: description
  • Expensiveness: expensiveness
  • Service Options: service options
  • Button Text: links
  • Years in Business: years_in_business
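
For downstream parsing, the mapping above can be kept in code next to the classifier. The dict below is a convenience sketch; the label strings on the left follow this card's examples and should be verified against the label names the model actually emits.

# Classified parts and the corresponding SerpApi parser keys
# (label strings are assumptions based on the examples in this card).
LABEL_TO_SERPAPI_KEY = {
    "type": "type",
    "reviews": "reviews",
    "phone": "phone",
    "rating": "rating",
    "address": "address",
    "hours": "hours",
    "description": "description",
    "expensiveness": "expensiveness",
    "service options": "service options",
    "button text": "links",
    "years in business": "years_in_business",
}

def to_serpapi_field(label, text):
    # Attach a classified text to its SerpApi key, skipping unknown labels.
    key = LABEL_TO_SERPAPI_KEY.get(label)
    return {key: text} if key else {}

print(to_serpapi_field("address", "5540 N Lamar Blvd #12, Austin, TX 78756"))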

Please refer to the documentation of SerpApi's Google Local API and Google Local Pack API for more details on the different parts.



Known Limitations

The model has a few limitations that should be taken into account:

  • The model does not classify the title of a place. This is because the title often contains many elements that can be easily confused with other parts, even to the human eye.
  • The label key is not covered by the model, as it can be easily handled with traditional code.
  • In some cases, button text could be classified as service options or address. However, this can be easily avoided by checking whether the text comes from a button in the traditional part of the code (see the sketch after this list). The button text label is only there to catch such edge cases.
    • "Delivery" → service options [Correct Label is button text]
    • "Share" → address [Correct Label is button text]
  • In some cases, the model may classify a portion of the description as hours if the description is about operating hours. For example:
    • "Drive through: Open β‹… Closes 12 AM"
      • "Drive through: Open" β†’ description
      • "Closes 12 AM" β†’ hours
  • In some cases, the model may classify part of the description as type. This is because some descriptions read like a type. For Example:
    • "Iconic Seattle-based coffeehouse chain" → type [Correct Label is description]
  • In some cases, the model may classify some button texts as hours. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
    • "Expand more" → hours [Correct Label is button text]
  • In some cases, the model may classify some service options as type. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
    • "Takeaway" β†’ type [Correct Label is service options]
  • In some cases, the model may classify some reviews as rating or price. This is most likely a deficiency in the training dataset, and may be resolved in the coming versions. For Example:
    • "(1.4K)" → rating [Correct Label is reviews]
    • "(1.6K)" → price [Correct Label is reviews]
  • In some cases, the model may classify some service options as description or type. The confusion with description stems from a recent change in how these fields are categorized in SerpApi keys; the training data contains labels from before that change. For Example:
    • "On-site services" → type [Correct Label is service options]
    • "Online appointments" → description [Correct Label is service options]
  • The model may be susceptible to errors on one-word entries. This is a minority of cases, and it can be mitigated with assurance scores. For Example:
    • "Sushi" → address(0.984), type(0.0493) [Correct Label is type]
    • "Diagorou 4" → address(0.999) [Correct address in the same listing]
  • The model cannot differentiate between extra parts that are extracted in SerpApi's Google Local API and Google Local Pack API. These parts are not feasible to extract via Classification Models.
  • The model is not designed for listings outside the English language.
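
The button-text confusions listed above can be handled outside the model, as the first limitation on button text suggests. The sketch below assumes the traditional parsing step already knows whether a text was extracted from a button element; that flag and the helper are illustrative assumptions, not part of the model.

# Rule-based override for the button-text limitation described above.
# `is_button` would come from the traditional parsing step, e.g. the
# text was extracted from a button or link element in the HTML.
def correct_label(text, predicted_label, is_button):
    if is_button:
        # Button texts belong to the `links` key regardless of what
        # the model predicted (e.g. "Delivery" or "Share").
        return "button text"
    return predicted_label

print(correct_label("Delivery", "service options", is_button=True))   # button text
print(correct_label("Delivery", "service options", is_button=False))  # service options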

Disclaimer

We value full transparency and painful honesty both in our internal and external communications. We believe a world with complete and open transparency is a better world.

However, while we strive for transparency, there are certain situations where sharing specific datasets may not be feasible or advisable. In the case of the dataset used to train our model, which contains different parts of a Google Local Listing including addresses and phone numbers, we have made a careful decision not to share it. We prioritize the well-being and safety of individuals, and sharing this dataset could potentially cause harm to people whose personal information is included.

Protecting the privacy and security of individuals is of utmost importance to us. Disclosing personal information, such as addresses and phone numbers, without proper consent or safeguards could lead to privacy violations, identity theft, harassment, or other forms of misuse. Our commitment to responsible data usage means that we handle sensitive information with great care and take appropriate measures to ensure its protection.

While we understand the value of transparency, we also recognize the need to strike a balance between transparency and safeguarding individuals' privacy and security. In this particular case, the potential harm that could result from sharing the dataset outweighs the benefits of complete transparency. By prioritizing privacy, we aim to create a safer and more secure environment for all individuals involved.

We appreciate your understanding and support in our commitment to responsible and ethical data practices. If you have any further questions or concerns, please feel free to reach out to us.
