Updates to the Google My Business Data Source
We have fixed the Google My Business Extractor. The component now allows you to collect daily metrics, reviews, media, and questions for businesses that have a Google Business Profile.
Please feel free to test the extractor. You can find the updated documentation in our Keboola user documentation.

New Component - Okta Extractor
We are introducing a new component for extracting data from the Okta Identity Platform.
The new extractor brings your data from Okta into Keboola Connection Storage and has access to the following Okta endpoints: users, user_types, and devices.
When extracting data, you can choose between two sync modes, illustrated in the sketch below:
- Full Sync – All data from the source will be downloaded on every run, providing a comprehensive snapshot of your Okta data at each point in time.
- Incremental Sync – Only data updated since the last run will be downloaded, allowing for efficient, incremental updates to your Okta data.
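For a rough sense of what the two modes mean in practice, here is a minimal Python sketch of a full versus incremental pull from the Okta users endpoint. The org URL, token handling, and the lastUpdated search filter are illustrative assumptions; the extractor handles all of this for you.

```python
# Illustrative sketch only: full vs. incremental sync against the Okta users endpoint.
# The org URL, API token, and search filter below are assumptions for the example.
import requests

OKTA_DOMAIN = "https://your-org.okta.com"   # hypothetical Okta org URL
API_TOKEN = "..."                           # placeholder Okta API token
HEADERS = {"Authorization": f"SSWS {API_TOKEN}", "Accept": "application/json"}

def full_sync():
    """Download every user on each run (a complete snapshot)."""
    return requests.get(f"{OKTA_DOMAIN}/api/v1/users", headers=HEADERS).json()

def incremental_sync(last_run_iso):
    """Download only users updated since the previous run."""
    params = {"search": f'lastUpdated gt "{last_run_iso}"'}
    return requests.get(f"{OKTA_DOMAIN}/api/v1/users", headers=HEADERS, params=params).json()
```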
Learn more about the new Okta extractor in our documentation. If you have any additional questions or feedback, please contact us.

New Component - OpenAI Application
We are excited to announce the release of a new component in our Keboola Connection platform—the OpenAI app! This app allows you to utilize the OpenAI Text Completion service and incorporate it into a Keboola Connection project.
To get started with the OpenAI app, simply create a new configuration and enter your API key from the OpenAI platform settings into the designated field. You can then choose which type of model you want to use, either predefined or custom, and customize the model options as desired. You can also define your prompt and input pattern in the Prompt field, using a placeholder to refer to the input column in your data table.
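To give a rough idea of how the prompt placeholder works, here is a minimal Python sketch that substitutes an input column into a prompt template row by row. The placeholder syntax, column name, and file name are hypothetical examples; the component performs the substitution and the API calls for you.

```python
# Minimal sketch of the prompt/placeholder idea. The placeholder token,
# column name, and file name are hypothetical examples.
import csv

PROMPT_TEMPLATE = "Summarize the following customer review in one sentence: [[text]]"

with open("reviews.csv", newline="", encoding="utf-8") as src:
    for row in csv.DictReader(src):
        prompt = PROMPT_TEMPLATE.replace("[[text]]", row["text"])
        # The component sends each generated prompt to the OpenAI Text Completion
        # service and writes the response to the output table.
        print(prompt)
```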
We encourage all Keboola Connection users to try out the OpenAI app and experience the power of the OpenAI Text Completion service in their data projects. For more information and detailed instructions on how to use the OpenAI app, please visit our documentation page.

New Component - Time Doctor 2 Extractor
We are excited to introduce the Time Doctor 2 data source, which enables the extraction of data from Time Doctor 2 using the Time Doctor 2 API.
The extractor supports the following endpoints:
- users
- tasks
- projects
- worklog
- edit-time
- timeuse
For more information on how to set up this data source, please visit our documentation. If you have any additional questions or feedback, please do not hesitate to contact us.

New Component - ServiceNow Extractor
We are pleased to announce the release of our new data source—ServiceNow. The component allows you to retrieve data from ServiceNow using the ServiceNow table API.
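For context, here is a hedged Python sketch of what a read through the ServiceNow table API can look like. The instance URL, table name, and credentials are placeholders; the extractor performs this retrieval for you.

```python
# Illustrative sketch of a ServiceNow table API read; the instance, table name,
# and credentials are placeholders - the extractor handles this for you.
import requests

INSTANCE = "https://your-instance.service-now.com"  # hypothetical instance URL

resp = requests.get(
    f"{INSTANCE}/api/now/table/incident",            # example table name
    auth=("api_user", "api_password"),               # placeholder credentials
    params={"sysparm_limit": 100},
    headers={"Accept": "application/json"},
)
records = resp.json()["result"]
```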
For more information on how to set up this extractor, please see our documentation. If you have any additional questions or feedback, please do not hesitate to contact us.


New Component - LinkedIn Pages Source
We are happy to announce the release of our brand new data source for extracting data from the LinkedIn Pages API. It downloads data about organizations and their posts, statistics on page performance, and tables of the enumerated types used in those datasets.

Learn more about the new extractor in our documentation.
If you have any other questions or just want to give us feedback, please do not hesitate to contact us.

New Component - DynamoDB Streams Extractor
We are excited to announce the release of our new component for fetching data from DynamoDB Streams. This component is designed to help you easily extract data from DynamoDB Streams and store it in Keboola Storage, with support for incremental fetches.
One of the key features of this component is its ability to fetch data in increments, which makes it a great choice for very large DynamoDB tables where a full scan is impractical. By retrieving only the changes made in the last 24 hours, you can significantly reduce the amount of data that needs to be processed, which helps improve performance and reduce costs.
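For context on how incremental fetching from a stream works, here is a minimal boto3 sketch that walks a stream's shards and reads change records. The stream ARN is a placeholder; the component manages shard iterators, pagination, and loading into Keboola Storage for you.

```python
# Minimal boto3 sketch of reading change records from a DynamoDB stream.
# The stream ARN is a placeholder; the extractor handles this for you.
import boto3

streams = boto3.client("dynamodbstreams", region_name="us-east-1")
STREAM_ARN = "arn:aws:dynamodb:us-east-1:123456789012:table/my-table/stream/..."  # placeholder

description = streams.describe_stream(StreamArn=STREAM_ARN)["StreamDescription"]
for shard in description["Shards"]:
    iterator = streams.get_shard_iterator(
        StreamArn=STREAM_ARN,
        ShardId=shard["ShardId"],
        ShardIteratorType="TRIM_HORIZON",  # read all records still retained in the stream (up to 24 hours)
    )["ShardIterator"]
    records = streams.get_records(ShardIterator=iterator)["Records"]
    for record in records:
        print(record["eventName"], record["dynamodb"].get("Keys"))
```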


New Component - Deepnote Notebook Trigger Application

The component enables you to trigger a run of a specified notebook in Deepnote. To use it, you need access to the Deepnote API, which is available only on the Team and Enterprise plans.
You can find the component documentation here.
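As a rough illustration, the sketch below shows how a notebook run might be triggered over the Deepnote API from Python. The endpoint path, project and notebook IDs, and token handling are assumptions for illustration only; please refer to the Deepnote API and the component documentation for the actual request.

```python
# Rough sketch of triggering a notebook run via the Deepnote API.
# The endpoint path, IDs, and token handling are assumptions for illustration;
# the component and the Deepnote API documentation define the actual request.
import requests

API_TOKEN = "..."            # placeholder Deepnote API token (Team/Enterprise plans)
PROJECT_ID = "my-project"    # placeholder project ID
NOTEBOOK_ID = "my-notebook"  # placeholder notebook ID

resp = requests.post(
    f"https://api.deepnote.com/v1/projects/{PROJECT_ID}/notebooks/{NOTEBOOK_ID}/execute",  # assumed path
    headers={"Authorization": f"Bearer {API_TOKEN}"},
)
resp.raise_for_status()
```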