BigQuery integration
When a plan or test run completes, its execution details, such as outcome, duration, and browser type, are immediately written to tables in your configured BigQuery dataset. See the export schema below for details.
Enterprise Feature
This feature is enabled during trials and requires an Enterprise plan.
Getting started
BigQuery Integration Requires a Paid Google Cloud Account
The BigQuery Sandbox available to unpaid trial Google Cloud accounts cannot be used with the mabl BigQuery feed export, because the Sandbox does not support streaming inserts.
Upgrade to a paid GCP BigQuery account to use mabl BigQuery Export.
You'll need to add a Service Account to your destination GCP Project so mabl can deliver data.
Google Cloud Project Aliases
Sometimes GCP aliases a project, meaning the project ID and project name differ.
In that case, make sure to use the project ID, not the project name. Run
gcloud projects list
to see the details (ID, name, and number) for all of your projects.
- Add the service account [email protected] as a BigQuery Data Editor to the destination Google Cloud Platform project (more details).
- In the mabl web UI, visit Settings > Integrations and click BigQuery Export.
- Add the destination project ID, dataset name, and a prefix for the export tables.
- Click Save.

Click the BigQuery + SETUP button to get started.
Alternative IAM Permission Setup
If you want to constrain the mabl [email protected] service account more tightly, you can create a custom IAM role with the following permissions and assign that role to the mabl service account on the target GCP project.
bigquery.datasets.create
bigquery.datasets.get
bigquery.tables.create
bigquery.tables.get
bigquery.tables.update
bigquery.tables.updateData
BigQuery table layout and partitions
When the first test runs through the BigQuery export feed, the specified dataset and the following tables will be created:
<table_prefix>_plan_run
<table_prefix>_journey_run
<table_prefix>_run_categorization
<table_prefix>_performance_test_run
<table_prefix>_activity_feed
For example, if you use the defaults, the following BigQuery tables will be created:
<your-project>:mabl_export.mabl_plan_run
<your-project>:mabl_export.mabl_journey_run
<your-project>:mabl_export.mabl_run_categorization
<your-project>:mabl_export.mabl_performance_test_run
<your-project>:mabl_export.mabl_activity_feed
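To confirm that the export tables exist after the first run, you can list them from the dataset's INFORMATION_SCHEMA. The sketch below assumes the default dataset name mabl_export, the default mabl table prefix, and a project named my-project, as in the examples above; substitute your own values.
-- List the mabl export tables created in the destination dataset.
-- Assumes the default dataset name and `mabl` table prefix; replace
-- `my-project` with your own project ID.
SELECT
  table_name,
  creation_time
FROM
  `my-project.mabl_export.INFORMATION_SCHEMA.TABLES`
WHERE
  table_name LIKE 'mabl_%'
ORDER BY
  table_name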
These tables will be partitioned by day, based on the following columns:
- <table_prefix>_plan_run on column start_time
- <table_prefix>_journey_run on column start_time
- <table_prefix>_run_categorization on column created_time
- <table_prefix>_performance_test_run on column start_time
- <table_prefix>_activity_feed on column created_time
If you want to discard test data after a set amount of time, you can configure partition expiration in your BigQuery data warehouse.
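For example, a BigQuery DDL statement along the following lines sets a partition expiration on one of the export tables. The table name assumes the defaults shown above, and the 90-day window is purely illustrative; choose a retention period that matches your own policies.
-- Expire partitions (and the test data in them) 90 days after the
-- partition date. Table name assumes the default prefix and dataset.
ALTER TABLE `my-project.mabl_export.mabl_plan_run`
SET OPTIONS (
  partition_expiration_days = 90
)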
Personally identifiable data redaction
For readability, exports reference entities using both an opaque unique identifier and the customer-supplied name property.
For user entities, this name is their email address. However, some customers may not want this information in BigQuery. Enabling the Redact Personal Information Fields setting will replace these entries with the value PII_REDACTED on all subsequent writes.

Redact personally identifiable fields.
Field redaction doesn't include free text
PII redaction applies only to the fields below:
- trigger_user_email
- actor_email
Other free-text fields, such as the test name, will still be included. mabl does not inspect the contents of these fields.
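If you want to verify that redaction is active, a quick check like the following counts recent rows whose trigger email has been replaced. It assumes the default plan run table name; only rows written after the setting was enabled will contain PII_REDACTED.
-- Count recent plan runs whose trigger email was redacted vs. not.
-- Assumes the default table name; adjust to your project and dataset.
SELECT
  COUNTIF(trigger_user_email = 'PII_REDACTED') AS redacted_rows,
  COUNTIF(trigger_user_email IS NOT NULL
          AND trigger_user_email != 'PII_REDACTED') AS unredacted_rows
FROM
  `my-project.mabl_export.mabl_plan_run`
WHERE
  start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)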
Table schema changes
Table schema changes will be rare and, if they occur, will only be additive:
- New columns only
- Announcements via the Friends of mabl Slack and email lists
- Admins of workspaces with an active integration will be notified
Table schema
Table *_plan_run
Contains every plan run on the integrated workspace, written upon plan run completion.
Column Name | Type | Description |
---|---|---|
id | String | plan run id |
plan_id | String | plan id |
plan_name | String | plan name |
plan_url | String | plan UI URL |
status | String | plan run status |
success | Boolean | overall plan run success |
start_time | Timestamp | plan run start time UTC |
stop_time | Timestamp | plan run stop time UTC |
runtime_millis | Integer | total plan run time milliseconds |
tags | String, Repeated | plan tags |
labels | String, Repeated | plan labels |
application_id | String | application under test id |
application_name | String | application under test name |
application_url | String | application under test UI URL |
starting_url | String | application under test starting url |
trigger_type | String | execution trigger type |
trigger_user_id | String | execution triggering user (optional) |
trigger_user_email | String | execution triggering user email (optional) |
deployment_id | String | execution trigger deployment id (optional) |
workspace_id | String | workspace id |
workspace_name | String | workspace name |
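As an example of querying this table, a query along these lines computes a daily pass rate per plan for the last seven days; filtering on start_time also limits the scan to recent partitions. It assumes the default table name mabl_plan_run in dataset mabl_export and a project named my-project.
-- Daily pass rate per plan over the last 7 days.
-- Assumes default table names; adjust the project, dataset, and prefix.
SELECT
  DATE(start_time) AS run_date,
  plan_name,
  COUNT(*) AS total_runs,
  COUNTIF(success) AS passing_runs,
  ROUND(100 * COUNTIF(success) / COUNT(*), 1) AS pass_rate_percent
FROM
  `my-project.mabl_export.mabl_plan_run`
WHERE
  start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY
  run_date, plan_name
ORDER BY
  run_date DESC, plan_name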
Notes on trigger types
The trigger_type dimension will be one of the following values:
Trigger Type Value | Description | User Id/Email populated |
---|---|---|
SCHEDULE | Executed by timer/schedule trigger | No |
MANUAL | Executed manually from mabl web app or CLI (non-Deployment event) | Yes |
CUSTOMER_EVENT | Executed for Deployment API trigger | No |
INSIGHT | Executed in reaction to a mabl Insight | No |
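For instance, a breakdown of plan runs by trigger type over the last 30 days (again assuming the default plan run table name) might look like this:
-- Plan run counts and pass counts by trigger type, last 30 days.
SELECT
  trigger_type,
  COUNT(*) AS total_runs,
  COUNTIF(success) AS passing_runs
FROM
  `my-project.mabl_export.mabl_plan_run`
WHERE
  start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY
  trigger_type
ORDER BY
  total_runs DESC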
Table *_journey_run
Contains every test run on the integrated workspace, written upon test run completion (possibly before the corresponding plan run row, since the parent plan run may not yet be complete).
Column Name | Type | Description |
---|---|---|
id | String | test run id |
journey_id | String | test id |
journey_name | String | test name |
journey_url | String | test UI URL |
view_output_url | String | test run output UI URL |
status | String | test run status |
success | Boolean | overall test run success |
start_time | Timestamp | test run start time UTC |
stop_time | Timestamp | test run stop time UTC |
runtime_millis | Integer | total test run time milliseconds |
tags | String, Repeated | test tags |
labels | String, Repeated | test labels |
mabl_branch | String | mabl branch test ran against |
environment_id | String | test run environment id |
environment_name | String | test run environment name |
environment_url | String | test run environment UI URL |
browser_type | String | browser under test type |
browser_version | String | browser under test version |
plan_id | String | plan id |
plan_name | String | plan name |
plan_url | String | plan UI URL |
plan_run_id | String | plan run id |
workspace_id | String | workspace id |
workspace_name | String | workspace name |
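For example, a query like the following (assuming the default test run table name) summarizes failed test runs by browser type over the last seven days:
-- Failed test runs by browser over the last 7 days.
-- Assumes the default table name `mabl_journey_run`.
SELECT
  browser_type,
  COUNT(*) AS total_runs,
  COUNTIF(NOT success) AS failed_runs
FROM
  `my-project.mabl_export.mabl_journey_run`
WHERE
  start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 7 DAY)
GROUP BY
  browser_type
ORDER BY
  failed_runs DESC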
Table *_run_categorization
Contains every test run failure reason for the integrated workspace, written when a user sets a failure reason on a given test run output.
Repeated entries per test run on update
A new row will be written for each change to a test run's failure reason (stored in the category dimension). Group by journey_run_id and take the latest created_time value to find the most recent category assigned to a test run. If the failure reason is removed, category will be NULL, indicating that a user has cleared the failure reason from that test run.
Column name | Type | Description |
---|---|---|
id | String | unique id |
category | String | failure reason (NULL on removed) |
journey_id | String | test id |
journey_name | String | test name |
journey_run_id | String | test run id |
created_time | Timestamp | grading time UTC |
start_time | Timestamp | test run start time UTC |
stop_time | Timestamp | test run stop time UTC |
grader_user_id | String | grading user id |
grader_user_email | String | grading user email (optional) |
workspace_id | String | workspace id |
workspace_name | String | workspace name |
tombstone | Boolean | Indicator that failure reason has been removed from test run. Will be true if test run no longer has a failure reason and category will be NULL |
Table *_performance_test_run
Contains metrics for every performance test run on the integrated workspace, written upon test run completion (possibly before the corresponding plan run row, since the parent plan run may not yet be complete). It can be joined with the journey run table on id.
Column Name | Type | Description |
---|---|---|
id | String | test run id |
journey_id | String | test id |
journey_name | String | test name |
journey_url | String | test UI URL |
journey_run_id | String | test run id |
start_time | Timestamp | test run start time UTC |
stop_time | Timestamp | test run stop time UTC |
api_virtual_user_hours_consumed | Float | virtual user hours consumed of performance test run |
api_concurrent_user_count | Integer | concurrent users of performance test run |
api_average_response_time | Integer | average response time in milliseconds of performance test run |
api_requests_per_second | Integer | average requests per second of performance test run |
api_error_rate_percent | Float | average error rate as a percent [0-100] of performance test run |
workspace_id | String | workspace id |
workspace_name | String | workspace name |
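As a sketch (assuming the default performance test run table name), the following query tracks average response time and error rate per performance test over the last 30 days:
-- Average response time and error rate per performance test, last 30 days.
SELECT
  journey_name,
  COUNT(*) AS runs,
  AVG(api_average_response_time) AS avg_response_time_millis,
  AVG(api_error_rate_percent) AS avg_error_rate_percent
FROM
  `my-project.mabl_export.mabl_performance_test_run`
WHERE
  start_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY
  journey_name
ORDER BY
  avg_response_time_millis DESC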
Table *_activity_feed
Contains every activity feed entry in the integrated workspace, covering all activity types supported by the activity feed in the mabl UI.
Column Name | Type | Description |
---|---|---|
id | String | activity feed entry id |
entity_id | String | id of the entity changed |
entity_name | String | name of the entity changed |
entity_type | String | type of the entity changed |
entity_canonical_id | String | canonical id of the entity changed |
entity_version_number | Integer | version number for versioned entities |
action_type | String | type of action taken |
action_timestamp | Timestamp | time the action was taken |
created_time | Timestamp | activity feed entry creation time |
actor_id | String | id of the person or API key that took the action |
actor_email | String | email of the actor, if applicable |
entity_previous_id | String | id of the entity before this action |
entity_previous_name | String | name of the entity before this action |
entity_previous_version_number | Integer | version of the entity before this action |
workspace_id | String | workspace id |
workspace_name | String | workspace name |
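For example, the following query (assuming the default activity feed table name) lists the most common actors and action types over the last 30 days:
-- Most common actions and actors in the activity feed, last 30 days.
-- Note: actor_email may show PII_REDACTED if redaction is enabled.
SELECT
  actor_email,
  action_type,
  entity_type,
  COUNT(*) AS actions
FROM
  `my-project.mabl_export.mabl_activity_feed`
WHERE
  created_time >= TIMESTAMP_SUB(CURRENT_TIMESTAMP(), INTERVAL 30 DAY)
GROUP BY
  actor_email, action_type, entity_type
ORDER BY
  actions DESC
LIMIT 20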
Querying failure reason
Because BigQuery exports are written in a data warehouse fashion, rows are never updated or deleted, only appended.
This means that mutable values like failure reasons result in multiple rows being written as they change. To determine the latest failure reason, you need to (1) find the last update for each test run and (2) select the matching rows. The following query does this.
To use it, replace my-project.mabl_export.mabl_run_categorization with the table name in your BigQuery system.
SELECT
  failure_reason.*
FROM (
  -- Get the latest update time for each test run
  SELECT
    journey_run_id,
    MAX(created_time) AS last_updated_time
  FROM
    `my-project.mabl_export.mabl_run_categorization`
  GROUP BY
    journey_run_id ) AS latest_categories
JOIN
  `my-project.mabl_export.mabl_run_categorization` AS failure_reason
ON
  failure_reason.journey_run_id = latest_categories.journey_run_id
  AND failure_reason.created_time = latest_categories.last_updated_time
-- Ignore removed failure reasons
WHERE
  failure_reason.category IS NOT NULL
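Building on that query, you can join the latest failure reasons back to the test run table for reporting. The following is a sketch, assuming the default table names used throughout this page:
-- Latest failure reason per test run, joined with test run details.
WITH latest_categories AS (
  SELECT
    journey_run_id,
    MAX(created_time) AS last_updated_time
  FROM
    `my-project.mabl_export.mabl_run_categorization`
  GROUP BY
    journey_run_id
)
SELECT
  runs.journey_name,
  runs.browser_type,
  runs.start_time,
  failure_reason.category
FROM latest_categories
JOIN `my-project.mabl_export.mabl_run_categorization` AS failure_reason
  ON failure_reason.journey_run_id = latest_categories.journey_run_id
  AND failure_reason.created_time = latest_categories.last_updated_time
JOIN `my-project.mabl_export.mabl_journey_run` AS runs
  ON runs.id = failure_reason.journey_run_id
-- Ignore removed failure reasons
WHERE failure_reason.category IS NOT NULL
ORDER BY runs.start_time DESC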