
Ontul Key Features
Discover the core features of the distributed data engine that unifies batch processing, stream processing, and interactive SQL.
Unified Data Engine
Run batch processing, stream processing, and interactive SQL queries in a single cluster. Consolidate all data workloads without separate systems.
Arrow-Native Execution Engine
Process all data in the Apache Arrow columnar format. Achieve high performance with vectorized execution and zero-copy data sharing, eliminating serialization overhead.
Interactive SQL
Connect JDBC clients such as DBeaver and DataGrip via Arrow Flight SQL, and run multi-catalog federation queries. Full standard SQL support including JOINs, window functions, and CTEs.
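Once a client is connected through the Arrow Flight SQL JDBC driver (URL scheme `jdbc:arrow-flight-sql://host:port`), standard SQL runs as-is. A minimal sketch combining a CTE with a window function; the `orders` table and its columns are assumptions for illustration, not part of any shipped schema:

```sql
-- Hypothetical table: orders(order_id, customer_id, amount, ordered_at)
WITH monthly AS (
  SELECT
    customer_id,
    DATE_TRUNC('month', ordered_at) AS month,
    SUM(amount) AS total
  FROM orders
  GROUP BY customer_id, DATE_TRUNC('month', ordered_at)
)
SELECT
  customer_id,
  month,
  total,
  RANK() OVER (PARTITION BY month ORDER BY total DESC) AS rank_in_month
FROM monthly;
```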
Batch & Streaming Processing
Execute distributed batch ETL jobs and real-time streaming pipelines using the Java or Python SDK. Supports both Client mode and Server mode.
Connector Architecture
Access diverse data sources through plugin-based connectors. Dynamically register and unregister Iceberg, JDBC, Kafka, and more at runtime.
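Runtime registration might look like catalog DDL in the style many SQL engines use; this is a hypothetical sketch only, and the statement form, connector name, and property keys are all assumptions rather than Ontul's documented syntax:

```sql
-- Hypothetical DDL; connector and property names are assumptions
CREATE CATALOG sales_pg WITH (
  'connector' = 'jdbc',
  'url'       = 'jdbc:postgresql://pg-host:5432/sales'
);

-- Unregister the catalog again at runtime
DROP CATALOG sales_pg;
```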
Federation Queries
Execute cross-catalog joins across multiple data sources in a single SQL query. Combine Iceberg tables and JDBC databases seamlessly.
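Catalog-qualified table names let a single statement span sources. A sketch of a join between an Iceberg catalog and a JDBC catalog, where the catalog, schema, and table names are assumptions for illustration:

```sql
-- Hypothetical catalogs: iceberg_lake (Iceberg), sales_pg (JDBC)
SELECT e.event_id, e.event_time, c.name
FROM iceberg_lake.analytics.events AS e
JOIN sales_pg.public.customers AS c
  ON e.customer_id = c.id
WHERE e.event_time >= DATE '2024-01-01';
```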
Apache Iceberg Integration
Iceberg REST catalog integration with read/write, CTAS, and MERGE INTO support. Automated maintenance including snapshot expiration and data compaction.
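MERGE INTO is the standard upsert form for Iceberg tables. A sketch with assumed catalog and table names:

```sql
-- Upsert staged changes into an Iceberg table (names are assumptions)
MERGE INTO lake.db.customers AS t
USING staging.db.customer_updates AS s
  ON t.id = s.id
WHEN MATCHED THEN
  UPDATE SET t.email = s.email, t.updated_at = s.updated_at
WHEN NOT MATCHED THEN
  INSERT (id, email, updated_at) VALUES (s.id, s.email, s.updated_at);
```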
Security (IAM & KMS)
AES-256-GCM envelope encryption, built-in KMS, catalog/table/column/row-level IAM policies, and STS temporary credentials.
Use Cases
Unified Data Processing
Handle all data workloads — batch, streaming, and SQL — with a single Ontul cluster instead of separate systems.
Real-Time Data Pipelines
Ingest data from Kafka, process in Ontul, and load into Iceberg tables for real-time ETL pipelines.
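One way such a pipeline could be expressed is a continuous insert from a Kafka-backed source into an Iceberg table. This is a hypothetical sketch: the catalog and table names, the Kafka source form, and the streaming-insert semantics are all assumptions, not documented Ontul behavior:

```sql
-- Hypothetical streaming pipeline: Kafka topic -> Iceberg table
INSERT INTO lake.db.page_views
SELECT
  user_id,
  page_url,
  CAST(event_ts AS TIMESTAMP) AS event_time
FROM kafka_cat.topics.page_views_raw;
```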
Data Lake Analytics
Run federation queries across Iceberg, JDBC, and other sources for unified analytics.
ETL Automation
Program batch ETL jobs with SDK and REST API, integrating with workflow orchestrators.
Considering Ontul for your data platform?
Unified. Arrow-Native. Single Engine.
Revolutionize your data infrastructure with a distributed data engine that unifies batch, streaming, and SQL.
