Nazare is an industrial data platform. It provides the performance, flexibility, and scalability needed to accurately collect, store, transform, and analyze data from industrial machinery, enabling customers to quickly put that data to work for their core business.
It collects ultra-precise machinery data, connects easily to a wide range of machines, transforms data with little effort, and generates statistics and alarms.
It is particularly optimized for rapidly processing ultra-precise, microsecond-resolution data in fragmented manufacturing environments.
The virtual edge lets you connect machines easily from the management console, while the physical edge supports connections over a variety of industrial protocols.
The management console also lets you transform data and generate statistics and alarms in real time.
Nazare performs high-throughput collection and query processing on a single node through its proprietary core engine, and is optimized especially for time series data processing.
A single low-cost server ingests 10 million time series records per second in real time and queries more than 100 million records per second.
It ingests big data more than twice as fast as a conventional database and queries it more than 20 times faster; ingestion is also more than 10 times faster than Spark.
It is optimized for time-range search and aggregation, matching or exceeding the performance of dedicated time series databases.
Nazare runs on a single workstation, so you can get started at a fraction of the cost of typical big data platforms; efficient resource usage and low storage consumption keep running costs down.
It is a big data platform that starts cheaply on a single node yet scales out horizontally without limit.
Its native Rust stream processors and database consume far fewer resources than conventional big data engines, saving money.
The high compression ratio of the Parquet file format reduces storage costs. Separating compute from storage avoids unnecessary spending, and hot/warm/cold storage tiers can be selected per data type to cut costs further.
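As a rough illustration of the storage side, here is a minimal Rust sketch that writes a small time series batch to a ZSTD-compressed Parquet file using the Apache Arrow and Parquet crates; the schema, sample values, and compression level are illustrative, not Nazare's actual internals.

```rust
use std::fs::File;
use std::sync::Arc;

use arrow::array::{Float64Array, TimestampMicrosecondArray};
use arrow::datatypes::{DataType, Field, Schema, TimeUnit};
use arrow::record_batch::RecordBatch;
use parquet::arrow::ArrowWriter;
use parquet::basic::{Compression, ZstdLevel};
use parquet::file::properties::WriterProperties;

fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Two-column time series schema: microsecond timestamp + sensor value.
    let schema = Arc::new(Schema::new(vec![
        Field::new("ts", DataType::Timestamp(TimeUnit::Microsecond, None), false),
        Field::new("value", DataType::Float64, false),
    ]));
    let batch = RecordBatch::try_new(
        schema.clone(),
        vec![
            Arc::new(TimestampMicrosecondArray::from(vec![
                1_700_000_000_000_000_i64,
                1_700_000_000_000_001,
            ])),
            Arc::new(Float64Array::from(vec![0.42, 0.43])),
        ],
    )?;

    // Columnar ZSTD compression (level 3 here) is what keeps the Parquet
    // footprint small relative to row-oriented storage.
    let props = WriterProperties::builder()
        .set_compression(Compression::ZSTD(ZstdLevel::try_new(3)?))
        .build();
    let mut writer = ArrowWriter::try_new(File::create("sensor.parquet")?, schema, Some(props))?;
    writer.write(&batch)?;
    writer.close()?; // Flushes the footer; the file is unreadable without it.
    Ok(())
}
```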
The Rust-native core services, built with up-to-date open source big data processing technology, carry no VM or GC overhead and run stably on a single machine without out-of-memory (OOM) failures. Separate compute and storage also make scaling up and out efficient.
Built on the DataFusion query engine, Nazare performs high-speed columnar processing using the Arrow in-memory format and the Parquet file format. It also uses the ADBC and FlightSQL protocols to minimize serialization and network overhead, keeping processing optimized for OLAP workloads.
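To make the query path concrete, a minimal DataFusion sketch for the kind of time-range aggregation described above might look like this; the table name, file path, and query are hypothetical.

```rust
use datafusion::prelude::*;

#[tokio::main]
async fn main() -> datafusion::error::Result<()> {
    let ctx = SessionContext::new();
    // Register a Parquet file as a queryable table (name and path are illustrative).
    ctx.register_parquet("sensor", "sensor.parquet", ParquetReadOptions::default())
        .await?;

    // Time-range filter plus per-minute aggregation: the access pattern the
    // surrounding text describes as the platform's sweet spot.
    let df = ctx
        .sql(
            "SELECT date_trunc('minute', ts) AS minute, avg(value) AS avg_value \
             FROM sensor \
             WHERE ts >= '2023-11-14T22:00:00' AND ts < '2023-11-14T23:00:00' \
             GROUP BY minute \
             ORDER BY minute",
        )
        .await?;
    df.show().await?;
    Ok(())
}
```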
The core services written in Rust avoid VM and GC overhead and are aggressively optimized through LLVM native compilation. Combined with jemalloc and swap, they run stably even on a single node without OOM failures.
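For example, a Rust service can opt into jemalloc in a few lines. This sketch uses the community tikv-jemallocator crate; the crate choice is an assumption rather than a confirmed Nazare dependency.

```rust
// Opt the whole binary into jemalloc as the global allocator
// (tikv-jemallocator is one common choice; assumption, not Nazare's
// confirmed dependency).
use tikv_jemallocator::Jemalloc;

#[global_allocator]
static GLOBAL: Jemalloc = Jemalloc;

fn main() {
    // Every allocation below goes through jemalloc, which tends to
    // fragment less than the system allocator under long-running,
    // allocation-heavy workloads.
    let series: Vec<f64> = (0..1_000_000).map(|i| i as f64 * 0.001).collect();
    println!("buffered {} samples", series.len());
}
```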
Compute and storage are separated, so you can scale up and out efficiently as needed.
It can be deployed efficiently on a single workstation and scaled out continuously thereafter, in on-premises or various cloud environments; because it does not depend on a specific vendor or technology, compatibility is high.
Starting from a single workstation, separated compute and storage allow dynamic scaling to large clusters storing tens of petabytes or more, with no slowdown over time.
It can be installed and operated on premises, including on isolated networks, as well as on private and public clouds, and multi-cloud and hybrid environments can also be configured.
Because it is designed around open source software and open protocols and formats, it does not depend on a specific cloud provider or vendor technology, keeping license costs low and compatibility high.
Its core services, built with the FDAP stack on Kubernetes and a Lakehouse architecture, deliver the high performance, high efficiency, low cost, and broad compatibility expected of a modern big data platform.
Based on Kubernetes, it starts on a single node yet scales to large deployments, supports on-premises and various cloud installations, and improves efficiency by separating compute and storage.
Based on the Lakehouse architecture, it provides the functions of a data warehouse and a data lake in a single platform, and its standardized storage format is compatible with various existing query engines.
The FDAP (FlightSQL, DataFusion, Arrow, Parquet) stack delivers strong OLAP performance; in particular, high-performance big data processing improves productivity in machine learning and AI workloads.
Various services can be deployed and operated efficiently on proven open source components, and because the platform is vendor neutral it supports a variety of query engines.
The FDAP (FlightSQL, DataFusion, Arrow, Parquet) stack consists of open source technologies essential to big data, developed as open standards under the Apache Software Foundation. In addition, Delta Lake, the data storage framework, has been validated in numerous production environments.
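As a small compatibility sketch, an external Rust program could open such a Delta table directly with the community delta-rs (deltalake) crate; the path is illustrative, and exact method signatures vary across crate versions.

```rust
use deltalake::open_table;

#[tokio::main]
async fn main() -> Result<(), deltalake::DeltaTableError> {
    // Open an existing Delta table by path (the path is illustrative).
    // Any Delta-aware engine could read the same files, which is the
    // vendor-neutrality point made above.
    let table = open_table("./data/sensor_delta").await?;
    println!("Delta table loaded at version {}", table.version());
    Ok(())
}
```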
Standardized open file formats keep it compatible with various existing big data processing engines without tying it to a specific vendor.
Various open source services can be deployed quickly and operated efficiently, and workloads as diverse as application services, BI, AI, and machine learning can all be handled on a single platform.