Jiyus Managed Big Data Frameworks

Big data processing clusters

Stop managing dedicated big data clusters and paying for idle time. Create a cluster when you need one, and remove it when you don’t. Big data clusters are automatically configured to start data processing after creation.

Big Data features

  • Big data processing clusters on-demand
  • Choice of processing frameworks
  • Fully configured and ready to go
  • 10 regions, 10 data centers

Managing big data doesn’t have to be a big deal.


Choose from a variety of frameworks.

If you already have data processing jobs written for your favorite framework, there's no need to change them. Import your jobs into a familiar framework on the Jiyus platform.

Store processed data.

Place your data in ephemeral instance storage or use dedicated volumes. HDFS high availability is supported with selected frameworks too.

Pay when used, not when idle.

Clusters can be created on-demand and removed when jobs are complete. Or, for ongoing jobs with variable data, scale clusters anytime to match needed processing power.

Orchestrate clusters declaratively.

Big data clusters come pre-integrated with orchestration templates. Manage the clusters’ infrastructure by simply writing and adjusting declarative templates.
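As a sketch, a declarative cluster template might look like the following. The resource type and field names here are hypothetical, since the source doesn't specify the template format; the point is that cluster size and storage are plain data you edit and re-apply rather than steps you script.

```yaml
# Hypothetical declarative template for a Jiyus big data cluster.
# Field names are illustrative; consult the platform's template
# reference for the actual schema.
resources:
  analytics_cluster:
    type: bigdata::Cluster
    properties:
      framework: spark          # e.g. hadoop, spark, storm
      worker_count: 4           # scale by editing this value and re-applying
      worker_flavor: m1.large
      storage:
        hdfs_high_availability: true
        volumes_per_worker: 2
```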

Take advantage of object storage.

Save your job’s definition or data sources on Jiyus’s object storage, and benefit from unlimited scalability and direct internet access for streamlined data collection flows.

Integrate with other Jiyus services.

As with any service on the Jiyus platform, big data clusters are built on top of a core set of configurable services, such as virtualized networking and firewall-as-a-service.

Your choice of Apache processing frameworks.

Choose between Vanilla Apache or Cloudera Hadoop implementations for your batch data processing needs.
Unlike Hadoop, Apache Storm is designed for real-time stream processing. Run your IoT analytics, ETL workloads, online machine learning, and more.
Apache Spark, when deployed with Cloudera HDFS, processes certain workloads up to 100 times faster than Hadoop MapReduce.
Clusters that use Apache Ambari as the orchestrator for deploying HDP (Hortonworks Data Platform) make deployments single-click, easy, and repeatable.
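To illustrate the batch MapReduce model that Hadoop popularized, here is a minimal, framework-free word count in Python. This is only a single-process sketch of the programming model; a real Hadoop or Spark job would distribute the map and reduce phases across the cluster's workers.

```python
from collections import Counter
from itertools import chain

def map_phase(line):
    # Map: emit a (word, 1) pair for each word in a line.
    return [(word.lower(), 1) for word in line.split()]

def reduce_phase(pairs):
    # Reduce: sum the counts for each distinct word.
    counts = Counter()
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

lines = ["big data big clusters", "data processing clusters"]
mapped = chain.from_iterable(map_phase(line) for line in lines)
word_counts = reduce_phase(mapped)
print(word_counts)  # e.g. {'big': 2, 'data': 2, 'clusters': 2, 'processing': 1}
```

In a distributed run, the shuffle step between the two phases groups all pairs with the same word onto the same worker before reducing.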

How-to: Create a big data cluster.