Reveals Details of Extremely Efficient Architecture that Seamlessly Accelerates Spark and Hadoop, Busts Silos and Ends ETL
SPARK SUMMIT 2016, SAN FRANCISCO, June 7, 2016
iguaz.io, the disruptive company challenging the status quo for big data, the Internet of Things (IoT) and cloud-native applications, today unveiled its vision and architecture for revolutionizing data services for both private and public clouds. This new architecture makes data services and big data tools consumable for mainstream enterprises that have been unable to harness them because of their complexity and internal IT skills gaps.
Data today is stored and moved between data silos optimized for specific applications or access patterns. The results include complex and difficult-to-maintain data lakes, constant data movement, redundant copies, the burdens of ETL (extract/transform/load), and ineffective security.
While popular cloud services like Amazon’s AWS and Microsoft’s Azure Data Lake introduce some level of simplicity and elasticity, under the hood they still move data between different data stores, lock customers in through proprietary APIs and onerous pricing schemes, and, at times, provide unpredictable performance.
Data is proliferating at an unprecedented pace — analyst firm Wikibon predicts the big data market will grow to $92.2 billion by 2026 — requiring a new paradigm for building and managing a growing and complex environment.
With its first-ever high-performance virtualized data-services architecture, iguaz.io is taking a fresh approach to big data’s challenges. The iguaz.io architecture:
- Consolidates data into a high-volume, real-time data repository that virtualizes and presents it as streams, messages, files, objects or data records;
- Stores all data types consistently on different memory or storage tiers;
- Seamlessly accelerates popular application frameworks including Spark, Hadoop, ELK and Docker containers;
- Offers enterprises a 10x-to-100x improvement in time-to-insights at lower costs;
- Provides best-in-class data security based on a real-time classification engine, a critical need for data sharing among users and business units.
With extensive experience in high-performance storage, networking and security, iguaz.io’s founders drew upon their combined backgrounds in designing their data stack from the ground up. They leveraged the latest technologies, bypassing traditional operating-system, network and storage bottlenecks. More details on the architecture are available at http://iguaz.io/.
“The current data pipeline, composed of many silos and tools, is extremely complex and inefficient, resulting in long deployment cycles and slow time to insights,” said George Gilbert, Lead Analyst, Data & Analytics at Wikibon. “What will benefit the market most is a new approach that delivers multipurpose and easy-to-use data solutions rather than single-purpose tools. This will be a key factor in accelerating the adoption of big data in the enterprise.”
“Enterprises have been sharing with us their pain points and challenges around adopting big data and new analytics in their businesses,” said Asaf Somekh, co-founder and CEO of iguaz.io. “We designed our solution from the ground up to address these challenges and allow our customers to focus on their applications and business.”
iguaz.io will unveil its architecture at Spark Summit 2016, where theCUBE will interview iguaz.io Founder and CTO Yaron Haviv and Wikibon’s George Gilbert. View the interview live at 3 p.m. PDT on June 7, or at any time thereafter, by clicking on http://siliconangle.tv/spark-summit-west-2016/
iguaz.io was founded in 2014 to take a fresh approach to the data challenges faced by today’s enterprises. The iguaz.io virtualized data services architecture fundamentally disrupts the status quo for big data, the Internet of Things (IoT) and cloud-native applications. The company is led by industry experts and innovators, and its teams are based in the US, Europe and Israel. To learn more about iguaz.io, visit www.iguaz.io or follow @iguazio.