Aug 18, 2020

Red Hat updates OpenShift for the edge

Edge
Networking
Kubernetes
Harry Menear
2 min
Courtesy of Red Hat
The new round of updates supports greater AI, ML and digital manufacturing workloads at the edge...

IBM’s software subsidiary Red Hat has announced a major new update to its popular OpenShift application platform. The round of improvements brings new products and capabilities to the company’s customers, aimed at supporting the development of edge computing strategies and hybrid cloud architectures. 

In a press release on Monday, Red Hat showcased new features in its Red Hat OpenShift and Red Hat Advanced Cluster Management for Kubernetes solutions. These developments are intended to help organisations handle increasingly demanding edge workloads, such as artificial intelligence (AI), machine learning (ML) and industrial manufacturing applications, across hybrid cloud data centre infrastructures. 

“The next generation of hybrid cloud applications isn’t confined to a corporate data centre or even a public cloud deployment; instead, these innovations will exist at least in part at the edge of global networks, answering consumer demands and solving business challenges with the power that comes from near-real-time processing and analysis,” said Red Hat Senior Vice President and Chief Technology Officer, Chris Wright. 

He added that, “This future at the edge is powered by data, 5G, Linux containers and Kubernetes. Edge computing has become an integral part of Red Hat’s open hybrid cloud strategy as we work with our ecosystem of customers, partners and communities in developing and maintaining the open technologies for innovation at the edge.”

There are three major updates to Red Hat’s service: 3-node cluster support in Red Hat OpenShift 4.5, which effectively enables full Kubernetes functionality at the network edge by combining supervisor and worker nodes for a smaller footprint; cluster management, which allows users to manage thousands of edge sites through a single, consistent view across the hybrid cloud; and a further round of general improvements to the Red Hat Enterprise Linux operating system. 
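In practice, a compact three-node topology of this kind is typically requested at install time by specifying three control-plane machines and zero dedicated workers, so the control-plane nodes also schedule application workloads. A minimal sketch of what such an install-config might look like (cluster name and domain are illustrative placeholders, not details from this announcement):

```yaml
# Illustrative install-config.yaml sketch for a compact 3-node
# OpenShift cluster: three control-plane nodes and no dedicated
# workers, so the supervisor machines also run application pods.
apiVersion: v1
baseDomain: example.com        # assumption: placeholder base domain
metadata:
  name: edge-cluster           # assumption: illustrative cluster name
controlPlane:
  name: master
  replicas: 3                  # the three combined supervisor/worker nodes
compute:
  - name: worker
    replicas: 0                # no separate worker machines at the edge site
```

With no compute replicas defined, the installer marks the control-plane nodes as schedulable, which is what keeps the footprint small enough for edge sites.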

“Exciting new applications in 5G networks will be enabled by edge computing, cloud-native technologies and the hybrid cloud. Our continued collaboration on the Red Hat OpenShift based Unified Edge platform enables service providers to tackle operational challenges associated with deploying the entire edge infrastructure including networking at-scale in an extremely efficient and optimised manner,” commented Suresh Krishnan, Chief Technology Officer at Red Hat ecosystem partner and customer Kaloom. “Kubernetes has become the standard for container orchestration on compute and Kaloom is excited to see that Red Hat OpenShift continues to evolve to meet the needs of service providers.”


May 23, 2021

Data deluge: the impact of data warehouse automation

Automation
Data Warehousing
IT
Digital Transformation
Simon Spring
5 min
Working out how to speed up the rollout and management of data warehousing solutions is essential if organisations expect to succeed.

 

As organisations focus more than ever on data strategy, they encounter a range of opportunities to take control of the factors that influence success: the insight available from effective data analysis improves decision-making and builds competitive advantage. The transformational potential of data has not been lost on business leaders, who have tasked their technical teams with harnessing its power to deliver bottom-line benefits.

As a result, organisations increasingly rely on data warehouse technologies to store, manage and analyse datasets that are often growing at an accelerating rate. By offering a curated repository of data, data warehouses are valued by users who need access to the right information in a usable format. 

This is distinct from other approaches, such as data lakes, which act as huge collections of data ranging from raw, unprocessed data through to varying levels of curated data sets. While ideal for newer use cases such as data science, AI and machine learning, data lakes can be unwieldy and confusing for more traditional analytics. 

As a result, many organisations opt for data warehouse solutions to manage essential data in more structured environments. However, working out how to speed up the rollout and management of these practices using technologies such as automation is essential if organisations are going to minimise the time to value and succeed in the data-driven business landscape.

Focusing On Data Warehouse Automation

In practical terms, as data enters the data warehouse environment it is cleansed, transformed, categorised and tagged, making it easier to manage, use and monitor from a compliance perspective. The problem is that the volume and velocity of data organisations encounter today make manually ingesting, processing and storing it in an accessible, compliant way within a data warehouse increasingly unfeasible. This is where automation comes in. 

However, with businesses constantly looking to data as the source of both reports and forecasts, a data warehouse is invaluable. Data warehouse automation can therefore help accelerate data ingestion and processing, reducing the time to value of data-driven decision-making.

For example, Data Warehouse Automation (DWA) tools orchestrate the data warehousing process end-to-end, rather than being one of many tools that each solve a niche problem, as in the traditional data warehousing lifecycle. This means companies don’t need teams of specialists at each stage of the process with manual handoffs between them, which often lead to miscommunication and make it harder to get a holistic view of the process.

Instead, implementing an automated, template-driven approach allows users to add their own data sources and model the data to suit their needs, ensuring data structures are built quickly by automating all repetitive tasks while keeping IT teams in full control. As explained by Gartner in its recently published report ‘Assessing the Capabilities of Data Warehouse Automation (DWA)’: “The template-driven approach for data warehouse development reduces operational and compliance risks and is a disciplined process for delivering quality data warehouses incorporating all the best practices.”
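To make the template idea concrete: the core of such tooling is capturing table designs as metadata and generating the repetitive build code from one template, so conventions are applied consistently without hand-coding. The sketch below is purely illustrative (it is not WhereScape’s product or any real DWA tool’s API; all names are assumptions):

```python
# Illustrative sketch of template-driven warehouse automation:
# table designs are held as metadata, and repetitive DDL is
# generated from a single template, so every table automatically
# follows the same naming and audit-column conventions.
# All names here are hypothetical, not a real product's API.

def generate_ddl(table: dict) -> str:
    """Render a CREATE TABLE statement from a metadata record."""
    cols = [f"    {name} {dtype}" for name, dtype in table["columns"].items()]
    # The template appends the same audit columns to every table,
    # which is where the compliance consistency comes from.
    cols += ["    dss_load_date TIMESTAMP",
             "    dss_record_source VARCHAR(100)"]
    return (f"CREATE TABLE {table['schema']}.{table['name']} (\n"
            + ",\n".join(cols) + "\n);")

# One metadata record describes a staging table; the generator
# does the repetitive work of writing the DDL.
spec = {
    "schema": "stage",
    "name": "customer",
    "columns": {"customer_id": "INTEGER",
                "customer_name": "VARCHAR(200)"},
}
print(generate_ddl(spec))
```

Changing a convention (say, adding an audit column) then means editing one template rather than hundreds of hand-written scripts, which is the discipline the Gartner quote describes.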

Similarly, automation can help organisations manage the increasing levels of complexity that can blight their attempts to maximise the value of their data assets. Each new innovation that opens up access to the right data at the right time can also increase complexity for those tasked with designing the data ecosystem, and many data teams still rely on 1990s-era ETL tools and hand-coding to create and control a modern data fabric. As Gartner puts it in the same report: “Automating these elements’ design plays a critical and essential role in data warehouse modernization and agile data warehousing.”

Data warehouses now also play an important role in many teams’ efforts to implement and optimise DevOps, DataOps and other Agile methodologies. With automation handling the complexity, data teams can focus on strategic goals, such as delivering infrastructure and completing projects to Agile timeframes. Teams that switch to DWA more readily adopt Agile, transformative frameworks such as DevOps or DataOps and, as a result, are in a stronger position to transform the way data is made available to and used by the entire organisation.

Automation can also improve collaboration between IT and the business, and speed up critical processes such as prototyping. Employing data-driven design to enable developers to create prototypes with actual company data, for instance, can demonstrate how requirements will behave in the final data warehouse. As the Gartner report explains: “The data-driven approach focuses on organizing the data models to align them closer to source systems. Business users and developers can collectively look at the data to gather inputs and feedback before creating the model. Using an iterative approach, data warehouse developers can rapidly build several prototypes before implementing the solution that meets the business user’s requirements. The method provides flexibility for deployment as well as management of changes to the data with flexible updates.”

While organisations the world over increase their commitment to becoming data driven, those who also automate key processes across their data warehouse strategy will be well placed to see a rapid return. In doing so, the most successful will benefit from a culture where data enhances their all-round abilities to innovate and deliver on key objectives.

Simon Spring, Account Director EMEA, at WhereScape, joined the company nearly ten years ago and throughout this time has worked effectively with hundreds of organisations looking to utilise data analytics and data warehouse automation to transform their business. 
 
