Storage is central to the data-driven economy. Whether locally on a computer or other device, in a data center or in the cloud: all the data that drives processes, documents, images, videos and more has to be physically stored somewhere. However, storage requirements have evolved enormously since the days of the mainframe.
Modern storage systems must be high-performing, easily scalable and agile, and at the same time must not exceed tight IT budgets. To equip systems optimally for every need, they must above all be flexible: each process gets the storage it needs at the right time, in the right speed class and in the right volume.
The COVID-19 pandemic made this very clear: suddenly, a large part of the workforce was working from home, and processes had to be adapted almost overnight. Companies that had already modernized their infrastructure beforehand had a clear advantage.
Given the number of applications and services used in a company today, SLA-compliant storage must be continuously made available, expanded if necessary and released again at the end of its use. Virtual machines, microservices and containers further increase the complexity of the entire infrastructure; these technologies could not be managed without automation. In the future, this significantly changed requirements profile will also have a decisive impact on storage costs: it will no longer be the hardware, but rather the software for automating management and operations that comes to the fore.
There Is No Way Around Automation
In addition to the ever-increasing flexibility demanded of operations, another factor ensures there is no way around automation: a lack of know-how in internal IT departments. The reason is straightforward: the experienced employees, some of whom have been building and developing the internal systems for decades, are gradually retiring, and a great deal of specialist knowledge leaves with them. This applies particularly to the home-grown, hand-crafted tools that we all know, which can often only be operated by individual specialists.
Historically, a whole zoo of software tools has grown up in most data centers, tools that interact with each other more badly than well. Some were even introduced just to make other tools compatible with one another. In many places the remaining skeleton crews can still keep operations going, but introducing innovative technologies such as microservices or containers is difficult under these conditions, not to mention developing the internal IT strategy further as part of the digital transformation. Usually, the status quo is simply cemented. Sooner or later, there will be no way around automation, especially regarding storage.
Artificial Intelligence In IT Operations
Automation is not a new topic in enterprise IT. Even in the days of mainframe computers, processes were triggered based on time or events, i.e. according to the scheme “When A is completed, start B”. However, such rigid approaches do not make sense for infrastructures that behave dynamically, because manual interventions and readjustments would constantly be required. The key to modern IT automation therefore lies in machine learning and artificial intelligence. The significant challenges in improving the efficiency of IT operations can be broken down into four areas:
- With integrated service management built on shared services, companies can curb the proliferation of IT tools and immediately improve overall operations. With its “Ops Center”, Hitachi offers a platform on which administrators can set up numerous systems using a single configuration tool. All VSP storage systems can be combined into one virtual system, so the available capacity can be expanded (scale-out) up to 70 per cent faster, and of course during operation. In the event of an alert, the uniform interface helps to quickly identify and resolve errors across different systems with a single log-in. In addition, the Ops Center integrates easily with other automation tools via its open REST API (see the sketch after this list).
- Via Ops Center, companies also get a global overview with a central point of control to optimize, plan and intervene in critical incidents. Instead of reacting to incidents, a predictive, proactive approach can be taken that uses machine learning and artificial intelligence to recognize patterns indicating impending errors (illustrated after this list). Hitachi has demonstrated this in a project for British Rail that has been running for several years: thanks to artificial intelligence, the forecasts for the service life of individual parts improve over time, which continuously enhances profitability. A dashboard can provide a real-time overview of the status of all systems. This area can also easily be outsourced to an external service provider, which relieves the IT department significantly.
- With intelligent automation, overloaded employees can be relieved of manual routines, operations can be optimized and risks reduced. Provisioning storage manually is time-consuming, tedious and error-prone, and it becomes even more complex when different storage classes come into play. With the right platform, storage can be classified from bronze (SATA hard drives) to platinum (the fastest flash modules) and provisioned automatically depending on the agreed service level agreement (SLA); a sketch of this tiering logic follows the list. The artificial intelligence in the background ensures that operations run ever more smoothly over time. Using the same principle, a complete, uninterrupted data migration to a new system can be carried out much faster (in seven steps instead of 30).
- Standards-based IT integration orchestrates and accelerates resource delivery. The more interfaces there are in a system, the more integration problems arise. The aim must therefore be to reduce the number of tools as much as possible while relying on a standard such as a REST API. This also enables the integration of external management frameworks and joint development projects in a partner ecosystem.
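The open REST integration mentioned in the first and last points can be made concrete with a short sketch. The host, the `/volumes` endpoint and the token below are hypothetical placeholders, not the actual Ops Center API; the point is simply that plain HTTP and JSON are enough to drive storage management from any automation tool.

```python
import requests

# Hypothetical management endpoint and API token -- placeholders for
# illustration, not a real product's API.
BASE_URL = "https://storage-mgmt.example.com/api/v1"
HEADERS = {"Authorization": "Bearer <token>", "Content-Type": "application/json"}

def provision_volume(size_gib: int, tier: str) -> dict:
    """Request a new volume of the given size in the given storage tier."""
    payload = {"sizeGiB": size_gib, "tier": tier}
    resp = requests.post(f"{BASE_URL}/volumes", json=payload,
                         headers=HEADERS, timeout=30)
    resp.raise_for_status()  # surface HTTP errors instead of continuing silently
    return resp.json()       # e.g. {"id": "vol-42", "status": "provisioning"}

if __name__ == "__main__":
    volume = provision_volume(size_gib=512, tier="gold")
    print(volume)
```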
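The predictive, pattern-based monitoring described in the second point can be illustrated with a deliberately simple statistical baseline: flag a metric as suspicious when it drifts several standard deviations away from its recent history. Real products use far more sophisticated machine learning; the metric and thresholds here are assumptions for illustration only.

```python
from statistics import mean, stdev

def detect_anomalies(samples: list[float], window: int = 20,
                     threshold: float = 3.0) -> list[int]:
    """Return indices where a sample deviates more than `threshold`
    standard deviations from the mean of the preceding window --
    a crude stand-in for a learned failure-precursor detector."""
    flagged = []
    for i in range(window, len(samples)):
        history = samples[i - window:i]
        mu, sigma = mean(history), stdev(history)
        if sigma > 0 and abs(samples[i] - mu) > threshold * sigma:
            flagged.append(i)
    return flagged

# Assumed telemetry: steady disk I/O latency with one suspicious spike.
latency_ms = [2.0 + 0.1 * (i % 5) for i in range(60)]
latency_ms[45] = 9.0
print(detect_anomalies(latency_ms))  # -> [45]
```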
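Finally, the bronze-to-platinum classification from the third point boils down to a simple rule: the agreed SLA determines the storage class, and the platform selects it automatically instead of an administrator doing so by hand. The tier names, media types and latency targets below are assumed values; a real platform would add capacity checks and learned placement on top of this mapping.

```python
# Assumed SLA classes and their storage tiers -- illustrative values only.
SLA_TIERS = {
    "platinum": {"media": "NVMe flash", "max_latency_ms": 1},
    "gold":     {"media": "SAS flash",  "max_latency_ms": 5},
    "silver":   {"media": "SAS HDD",    "max_latency_ms": 15},
    "bronze":   {"media": "SATA HDD",   "max_latency_ms": 50},
}

def tier_for_sla(required_latency_ms: float) -> str:
    """Pick the cheapest tier that still meets the SLA's latency target."""
    # Check from cheapest/slowest to most expensive/fastest media.
    for name in ("bronze", "silver", "gold", "platinum"):
        if SLA_TIERS[name]["max_latency_ms"] <= required_latency_ms:
            return name
    return "platinum"  # the strictest SLAs land on the fastest media

print(tier_for_sla(10))  # -> "gold": cheapest tier meeting a 10 ms target
```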