Advertorial

Huawei’s OceanStor Dorado scoops Interop Tokyo 2020 Grand Prize

16 July 2020, 10:10 | Huawei

Continuation of the article from Part 1

Built-In AI Module Learns I/O Patterns and Optimizes the Prefetch Algorithm to Improve System Performance

Figure 1: AI module + intelligent algorithm (© Huawei)

Read cache is a common acceleration method for a storage system. Data is prefetched from disks and stored in a location that can be accessed more quickly, generally Random Access Memory (RAM). The CPU searches the cache first for the required data; when it is found, the CPU sends it to the front-end interface module, which then delivers the data to the user. The CPU reads from the disks only when it cannot find the required data in the cache. The ideal, highest-performance situation is that every read request finds its data in the read cache. Data, however, is disordered and tasks are random, so the user has no way of knowing which data should be fetched in advance and placed in the read cache.
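To make the hit/miss flow described above concrete, here is a minimal sketch of a read cache with an LRU eviction policy and a prefetch entry point. It is purely illustrative: the names ReadCache, read_block, prefetch, and disk_read are hypothetical and do not reflect Huawei's implementation.

```python
# Minimal, illustrative read cache (not Huawei's implementation).
from collections import OrderedDict

class ReadCache:
    def __init__(self, capacity_blocks, disk_read):
        self.capacity = capacity_blocks
        self.disk_read = disk_read          # fallback: read a block from disk
        self.blocks = OrderedDict()         # address -> data, kept in LRU order

    def read_block(self, lba):
        if lba in self.blocks:              # cache hit: serve from RAM
            self.blocks.move_to_end(lba)
            return self.blocks[lba]
        data = self.disk_read(lba)          # cache miss: go to disk
        self._insert(lba, data)
        return data

    def prefetch(self, lbas):
        """Place predicted blocks into the cache before they are requested."""
        for lba in lbas:
            if lba not in self.blocks:
                self._insert(lba, self.disk_read(lba))

    def _insert(self, lba, data):
        self.blocks[lba] = data
        if len(self.blocks) > self.capacity:
            self.blocks.popitem(last=False)  # evict the least recently used block

# Example wiring with a dummy disk reader:
cache = ReadCache(capacity_blocks=1024, disk_read=lambda lba: b"\x00" * 4096)
```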

To solve this problem, Huawei OceanStor Dorado innovatively uses an AI plug-in to improve prediction accuracy.

How, then, does the AI module help improve prediction accuracy? A storage system receives many pieces of data, called I/Os. Each I/O is sent by a different service and is related to the others not only spatiotemporally but also semantically. To make those relations easier to understand, consider some day-to-day examples. A time relation is that nine o'clock follows eight o'clock; a space relation is that Russia lies north of China; and a semantic relation is that "the world's largest bear" is a highly likely continuation of "The polar bear is". Similar patterns can be found in I/Os, and it is our job to find those patterns and improve the prefetch accuracy.
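As a toy illustration of a spatial pattern, the sketch below checks whether recent accesses advance by a constant stride and, if so, predicts the next addresses to prefetch. The heuristic and the function name predict_next_lbas are assumptions made for illustration only; they are not the AI module's algorithm.

```python
# Toy stride detector: if recent requests advance by a constant step,
# the next addresses can be predicted and handed to the prefetcher.
def predict_next_lbas(recent_lbas, depth=4):
    """Return predicted next addresses if the last accesses form a constant stride."""
    if len(recent_lbas) < 3:
        return []
    strides = [b - a for a, b in zip(recent_lbas[:-1], recent_lbas[1:])]
    if len(set(strides)) == 1 and strides[0] != 0:   # constant, non-zero stride
        last, stride = recent_lbas[-1], strides[0]
        return [last + stride * i for i in range(1, depth + 1)]
    return []                                        # no obvious pattern

# Example: accesses 100, 108, 116 suggest prefetching 124, 132, 140, 148.
print(predict_next_lbas([100, 108, 116]))
```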

The AI module in OceanStor Dorado uses an integrated self-tuning deep learning algorithm, which can quickly analyze and deeply mine all of the I/O data of upper-layer services from the spatiotemporal and semantic perspectives. When an I/O arrives, the chip immediately identifies the data that is about to be accessed and instructs the CPU to fetch that data into the read cache. In addition, it continuously learns from existing data in the background to further improve accuracy. The chip also evaluates key performance indicators such as the prefetch hit rate, waste rate, and latency, and makes adjustments accordingly.
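The evaluation-and-adjustment step can be pictured as a simple feedback loop like the one sketched below, which grows the prefetch window when predictions pay off and shrinks it when prefetched blocks go unread. The thresholds and the adjustment rule are assumptions made for illustration; Huawei's actual tuning logic is not public.

```python
# Illustrative feedback loop in the spirit of the evaluation step described
# above. Thresholds and the doubling/halving rule are assumptions, not
# Huawei's algorithm.
def adjust_prefetch_depth(depth, hits, wasted, issued,
                          min_depth=1, max_depth=64):
    """Grow the prefetch window when predictions hit, shrink it when they are wasted."""
    if issued == 0:
        return depth
    hit_rate = hits / issued
    waste_rate = wasted / issued
    if waste_rate > 0.3:                 # too many prefetched blocks were never read
        depth = max(min_depth, depth // 2)
    elif hit_rate > 0.7:                 # predictions are paying off: prefetch more
        depth = min(max_depth, depth * 2)
    return depth
```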

According to Huawei's tests, the read cache hit ratio of OceanStor Dorado climbs from 19% to 69% with the same bandwidth (16 Gbit/s FC) and under the same test model (random read, I/O block size up to 64 KB).

Huawei OceanStor Dorado All-Flash Storage (© Huawei)

Summary

Like a hard-working student, the built-in AI module of OceanStor Dorado uses every minute to continuously improve storage system performance and the user experience.

Looking back over the past two years, Zhang Peng smiled with relief: "The new-gen OceanStor Dorado is innovative and unique in the industry. I'm very proud."
AI is reinvigorating high-end storage. It is because of this innovation that the Interop review board was wowed by the first AI module in a storage system. OceanStor Dorado has set a new benchmark for integrated intelligence in storage products.

Discover More: OceanStor Dorado
