
Backups with built-in data protection tools

Started by Digitel, Sep 27, 2022, 08:00 AM


Digitel (Topic starter)

The international market for hyperscale data centers (DCs) is expanding by 11% annually, driven by enterprises, connected devices, and users who continuously generate new data. As the market grows, demands for storage reliability and data availability are rising just as quickly. The key factor behind both is the storage system, and this is not a question of equipment type or brand. In this topic, we will look at the three types of storage - block, file, and object - and what each is suited for.



Block-level storage splits data into fixed-size blocks, each with its own address but without metadata. Such storage systems underpin many applications, including most relational databases such as Oracle and DB2, and access to block hosts is organized over a SAN using protocols such as Fibre Channel, iSCSI, or AoE. File storage, on the other hand, such as a NAS, stores data as files and folders that clients access through a hierarchical structure.
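To make the difference concrete, here is a rough Python sketch of the two access models; the device node and the NAS mount point below are placeholders, not a recommendation of any particular setup:

```python
import os

BLOCK_SIZE = 4096  # illustrative block size


def read_block(device_path: str, block_number: int) -> bytes:
    """Block-level access: address data by block number on a raw device
    (e.g. a hypothetical /dev/sdb exported by an iSCSI target)."""
    fd = os.open(device_path, os.O_RDONLY)
    try:
        os.lseek(fd, block_number * BLOCK_SIZE, os.SEEK_SET)
        return os.read(fd, BLOCK_SIZE)
    finally:
        os.close(fd)


def read_file(path: str) -> bytes:
    """File-level access: the filesystem (local or NAS) resolves the
    hierarchical path and hands back the whole file."""
    with open(path, "rb") as f:
        return f.read()


# Usage (both paths are placeholders):
# read_block("/dev/sdb", 10)                 # raw block 10; needs a real device and privileges
# read_file("/mnt/nas/reports/q3-2022.txt")  # a file on a hypothetical NFS/SMB mount
```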

However, reducing SAN and NAS to "network drives" versus "a network file system" is an artificial distinction. The boundary between them blurred with the introduction of the iSCSI protocol, when NetApp provided iSCSI on its NAS systems and EMC added NAS gateways to its SAN arrays.

Object storage, unlike block and file storage, has no file system: it uses a flat address space with a unique identifier for each object. Most object repositories allow users or clients to attach metadata to objects and to aggregate objects into containers. This makes object storage well suited to large amounts of unstructured data, since the metadata attached to each object can be extended to describe it.
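As a rough illustration, here is what attaching custom metadata looks like through the S3 API with boto3; the bucket name, key, and metadata fields are made up for the example, and the bucket is assumed to already exist:

```python
import boto3

# Credentials and region come from the environment; "backups" is an
# assumed, pre-existing bucket.
s3 = boto3.client("s3")

# In a flat address space the key is just a unique identifier, not a path
# the storage system interprets; custom metadata travels with the object.
with open("db-dump-001.gz", "rb") as body:
    s3.put_object(
        Bucket="backups",
        Key="2022-09-27/db-dump-001",
        Body=body,
        Metadata={"source-host": "db01", "retention-days": "90"},
    )

# The metadata comes back with the object, so applications can filter or
# route on it without a separate catalog.
head = s3.head_object(Bucket="backups", Key="2022-09-27/db-dump-001")
print(head["Metadata"])
```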

The three types are applied differently. Block storage is often used for virtualization and databases, but it is costly and complex to manage. File storage is limited in metadata, which has to be handled at the application and database level. Object storage suits large volumes of unstructured data but requires careful planning and an understanding of data management principles.

File storage systems offer simplicity in assigning file names and metadata, making them cheaper than block storage for small amounts of data. However, as data accumulates, finding the necessary information becomes harder, so file systems are a poor fit for data centers that prioritize speed.

Object storage scales well, making it ideal for handling growing amounts of data, with many cloud services, including Facebook and Dropbox, using object storage as the standard. Object storage's flat address space makes retrieving data from local or cloud servers equally easy, and it is better equipped than file systems to store and protect unstructured data.
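To show what the flat address space means in practice, here is a small boto3 sketch: "folders" are just key prefixes, and the same call works against a local S3-compatible endpoint or a cloud region. The bucket name and the local MinIO endpoint are assumptions for the example:

```python
import boto3

# "media-archive" and the local MinIO endpoint are placeholders; pointing the
# client at a cloud region instead requires no change to the listing logic.
s3 = boto3.client("s3", endpoint_url="http://localhost:9000")

# There is no real directory tree: "videos/2022/" is simply a key prefix
# in a flat keyspace.
resp = s3.list_objects_v2(Bucket="media-archive", Prefix="videos/2022/")
for obj in resp.get("Contents", []):
    print(obj["Key"], obj["Size"])
```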

For instance, Netflix and Spotify use object storage to work with Big Data and media. Additionally, object storage's built-in data protection tools enable the creation of reliable backup centers using geographically distributed copies. However, some operations, such as working with transactional workloads, may require more efficient block storage solutions, and integrating object storage may require changing the logic of applications and workflows.
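On the backup point, the geographically distributed copies are usually set up declaratively. Here is a hedged boto3 sketch of S3 cross-region replication; the bucket names, replica ARN, and IAM role are assumptions, and both buckets must already exist with versioning enabled:

```python
import boto3

s3 = boto3.client("s3")

# Replication requires versioning on the source (and destination) bucket.
s3.put_bucket_versioning(
    Bucket="backups-primary-us",
    VersioningConfiguration={"Status": "Enabled"},
)

# Replicate every object to a bucket in another region, giving a
# geographically distributed copy for backup/DR purposes.
s3.put_bucket_replication(
    Bucket="backups-primary-us",
    ReplicationConfiguration={
        "Role": "arn:aws:iam::123456789012:role/replication-role",  # hypothetical role
        "Rules": [
            {
                "ID": "geo-backup",
                "Status": "Enabled",
                "Prefix": "",  # empty prefix = the whole bucket
                "Destination": {"Bucket": "arn:aws:s3:::backups-replica-eu"},
            }
        ],
    },
)
```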


fix.97

Regarding your discussion of AoE, are you utilizing it or simply considering it as an option? If you have implemented it, can you share some thoughts on your experience with it? I myself have not had the opportunity to explore this particular solution.

It is always interesting to learn about new and innovative technologies in the IT industry. Solutions like AoE may offer unique advantages for certain use cases and applications, and understanding these options expands our range of possibilities when designing and implementing solutions.

sbglobal

Cloud storage offers the primary benefit of flexible performance scaling in both upward and downward directions. This means that when workloads demand additional computing power, we can add resources to our server (and pay accordingly), and scale back down during lower demand periods.

For example, a platform such as Azure DWH lets users scale Compute and Storage separately, so Compute can be paused entirely outside business hours while the data stays in place, resulting in significant cost savings.
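As a rough sketch of how that pause/resume cycle can be automated (the resource names are placeholders, and the `az sql dw` commands assume the Azure CLI is installed and logged in):

```python
import subprocess

# Placeholder names for a dedicated SQL pool (formerly SQL DW).
RESOURCE_GROUP = "analytics-rg"
SERVER = "dwh-server"
POOL = "dwh-pool"


def set_compute(action: str) -> None:
    """action is "pause" or "resume"; pausing releases Compute (and its cost)
    while the data remains in Storage."""
    subprocess.run(
        [
            "az", "sql", "dw", action,
            "--resource-group", RESOURCE_GROUP,
            "--server", SERVER,
            "--name", POOL,
        ],
        check=True,
    )


# e.g. a scheduler calls set_compute("pause") at 20:00 and
# set_compute("resume") at 08:00 on business days.
```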

The ability to dynamically adjust resource usage is a key advantage of cloud storage models, making them particularly attractive for businesses with fluctuating processing demands or those that prioritize cost efficiency. Additionally, innovative optimization tools and services have emerged to help organizations maximize cloud storage resources and minimize costs.

J.Bhp

The expansion of hyperscale data centers and the continuous generation of new data by enterprises, connected devices, and users necessitate a deep understanding of storage reliability and data availability.

Block-level storage, with its capability to split data into fixed-size blocks without metadata, is essential for applications like relational databases such as Oracle and DB2. As an engineer, it's important to consider the protocols used to access block storage hosts over a SAN, such as Fibre Channel, iSCSI, or AoE, and to ensure seamless integration with the existing infrastructure.

File storage, on the other hand, operates at a higher level with data stored as files and folders in a hierarchical structure. Incorporating NAS into a storage solution requires careful consideration of client interfaces and the ability to access data through a network file system. Understanding the evolution of protocols like iSCSI and the blurring of boundaries between SAN and NAS is critical in designing adaptable and future-proof storage solutions.

Object storage, with a unique identifier for each object and the ability to attach metadata, is suitable for storing large amounts of unstructured data. Implementing object storage involves careful planning and an understanding of data management principles to ensure scalability and efficient handling of vast quantities of data. The fact that cloud services like Facebook and Dropbox use object storage as the standard highlights its relevance and applicability in modern data center environments.

I understand the diverse storage needs of different clients and industries, and I can tailor storage solutions based on the specific requirements, whether it's virtualization, database management, or handling large volumes of unstructured data. Integrating object storage into existing workflows, and ensuring efficient support for transactional workloads, requires a deep understanding of storage technologies and their implications on application logic and data workflows. Additionally, I am vigilant about utilizing object storage's built-in data protection tools to ensure reliable backup centers with geographically distributed copies, catering to the critical data protection needs of clients in diverse industries.

hrin

Companies often rush into adopting object storage, thinking it's the magic bullet for all data challenges. However, the complexity of managing unstructured data and the need for proper data governance can't be overlooked. Block and file storage still have their place, especially for businesses with specific transactional needs.

