Introduction to Next-Gen Dell EMC Unity

Posted by admin | in category: Articles

Roughly a year after the original launch, Dell EMC has added next-generation Unity storage systems to its existing Unity portfolio. The new models are the Unity 350F, 450F, 550F, and 650F. In a nutshell, this year's Dell EMC Unity release brings improvements to both hardware and software, i.e. to UnityOE. This post examines the new hardware and the features introduced this year.

Gen2 vs Gen1

In general, the second-generation Unity systems have more CPU cores and more memory than the hardware introduced last year. Does that translate into better performance? Yes: the maximum IOPS the new generation can handle is somewhat higher than that of the older generation. Another important point is that Dell EMC is going all in on all-flash: this year's Unity product line introduces no new hybrid models (spinning disks + SSDs). The new Unity 350F, 450F, 550F, and 650F models are all-flash and do not support spinning disks. The table below summarizes the improvements.

| | Unity 350F | Unity 450F | Unity 550F | Unity 650F |
|---|---|---|---|---|
| Processor | Intel E5-2603 v4, 6-core, 1.7 GHz | Intel E5-2630 v4, 10-core, 2.2 GHz | Intel E5-2660 v4, 14-core, 2.0 GHz | Intel E5-2680 v4, 14-core, 2.4 GHz |
| Memory (per SP) | 48 GB (3x 16 GB DIMMs) | 64 GB (4x 16 GB DIMMs) | 128 GB (4x 32 GB DIMMs) | 256 GB (4x 64 GB DIMMs) |
| Minimum/maximum drives | 6/150 | 6/250 | 6/500 | 6/1000 |
| Maximum raw capacity* | 2.4 PB | 4.0 PB | 8.0 PB | 16.0 PB ** |
| Max I/O modules | 4 | 4 | 4 | 4 |
| Max LUN size | 256 TB | 256 TB | 256 TB | 256 TB |
| Max LUNs per array | 1,000 | 1,500 | 2,000 | 4,000 |
| Max file system size | 256 TB | 256 TB | 256 TB | 256 TB |

*Maximum raw capacity may vary.

**Unity 650F raw capacity is a 2x increase when compared with Unity 600F.
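As a rough sanity check on the capacity figures above, maximum raw capacity is approximately the maximum drive count times the largest SSD available. The 15.36 TB drive size below is my assumption, not a figure from the post; the results land slightly below the official maxima, which suggests some marketing rounding:

```python
# Back-of-envelope raw-capacity check for the table above.
# Assumption: 15.36 TB SSDs, the largest commonly shipping at the time.
DRIVE_TB = 15.36

def max_raw_pb(max_drives: int) -> float:
    """Raw capacity in decimal PB for a fully populated array."""
    return max_drives * DRIVE_TB / 1000

for model, drives in [("350F", 150), ("450F", 250), ("550F", 500), ("650F", 1000)]:
    print(f"Unity {model}: ~{max_raw_pb(drives):.1f} PB raw")
```

A fully populated 650F with 1000 such drives works out to about 15.4 PB, close to the quoted 16 PB maximum.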

The hardware looks the same; the aesthetics have not changed. On the inside, however, much has changed with the introduction of Unity OE 4.2. Before we jump into what's new in the software: Dell EMC has also introduced an 80-drive DAE this year. This 80-drive DAE is compatible with hardware of both generations: it works with Gen1 hybrid and all-flash arrays as well as Gen2 all-flash arrays.

80 Drive DAE

Photo Credit: Dell EMC

The 80-drive DAE is a dense enclosure that accommodates eighty 3.5″ drives; the drives used in this DAE cannot be used in the fifteen-drive DAE. The new 80-drive DAE can connect to Unity hardware of either generation. The backend connection can be x4-lane or x8-lane SAS.
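For a rough sense of what the two backend options mean in bandwidth terms, the arithmetic is simple. Note the 12 Gb/s per-lane rate is my assumption (the post does not state the SAS generation), and encoding overhead is ignored:

```python
# Back-of-envelope backend bandwidth for an x4 vs x8 SAS connection.
# Assumption: SAS-3 at 12 Gb/s per lane (not stated in the post);
# encoding overhead is ignored, so these are raw upper bounds.
LANE_GBPS = 12  # gigabits per second per SAS lane

def backend_gbytes_per_sec(lanes: int) -> float:
    """Aggregate raw bandwidth in GB/s across all lanes."""
    return lanes * LANE_GBPS / 8

print(f"x4 lanes: {backend_gbytes_per_sec(4):.0f} GB/s raw")
print(f"x8 lanes: {backend_gbytes_per_sec(8):.0f} GB/s raw")
```

Doubling the lane count doubles the ceiling, which is why a dense 80-drive shelf benefits from the x8 option.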

If you would like to read about the Unity DPE, the other DAE types, and the internal components of the Unity DPE, check out my post on Unity hardware architecture.

New features in Unity OE 4.2

The Unity OE 4.2 release is this year's major update; here is a list of the most notable new features:

Dynamic Pools
Thin Clones
Enhancements to Snapshots
Improvements to system limits
Inline Compression for File
SMB migration from VNX to Unity

I will be publishing separate posts detailing the most important features of Unity OE release 4.2. Stay tuned!

0 comments | September 26, 2018

Installing and Configuring the EMC Isilon Simulator

Posted by admin | in category: Articles

Companies' ever-growing need for storage space has given rise to many new technologies and solutions. One of them is EMC Isilon. In a huge simplification, we can say that Isilon is a simple NAS, but it is not just an ordinary disk array: its beating heart is OneFS, which serves as its operating system. The architecture is based on clustering, with a minimum of three nodes to start. OneFS provides fully automatic configuration (cluster initialization takes only a few minutes) and distributes data across all nodes. This solution has many advantages, the most important being the elimination of any single point of failure.
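To illustrate the idea of spreading data across all nodes, here is a toy round-robin placement. This is purely illustrative: OneFS actually lays out data with erasure-coded striping (FlexProtect), not simple round-robin:

```python
# Toy illustration of distributing file blocks across cluster nodes.
# Purely illustrative round-robin placement; OneFS really uses
# erasure-coded striping (FlexProtect), which also adds parity.
def place_blocks(num_blocks: int, nodes: list) -> dict:
    """Assign each block index to a node, round-robin."""
    layout = {n: [] for n in nodes}
    for block in range(num_blocks):
        layout[nodes[block % len(nodes)]].append(block)
    return layout

layout = place_blocks(10, ["node1", "node2", "node3"])
print(layout)  # each node ends up holding roughly a third of the blocks
```

The point of the illustration: every node holds a share of every file, so losing one node never loses the only copy of anything.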

Node fails? Connect a new one and the rest happens by itself. Another advantage is scalability, which is virtually unlimited: connect another node to the cluster and the available space grows automatically, with no configuration required. Add to this deduplication, compression, data protection, and several other services. One should also mention the huge performance of this solution: each additional node increases not only the available space but also performance (the manufacturer speaks of explosive growth in performance and capacity). And the last advantage: the API. OneFS provides a REST API through which all file manipulations can be performed programmatically. EMC also provides a full-featured Isilon simulator! "Simulator" is the wrong word; it is a fully functional Isilon, only virtualized, so its performance will be somewhat lower. I strongly encourage you to test it. EMC Isilon can be downloaded here (requires an EMC account; this is the version for VMware Workstation/Player) or directly from me (version 7.1.1, OVA file). I write here about the simulator, but the hardware configuration of a cluster looks almost the same!
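As a hedged sketch of the REST API just mentioned: OneFS exposes a session endpoint on port 8080 (the same port as the web console used later in this post). The code below only assembles the request URL and JSON body; the credentials are placeholders, and you should verify the endpoint against the OneFS Platform API reference for your version:

```python
# Sketch of preparing an OneFS REST API session request.
# The /session/1/session endpoint is from the OneFS Platform API docs;
# credentials here are placeholders, not values from the post.
import json

def build_session_request(host: str, username: str, password: str):
    """Return (url, json_body) for creating an OneFS API session."""
    url = f"https://{host}:8080/session/1/session"
    body = {
        "username": username,
        "password": password,
        # "platform" = configuration API, "namespace" = file access API
        "services": ["platform", "namespace"],
    }
    return url, json.dumps(body)

url, body = build_session_request("172.18.28.91", "root", "changeme")
print(url)
```

POSTing that body returns session cookies that authenticate subsequent API calls, so scripted file and configuration operations need to log in only once.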

isi1

The simulator has one capability the hardware version lacks: you can install a single-node version (but we will install three). To begin, deploy the Isilon appliance (OVA), or import the machine into VMware Player (VMX file). Power it on, wait, and start answering the questions:

isi3

isi4

isi5

We will not need SupportIQ; there is no EMC support for the simulator:

isi6

Configure the internal network (int-a), over which the nodes communicate (no gateway needed); any addressing will do:

isi9

Configure the external network; these addresses will be reachable from outside, along with the management service (all at once, as befits a cluster). Isilon does not distinguish between addresses on the first or second node; they are all equivalent:

isi10

Configure our DNS servers:

isi12

We choose how new nodes will be joined. In a hardware cluster this happens automatically over the InfiniBand backend; here we add nodes manually:

isi13

Then we set the date and time zone; do nothing here and enter the correct values later in the web interface. Now configure the SmartConnect service address. This is an important step, so do not skip it (although it can also be defined later in the web interface). This address is used by external EMC services, such as ViPR, to communicate with the cluster:

isi11

A summary of our configuration; type yes and wait for the cluster configuration to finish:

isi14

This message indicates that everything is OK. We should also be able to ping the cluster and service addresses. Go to the web console (in my case https://172.18.28.91:8080/):

isi15

isi17

Since subsequent nodes pull their configuration from the first, let's finish the basic configuration before adding them. First of all, set the time correctly (and the NTP server):

isi16

If we have our own Active Directory, we can immediately add the cluster to the AD. This allows us to export the network shares in accordance with the privileges of the AD:

isi18
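Shares like these can also be driven through the REST API rather than the web console. The sketch below only assembles the request for creating an SMB share; the endpoint path is from the OneFS Platform API, while the share name and path are made-up placeholders:

```python
# Sketch: building a "create SMB share" request for the OneFS Platform API.
# The /platform/1/protocols/smb/shares endpoint is from the OneFS API docs;
# the share name and path below are hypothetical placeholders.
import json

def build_smb_share_request(host: str, name: str, path: str):
    """Return (url, json_body) for a POST that creates an SMB share."""
    url = f"https://{host}:8080/platform/1/protocols/smb/shares"
    body = {"name": name, "path": path}  # path must live under /ifs
    return url, json.dumps(body)

url, body = build_smb_share_request("172.18.28.91", "projects", "/ifs/data/projects")
print(url)
```

This is handy when provisioning many shares, since the same loop can create dozens of them with consistent settings.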

The management interface itself is very simple (a OneFS advantage); it comes down to pairing privileges with shares. We create an Access Zone with precisely defined privileges (local, AD, NIS, etc.) and combine it with shares (e.g. SMB or NFS). The file system is a single space; we have no influence over that. Now we can add another node. The procedure is similar: after deploying the appliance and powering it on, select 2:

isi30

The new node detects our cluster and joins it:

isi31

The whole procedure takes a few minutes. In the same way, we add a third node, and as a result we have a properly configured cluster.

isi32

Finally, a few words about performance. In a virtual deployment, performance depends on where the nodes are sited (virtual or physical ESXi) and what drives are attached. At the moment we are preparing a physical ESXi test cluster with plenty of internal drives. Once everything is ready, I will try to run the appropriate tests and post a few charts showing a virtual Isilon built on decent hardware. A hardware EMC Isilon cluster has phenomenal performance; the following graphs come from a synthetic test and meter. 8k file writes:

8k_RW

When reading, we reach 900 Mb/s. The test was performed on a virtual machine residing on an NFS share, without any advanced tricks or optimization. Note the minimal CPU load:

100R_I

Copying a single large file:

ii

A chart (perhaps a bit cluttered) from many hours of testing: reading and writing 50,000 files of variable size. The data is given in MB/s (the test was made with our own software):

isi21

There are two conclusions. First, EMC Isilon packs real power: almost 300 MB/s from a plain, clustered NAS. Second, it is possible to bog Isilon down quite a bit (but the average remains very good). The graph was made on an EMC demo cluster consisting of three nodes. Now imagine what happens at 9, 18, or 36 nodes…
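For a rough feel of that closing thought, here is a naive linear projection from the measured ~300 MB/s on three nodes. Real scaling is workload-dependent and rarely perfectly linear; this is just the arithmetic behind the rhetorical question:

```python
# Naive linear projection of aggregate throughput as the cluster grows.
# Baseline: ~300 MB/s on 3 nodes (measured in the post); perfectly linear
# scaling is an assumption made only for illustration.
BASELINE_MBPS = 300
BASELINE_NODES = 3

def projected_mbps(nodes: int) -> float:
    """Throughput if performance scaled perfectly linearly with node count."""
    return BASELINE_MBPS * nodes / BASELINE_NODES

for n in (9, 18, 36):
    print(f"{n} nodes: ~{projected_mbps(n):.0f} MB/s")
```

Even with imperfect scaling, the scale-out model means each node added contributes controllers, cache, and spindles, not just capacity.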

0 comments | January 13, 2018
