TCO is important, but you have to review more than just cost. In terms of additional costs beyond the standard licensing fees, you have to run instances in Azure: virtual machines and disks. You still have to pay for the Azure disks, and for Blob Storage if you're using tiering. It's also important to know your network bandwidth. That was the most complicated part of our project: figuring out how much data would be streamed out of our data center into the cloud and how much data would have to be sent back into our data center. It's more challenging than if you have a customer who is running only in Azure. It can be expensive if you don't keep an eye on it.

Choose your disk type properly; I think that is a good idea. Go with the slowest, cheapest disk you can, and if you need bigger, faster ones, then go for them.

If you're using the PayGo model, then the costs are just the normal prices on the Microsoft page. If you're using Bring Your Own License, which is what we're doing, then you get together with your sales contact at NetApp and start figuring out what price is best, in the end, for your company. We have an Enterprise Agreement, or something similar to that. The only cost savings we see is against having to buy physical hardware, although there is also potential to save money by moving things off to object storage.

On AWS, NetApp is licensed per filer, but there are additional running costs that are paid to AWS. You pay AWS's hosting fee for an EC2 instance, and each of the disks within the NetApp is EBS storage, which you also pay AWS for. NetApp has a variety of license schemes. The one we've gone for, where we pay NetApp once a year, is called the Bring Your Own License scheme. There is also a by-the-hour or by-the-month basis from AWS, and you can get it that way and be billed through AWS, but you may not get the same level of discounts that you would if you were dealing with NetApp directly. If you are committed to having a client filer for an extended period, then go with the NetApp licensing model rather than the AWS-provisioned one. Ultimately, the more data you store, the more it costs you, because you're paying AWS for the capacity.
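To make these line items concrete, here is a minimal sketch of a monthly cost model that adds up the components the reviewers mention: the instance, the disks, object-storage tiering, egress back to the data center, and the license fee. Every rate and quantity in it is a hypothetical placeholder, not an actual Azure, AWS, or NetApp price.

```python
# Rough monthly cost model for a Cloud Volumes ONTAP deployment.
# All rates below are illustrative placeholders, NOT real Azure, AWS,
# or NetApp prices; substitute figures from your own agreement.

VM_HOURS_PER_MONTH = 730  # average hours in a month

def monthly_cost(
    vm_rate_per_hour: float,    # the instance the appliance runs on
    disk_tb: float,             # provisioned managed-disk/EBS capacity
    disk_rate_per_tb: float,
    object_tb: float,           # data tiered to Blob/object storage
    object_rate_per_tb: float,
    egress_tb: float,           # data streamed back to the data center
    egress_rate_per_tb: float,
    license_fee: float,         # monthly share of the BYOL or PayGo charge
) -> float:
    """Add up the cost components mentioned in the reviews."""
    compute = vm_rate_per_hour * VM_HOURS_PER_MONTH
    storage = disk_tb * disk_rate_per_tb + object_tb * object_rate_per_tb
    network = egress_tb * egress_rate_per_tb  # ingress is typically free
    return compute + storage + network + license_fee

# Made-up example: mid-size VM, 50 TB of disks, 100 TB tiered to object
# storage, 5 TB/month flowing back on premises, and a flat license share.
print(f"${monthly_cost(1.50, 50, 75.0, 100, 20.0, 5, 85.0, 2000.0):,.2f}")
```

As the reviewer above notes, the egress term is usually the hardest of these to estimate up front.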
On licensing, I would like to see some more flexibility, because you can't remove disks that you added from Azure; you would need to delete a whole disk group. When you have highly utilized Cloud Volumes ONTAP systems, you can get into a situation where you can't remove disks. This is something that I run into, so you need some flexibility with the licensing. NetApp could perhaps allow temporary bursts of capacity beyond the 368 terabytes. For example, if I'm rearranging my disk groups or disk aggregates, I could add to the existing capacity and move my data around within the system to optimize capacity, costs, and performance. After that, I could migrate off the set of disks the appliance is currently using, move the data around, and delete the original source, while still ending up under the 368 terabyte capacity. However, to do that data movement, a couple of sets of disks have to be assigned at the same time, and you might temporarily exceed the 368 terabyte limit. That is something that could be improved: if you're licensed for 368 terabytes, you should be able to use all 368 terabytes, and in keeping with the elastic nature and flexibility of the cloud, some bursting of that 368 terabyte license capacity should be allowed.

Some flexibility around the licensing model would help. The product is licensed based on capacity, and the largest capacity license that you can buy is 368 terabytes. At this point, NetApp is addressing some people's concerns around this: you can stack licenses, e.g., two, three, or more 368 terabyte licenses.
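As a rough illustration of the headroom problem described above, here is a sketch that checks whether a disk-group migration would fit within licensed capacity. The 368 terabyte cap per license and the idea of stacking licenses come from the reviews; the burst allowance is the hypothetical flexibility being asked for, not an existing NetApp feature.

```python
# Sketch of the licensing headroom problem described above. The 368 TB
# cap and license stacking come from the reviews; the burst_fraction is
# the hypothetical flexibility the reviewer asks for, not a real feature.

LICENSE_CAP_TB = 368

def allowed_capacity_tb(stacked_licenses: int, burst_fraction: float = 0.0) -> float:
    """Usable capacity for N stacked licenses, plus an optional temporary burst."""
    return stacked_licenses * LICENSE_CAP_TB * (1.0 + burst_fraction)

def can_migrate(current_tb: float, staging_tb: float,
                stacked_licenses: int = 1, burst_fraction: float = 0.0) -> bool:
    """During an aggregate migration the old and the staging disk groups
    exist at the same time, so peak usage is their sum."""
    peak = current_tb + staging_tb
    return peak <= allowed_capacity_tb(stacked_licenses, burst_fraction)

# 300 TB in use plus a 100 TB staging disk group breaks a single license...
print(can_migrate(300, 100))                        # False: 400 > 368
# ...but would fit with a temporary 10% burst, or with a second license.
print(can_migrate(300, 100, burst_fraction=0.10))   # True: 400 <= 404.8
print(can_migrate(300, 100, stacked_licenses=2))    # True: 400 <= 736
```

The point of the sketch is that peak usage during a migration is the sum of the old and new disk groups, which is exactly where a temporary burst, or a second stacked license, would help.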