Behold, the death of the HDD is upon us. One tiny chip packs 512 GB of SSD at extremely fast speeds.

PM971-NVMe
Capacity: 128 GB / 256 GB / 512 GB
Form Factor: BGA, 20 mm × 16 mm × 1.5 mm
Controller: Samsung Photon
Interface: PCIe 3.0 x2
Protocol: NVMe
DRAM: 512 MB
NAND: Samsung 48-layer MLC V-NAND
Sequential Read: 1500 MB/s
Sequential Write: 900 MB/s (with TurboWrite)
4KB Random Read (QD32): 190K IOPS
4KB Random Write (QD32): 150K IOPS


news.samsung.com/global/samsung-mass-producing-industrys-first-512-gigabyte-nvme-ssd-in-a-single-bga-package-for-more-flexibility-in-computing-device-design


While the price will be high at first, they'll lower it over time and totally cannibalize what's left of the HDD market. This thing probably has less actual silicon in it than a modern HDD. I mean seriously, producing these things probably has an incremental cost of only a couple of dollars (in other words, the cost once the capital investments are paid off).

Now I just need a 16-lane PCIe JBOD card made of these things for that sweet, sweet 12 GB/s transfer rate.
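
Rough math for that card, as a sketch: assume eight of these packages, each on its own PCIe 3.0 x2 link (8 × 2 = 16 lanes), striped together. The eight-package layout is my assumption, not anything Samsung announced.

# Back-of-the-envelope bandwidth for a hypothetical 8-package PM971 JBOD card.
# The 8-package layout is an assumption; per-chip numbers are from the spec above.

packages = 8                    # 8 chips x PCIe 3.0 x2 = x16 lanes total
seq_read_per_chip_mb = 1500     # MB/s sequential read per PM971
lane_rate_gt = 8.0              # PCIe 3.0: 8 GT/s per lane
encoding = 128 / 130            # 128b/130b line encoding

drive_limited = packages * seq_read_per_chip_mb / 1000        # GB/s
link_limited = packages * 2 * lane_rate_gt * encoding / 8     # GB/s

print(f"drive-limited: {drive_limited:.1f} GB/s")   # ~12.0 GB/s
print(f"link-limited:  {link_limited:.1f} GB/s")    # ~15.8 GB/s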

Other urls found in this thread:

gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions
veracrypt.codeplex.com/wikipage?title=Wear-Leveling
veracrypt.codeplex.com/wikipage?title=Trim Operation
veracrypt.codeplex.com/wikipage?title=Reallocated Sectors
usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf

There are HDDs at 8 TB and even more; I doubt SSDs will reach that capacity for the same price anytime soon.
I'd like to be hyped for them to do so, but at this point it seems like every advance in hardware just encourages coders and (((proprietary software companies))) to produce more bloated programs.

So how does the lifespan compare to that of your current SSD?

HDDs will exist until a better long-term storage tech comes along, like graphene- or quartz-based shit.

SSDs are medium-term storage: they can lose data if left unpowered for too long, and they only have a limited number of writes.

As long as Google/the NSA/whoever wants long-term backups in their datacenters, HDDs will continue to have a market share.

Pioneers of 3D flash SSD design say that raw 3D NAND flash endurance is better.

How much better?

3x to 4x better than 2D at the same line geometries - Dave Merry, founder of industrial SSD company FMJ Storage, told me in March 2014, based on his early-access characterization research.

Part of 3D NAND's better endurance is due to more expensive substrate (insulating) materials. But another factor - explained by Samsung in 2015 - is that the charge-trap design (compared to floating gate) works with a lower write-pulse voltage.

Also, how many fucking PB of writes do you actually need to do each year? I mean, yeah, it doesn't fit all use cases, but it can fit most.
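
For a sense of scale, here's a rough endurance estimate. It's only a sketch: the P/E cycle count and write amplification below are generic assumptions, not Samsung's published numbers.

# Back-of-the-envelope endurance for a 512 GB drive.
# P/E cycles and write amplification are assumed, not vendor-published figures.

capacity_gb = 512
pe_cycles = 3000            # assumed P/E cycles for 3D NAND of this class
write_amplification = 2.0   # assumed average write amplification

tbw = capacity_gb * pe_cycles / write_amplification / 1000   # terabytes written
print(f"~{tbw:.0f} TB of total writes")                      # ~768 TB

# Even at 100 GB of host writes per day, that wear budget lasts decades:
years = tbw * 1000 / 100 / 365
print(f"~{years:.0f} years at 100 GB/day")                   # ~21 years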

Caching databases like the ones Google uses need a lot of writes; I'd think that would wear SSDs out pretty quickly.

samefagging

Also don't forget things like AWS, which has thousands of ephemeral VPS instances created daily, creating and storing who knows how many terabytes a day. I can guarantee Amazon doesn't use SSDs for most AWS instances.

They do use SSDs for specialty cases, but that just further implies that magnetic storage is still the most cost-effective option for them.

So they just built it out of NAND gates at the silicon level? Awesome.

I heard all silicon simplifies everything down to NAND because it's the smallest gate to etch (or whatever they do to imprint the circuit).

You're absolutely right, but it'd be more correct to say it's the other way around: NAND gates are the smallest because they're used for everything. Every logic gate can be made out of NAND gates (or NOR gates), and while using multiple gates for one logical operation does mean a bit of overhead, it pays dividends when you only have to focus on miniaturising one part instead of several different ones.

A basic two-input NAND or NOR takes only 4 transistors in CMOS (AND and OR each need an extra inverter), and only NAND and NOR are functionally complete (i.e., you can make all the other gates out of NANDs alone, or NORs alone).

However, when you design an IC, most of the time you don't bother making everything out of NANDs or NORs to get the desired logic; you design specific logic blocks instead because they take fewer transistors. The difference is with things like memory, where the same block repeats over and over, so it makes sense to use the simplest logic gate.
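
A quick sketch of that functional completeness, with NAND treated as the only primitive (illustrative Python, not anything a chip actually runs):

# Functional completeness of NAND: every other basic gate rebuilt from it.

def nand(a: int, b: int) -> int:
    return 0 if (a and b) else 1

def not_(a: int) -> int:
    return nand(a, a)

def and_(a: int, b: int) -> int:
    return not_(nand(a, b))

def or_(a: int, b: int) -> int:
    return nand(not_(a), not_(b))

def xor(a: int, b: int) -> int:
    n = nand(a, b)
    return nand(nand(a, n), nand(b, n))

# Truth-table check: every derived gate matches the native operator.
for a in (0, 1):
    for b in (0, 1):
        assert and_(a, b) == (a & b)
        assert or_(a, b) == (a | b)
        assert xor(a, b) == (a ^ b)
print("all gates reproduced from NAND alone")

Note the overhead mentioned above: XOR ends up as four NANDs (16 transistors), where a hand-designed CMOS XOR cell typically uses fewer, which is exactly why designers only pay that price for regular, repeating structures.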

These will be put onto standard 2.5-inch drives, right?

If they packed the PCB with chips of this size plus a dedicated controller, it could fit upwards of 10-13 TB.

But they do. Magnetic storage on AWS is used almost exclusively by people who haven't updated their instances. SSDs have been the default on AWS for years; it even warns you about accidentally using magnetic disks. Many of the "newer" features like provisioned IOPS aren't even available on magnetic.

Do none of you manage servers? How can you be so behind? Today we're well into the SDN and "personal cloud" transition, by the way. Try to keep up.

Disinfo agent spotted. Datacentres and clouds use RAID.
Show me a single SSD that lasts more than 10 years and supports encryption well.

You clueless faggot, this has nothing to do with using or not using RAID, and AWS considers non-SSD backing for EC2 to be DEPRECATED. They don't even support it with the features added in the last several years!
Jesus fucking christ, you're a moron. First of all, the server magnetic disks you're actually using (not letting sit idle) only last a couple of years on average, even if the workload is mainly reads. And what does encryption have to do with anything? What would you even hope to accomplish by encrypting an always-mounted drive on an always-running server in a physically secure datacenter, on hardware controlled by a third party? How fucking stupid can you be?

? I was referring to general SSD usage. Can they last 10 years? No. Does encryption work on them? Not really. So what if those aren't a datacentre's concerns; they're still extremely important for personal use and storage, and SSDs simply lack the flexibility.
Also, the design of these tiny complex chips is utter shit; chips that small and thin don't last in pressured environments and can never be repaired.
thats_bullshit_but_i_believe_it.png
I have thirty-five years of experience in information security, kid, and HDDs are not going away anytime soon - in the same way master tapes will still be around until your children die of old age.
This thread is just as bad as the pro-systemd shill threads.
Polite sage.

System-D did nothing wrong

You would not say that if you truly loved Software-Development. Sadly, very few do.

Encryption works the same on an SSD as on an HDD or a CF card or RAM, you fucking moron

Yes. Right now I'm running a 7-year-old X25-M that I've used for everything, including Fraps captures of every WoW raid for three years in a hardcore raiding guild, plus video editing. It's had extremely heavy use for "personal use", way more than would be expected. Intel's SSD Toolbox claims I've still got over 90% of the estimated life remaining. But your question is just stupid, because for personal use SSDs are replacing FAR more unreliable magnetic disks. Drive failures are essentially a thing of the past for normalfags with an SSD - they'll outlast the machine.
... but it fucking does.
End your life.

Have you newfags never read a single cryptography paper on SSDs?
Hell, I'm pretty sure even the TrueCrypt manual HAD LOTS to say against them.

I think even the dm-crypt man page warns against SSDs.

So did you pay out the ass for one of the earliest commercial ones on the market? I highly doubt your claim, though build quality of ThinkPads back then was very solid as well. But I call bullshit.

Why haven't you killed yourself yet? Go be wrong somewhere else. And btw, TrueCrypt's TRIM bug was a bug in TrueCrypt.

Nope.
TrueCrypt had MANY other points against SSDs.
Do you work for Seagate/Samsung? Why are you so angry about your precious SSDs?
>gitlab.com/cryptsetup/cryptsetup/wikis/FrequentlyAskedQuestions
5.19 What about SSDs, Flash and Hybrid Drives?
The problem is that you cannot reliably erase parts of these devices, mainly due to wear-leveling and possibly defect management.
Basically, when overwriting a sector (of 512B), what the device does is to move an internal sector (may be 128kB or even larger) to some pool of discarded, not-yet erased unused sectors, take a fresh empty sector from the empty-sector pool and copy the old sector over with the changes to the small part you wrote. This is done in some fashion so that larger writes do not cause a lot of small internal updates.
The thing is that the mapping between outside-addressable sectors and inside sectors is arbitrary (and the vendors are not talking). Also, the discarded sectors are not necessarily erased immediately. They may linger a long time.
For plain dm-crypt, the consequences are that older encrypted data may be lying around in some internal pools of the device. This may or may not be a problem and depends on the application. Remember the same can happen with a filesystem if consecutive writes to the same area of a file can go to different sectors.
However, for LUKS, the worst case is that key-slots and LUKS header may end up in these internal pools. This means that password management functionality is compromised (the old passwords may still be around, potentially for a very long time) and that fast erase by overwriting the header and key-slot area is insecure.
Also keep in mind that the discarded/used pool may be large. For example, a 240GB SSD has about 16GB of spare area in the chips that it is free to do with as it likes. You would need to make each individual key-slot larger than that to allow reliable overwriting. And that assumes the disk thinks all other space is in use. Reading the internal pools using forensic tools is not that hard, but may involve some soldering.
What to do?
If you trust the device vendor (you probably should not...) you can try an ATA "secure erase" command for SSDs. That does not work for USB keys though and may or may not be secure for a hybrid drive. If it finishes on an SSD after a few seconds, it was possibly faked. Unfortunately, for hybrid drives that indicator does not work, as the drive may well take the time to truly erase the magnetic part, but only mark the SSD/Flash part as erased while data is still in there.

If you can do without password management and are fine with doing physical destruction for permanently deleting data (always after one or several full overwrites!), you can use plain dm-crypt or LUKS.

If you want or need all the original LUKS security features to work, you can use a detached LUKS header and put that on a conventional, magnetic disk. That leaves potentially old encrypted data in the pools on the disk, but otherwise you get LUKS with the same security as on a magnetic disk.

If you are concerned about your laptop being stolen, you are likely fine using LUKS on an SSD or hybrid drive. An attacker would need to have access to an old passphrase (and the key-slot for this old passphrase would actually need to still be somewhere in the SSD) for your data to be at risk. So unless you pasted your old passphrase all over the Internet or the attacker has knowledge of it from some other source and does a targeted laptop theft to get at your data, you should be fine.
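
To make the wear-leveling point above concrete, here's a toy flash-translation-layer sketch in Python. It's purely illustrative - real FTL behaviour is vendor-specific and undocumented - but it shows why "overwriting" a LUKS key-slot doesn't necessarily erase the old copy.

# Toy FTL: logical overwrites are remapped to fresh physical blocks, so the
# "overwritten" data lingers until garbage collection actually erases it.

class ToyFTL:
    def __init__(self, physical_blocks: int):
        self.physical = [None] * physical_blocks   # raw flash contents
        self.mapping = {}                          # logical -> physical index
        self.free = list(range(physical_blocks))   # erased blocks ready to use
        self.stale = []                            # discarded, not yet erased

    def write(self, logical: int, data: bytes):
        new = self.free.pop(0)                     # always write to a fresh block
        self.physical[new] = data
        if logical in self.mapping:
            self.stale.append(self.mapping[logical])  # old copy is only unmapped
        self.mapping[logical] = new

ftl = ToyFTL(physical_blocks=8)
ftl.write(0, b"LUKS key-slot: OLD passphrase material")
ftl.write(0, b"LUKS key-slot: overwritten replacement")

# The host only ever sees the new data...
print(ftl.physical[ftl.mapping[0]])
# ...but reading the raw chips still turns up the old key-slot.
print([ftl.physical[i] for i in ftl.stale])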

You should be fine, but do you want to be "should be fine" or golden?

...

Some of these use AES as the default encryption, with a key stored on the SSD for that purpose. Nuke the key and generate a new one, and the drive's data is now 100% irrecoverable.
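
That's crypto-erase in a nutshell. A minimal sketch of the idea in Python, using the third-party cryptography package (the controller does the equivalent in hardware with its own key store):

# Crypto-erase: data is only ever stored encrypted under a media key, so
# destroying that key renders every stored byte undecryptable.

import os
from cryptography.hazmat.primitives.ciphers.aead import AESGCM

media_key = AESGCM.generate_key(bit_length=256)   # lives only inside the drive
nonce = os.urandom(12)
stored = AESGCM(media_key).encrypt(nonce, b"user data written to flash", None)

# Normal read path: decrypt transparently with the media key.
print(AESGCM(media_key).decrypt(nonce, stored, None))

# "Secure erase": throw the key away and mint a fresh one.
media_key = AESGCM.generate_key(bit_length=256)
try:
    AESGCM(media_key).decrypt(nonce, stored, None)
except Exception:
    print("old data is now ciphertext under a key that no longer exists")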

It was expensive per gig relative to an HDD, but it was only around $500 or so IIRC. I wanted to get some experience with them since we were planning to migrate at work, and I'd also been waiting for a new storage tech for a long time and was pretty excited about fucking around with near-zero seek times. Tech costs NOTHING like it used to, btw; my 386, adjusted for inflation, was $7k, and I was upgrading every other year through the '90s.
I don't use laptops.

>veracrypt.codeplex.com/wikipage?title=Wear-Leveling
>veracrypt.codeplex.com/wikipage?title=Trim Operation
>veracrypt.codeplex.com/wikipage?title=Reallocated Sectors

I can do this all day, SSD is shit.

>usenix.org/legacy/events/fast11/tech/full_papers/Wei.pdf

And of course Bruce Schneier, an EFF board member and designer of several widely used ciphers, also advises against using SSDs for encryption.

How bad is your reading comprehension? I was referring to the build quality of products from 7-10 years ago, not whatever shitbox you use to run Windows.

Well, as someone with 20 years of experience in ISP, telco, and enterprise IT, I can honestly say you are a waste of oxygen.

No one but autists erases encrypted drives. Normal people just leave them as-is and throw them out; businesses usually drill the chips to "erase" them. If you've had the misfortune of dealing with the DoD, you'll have seen them drill the chips on HDDs too. They used to just drill the platters, but HDDs have cache now. And password rotation would be fucking embarrassing, since you aren't rotating the key when you do that.

It's better than your quoting skills.

The HDD shill is still here?

...

Sweet, just in time for the flash block device market to be utterly wiped out by NVRAM.

hopefully this will be the technology that finally gets rid of that shitty eMMC flash memory

Can someone tell me why SSDs get slower the more worn out they are?

they deteriorate in the presence of harmful neutron and moron radiation

The main reason should be a solved problem now: before TRIM, or without it properly configured, wear-leveling had to move "unused" blocks around when writing, because the SSD had no way to know a block really was unused.
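
A toy illustration of why that mattered, just as a sketch (the page counts are made up; real garbage collection is far more involved):

# Without TRIM the SSD must treat every previously written page as live and
# copy it forward during garbage collection, even if the FS already deleted it.

pages_per_block = 128
live_pages_fs_view = 16        # pages the filesystem still considers in use
stale_but_untrimmed = 112      # deleted by the FS, but the SSD was never told

def relocated_pages(trim_enabled: bool) -> int:
    # Pages the SSD has to copy elsewhere before it can erase the block.
    if trim_enabled:
        return live_pages_fs_view
    return live_pages_fs_view + stale_but_untrimmed

print("pages copied without TRIM:", relocated_pages(False))   # 128
print("pages copied with TRIM:   ", relocated_pages(True))    # 16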

Btw, I should also mention that filesystems doing tricks like MurderFS to squeeze blocks tightly together are suboptimal on SSDs, since they more frequently require writing to partially used SSD blocks.

If you don't take price into the equation, your predictions are going to be inane.

HDDs will probably never be fully replaced by SSDs; they're going to coexist in different niches for a long time.