SK hynix begins offering samples of 176-layer NAND chip (2020) (joins.com)
94 points by mardiyah on Feb 3, 2021 | hide | past | favorite | 42 comments


Is this likely to help lower the cost of SSDs in the ~1TB range? After watching the price of 1TB drop from the stratosphere to $100, I'd love to see them make it down towards $75...


That drop came with a drop in performance too...

Most of the $80-100/TB disks use QLC, which is painfully slow (slower than a spinning hard drive) once you outrun the SLC cache... I had to experience this during a drive migration with dd.
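To put rough numbers on it (all of these figures are assumptions, just to show the shape of the problem — real cache sizes and rates vary by drive):

```python
# A QLC drive writes fast while its SLC cache lasts, then drops to a
# sustained rate that can fall below a spinning disk's. Assumed numbers:
# ~100 GB SLC cache at 500 MB/s, ~80 MB/s sustained QLC, ~150 MB/s HDD.

TIB = 2**40  # bytes in 1 TiB

def copy_seconds(size, cache, cached_rate, sustained_rate):
    """Seconds to write `size` bytes when the first `cache` bytes go at
    cached_rate (B/s) and the remainder at sustained_rate (B/s)."""
    cached = min(size, cache)
    t = cached / cached_rate if cached else 0.0
    return t + (size - cached) / sustained_rate

qlc = copy_seconds(TIB, 100e9, 500e6, 80e6)
hdd = copy_seconds(TIB, 0, 500e6, 150e6)  # no cache tier to outrun
print(f"QLC full-drive write: {qlc/3600:.1f} h, HDD: {hdd/3600:.1f} h")
# → QLC full-drive write: 3.5 h, HDD: 2.0 h
```

The benchmark-friendly cache covers only the first ~100 GB; a whole-disk dd blows straight through it.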


*on sustained writes. They will absolutely trash a spinning drive on reads.


If you wanna resize a partition, or anything else that essentially rewrites the whole disk... have fun.


I mean yes, but isn't that something quite rare?

At least I never feel the need to do it. The closest I came to something like that was when I switched from a 240 GiB SSD to a 1 TiB one: I just mirrored the old disk to the new one with dd, grew the partition, and ran resize2fs. While that took ~30 minutes, it was a once-in-a-decade thing for me.


A QLC drive of 40+ TB will still be quite snappy without a cache.

Cacheless TLC, meanwhile, reaches parity with MLC at around 8 TB.

So yes, QLC flash needs really big drives to get enough aggregate write performance.

Below that, there is not much economic rationale to use it vs TLC.
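Rough arithmetic behind that claim — a sketch where sustained write speed scales with the number of dies writing in parallel, and capacity fixes the die count. All per-die figures are made up for illustration, and controller/host-interface limits are ignored:

```python
# Assumed: 128 GB TLC dies programming at ~60 MB/s each,
#          170 GB QLC dies programming at ~20 MB/s each.

def drive_write_mbps(capacity_tb, die_capacity_gb, die_program_mbps):
    """Aggregate sustained write speed = die count x per-die throughput."""
    dies = capacity_tb * 1000 / die_capacity_gb
    return dies * die_program_mbps

for tb in (1, 8, 40):
    tlc = drive_write_mbps(tb, 128, 60)
    qlc = drive_write_mbps(tb, 170, 20)
    print(f"{tb:>2} TB: TLC ~{tlc:.0f} MB/s, QLC ~{qlc:.0f} MB/s")
```

At the same capacity a QLC drive has fewer, slower-programming dies, so it only catches up once the drive is big enough to throw lots of dies at the problem.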


QLC is also much lower endurance, so at those sizes there's also the question of how much over-provisioning is needed for acceptable endurance while still being cheap enough to outcompete the alternatives.


The cheapest 2TB drive on SATA III could be bought for $149. But that is not normal pricing; it will probably take another two years for that to become the norm.

On the other hand 2.5" HDD has been stuck at the same capacity for 2-3 years.


And the capacity increase 3 years ago was from 4TB to 5TB, and IIRC I bought my first 4TB disk 6 years ago.


They have 1TB going for $85 on Amazon. You probably checked their prices 3 months ago; SSDs have been dropping about 15% per quarter.


Wow, 1TB SSDs are super cheap now...

That's so cool.


What are the limits for the number of layers? 176 seems like a large enough number to believe that a few thousand are a possibility. Is the tech anywhere close to a bottleneck, or will they keep stacking them up?


The biggest limiting factor is the ability to etch deep holes with a high aspect ratio. Shrinking individual memory cells along any axis is of limited use (fewer electrons means less durability), so you can't make layers much thinner. Making holes wider can get you more layers, but isn't really a net win for total density. So far, nobody has started stacking more than two decks of layers, so I don't know if we have good data on how cost and yield scale when going all-out on string stacking to reach extreme layer counts. As an alternative, there's R&D into using wider holes, and then splitting the vertical channels in half, giving you two semicircular memory cells per layer rather than one circular cell.

There's also a need to keep shrinking the peripheral circuitry as the number of memory cells stacked above each mm^2 of buffers and charge pumps keeps growing.
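The aspect-ratio problem is easy to see with back-of-the-envelope numbers. Both dimensions below are assumptions (real layer pitches and hole diameters vary by vendor and generation):

```python
# Etch depth grows linearly with layer count, so the channel hole's
# aspect ratio climbs fast. Splitting the stack into two decks
# (e.g. 88+88) halves the depth each etch has to reach.

layer_pitch_nm = 50     # assumed vertical pitch per word-line layer
hole_diameter_nm = 100  # assumed channel-hole diameter

for layers in (64, 176, 512):
    depth_um = layers * layer_pitch_nm / 1000
    ratio = layers * layer_pitch_nm / hole_diameter_nm
    print(f"{layers} layers: ~{depth_um:.1f} um hole, aspect ratio ~{ratio:.0f}:1")
```

Under those assumptions a single-deck 176L etch would already be pushing ~88:1, which is why layer counts beyond a few hundred per deck get hard.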


Are you working in this field?


I've been reporting on it for the past 5 years, including a recent interview with one of Micron's lead engineers about their 176L NAND, which I haven't written up and published yet. https://www.anandtech.com/Author/182


Thank you for your work. I enjoy the quality of Anandtech's reporting and am totally fine with you guys taking your time to do a good job.


Don't think he is, since he has a full-time job working at AnandTech.


Technically part-time, paid as a freelancer. Only two or three of the senior editors are on salary. It's not a lucrative line of work, but most of the time it's pretty fun.


Oh wow. The quality is absolutely amazing though. Thank you. While there is less content to write purely on DRAM, NAND and storage, some of that reporting still takes a long time to investigate and write. Never thought it was freelance.


One limiting factor as the string gets too long is that resistance becomes too large. This is the same reason the 2D NAND string topped out at 128. Maybe certain material breakthroughs will change that.


Do you work on memory devices?


I know nothing about making chips, but at what point, or for what type of chip, does the heat generated become an issue?


Do they still have to deposit, mask and etch each of the layers?


3D NAND is successful and cost-effective because it's made by depositing dozens of layers at a time, then etching at the end of the process. Currently Samsung is in the lead by doing ~128L at a time. All their competitors reach 100+ layer counts by splitting the stack into two decks, eg. Micron's 176L NAND is 88+88.


This search will give you a list of videos:

3d nand manufacturing steps

It's a pretty clever arrangement, in this layman's opinion.



That was very interesting!

Could these techniques be applied to logic circuitry too or is there something preventing that?


Logic circuits don't have the concept of word and bit lines; the metal layers are much, much more complex. And it's inherently less economical to stack transistors, because you would have to repeat many steps for every layer.


I get that the connection between each gate wouldn't be as uniform as in storage. I guess my question isn't can the exact technique be used, but rather the same principles/ideas.

I guess it's hard to etch all the different connections between gates if they're not uniform?


But they still have to do multiple steps per layer to attach the per-layer connections at the edge?


Yeah, forming the staircase at the edge of the memory array is a big deal with a lot of steps, and it can take up a significant chunk of die area. But they can apparently do it with a very small number of photolithography masks: https://www.techinsights.com/webinar/memory-process-webinar-...


Are there physical differences between say QLC and TLC designs?

Is it possible for the same hardware to be QLC for some data, but then TLC, MLC or even SLC for other data on the same chip, perhaps to get faster write speeds or more write cycles for some of the data?

If so, it opens up a lot of optimization possibilities for strategies to maximize performance and lifespan


Yes, it's physically different. You could use QLC memory as if it were TLC (just ignore a bit), but you'd be reducing your storage capacity and (to the best of my knowledge) not gaining any speed. You can't do it the other way around: you can't use TLC as if it were QLC, because you can't make an extra bit appear that the memory controller can't read and the memory cells weren't designed for. Basically, if the hardware was only designed to store and read three bits of precision, you aren't going to reliably get four bits out of it. The engineers wouldn't have wasted that much margin.

I don't think you would ever intentionally use QLC as if it were TLC. Leaving it in QLC mode means you're writing fewer cells (3/4 as many, since 4 bits fit where 3 did) for whatever data you're storing, which is going to be the best way to maximize drive lifespan.
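The bits-per-cell arithmetic behind this, as a quick sketch:

```python
# n bits per cell means the controller must resolve 2**n distinct charge
# levels in one physical cell. More levels per cell = denser storage,
# but slower and shorter-lived writes.

for name, bits in {"SLC": 1, "MLC": 2, "TLC": 3, "QLC": 4}.items():
    print(f"{name}: {bits} bit(s)/cell, {2**bits} charge levels")
```

Going from TLC to QLC doubles the number of charge levels (8 to 16) for only a 33% capacity gain, which is why the speed and endurance penalty is so steep.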


That's what SLC write caching is.


But my understanding is that uses a totally different chip... Which means the controller can't dynamically reallocate memory cells between QLC and SLC - you have a fixed amount of each.


The SLC cache is always an area of MLC/TLC used as SLC, whether the split is hardcoded or dynamic depends on the exact model. Newer models tend to be dynamic.
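A minimal sketch of the capacity trade-off behind such a cache (the N:1 ratio is inherent to bits-per-cell; the example drive numbers are made up):

```python
# Every byte cached in SLC mode occupies N bytes' worth of
# N-bit-per-cell flash, so carving a cache out of free space
# shrinks usable capacity by that ratio.

def max_slc_cache_gb(free_native_gb, bits_per_cell):
    """Largest SLC cache a drive could carve out of its free space."""
    return free_native_gb / bits_per_cell

# Assumed example: a QLC drive (4 bits/cell) with 400 GB free
# could expose at most a 100 GB SLC cache.
print(max_slc_cache_gb(400, 4))  # → 100.0
```

This is why dynamic caches shrink as the drive fills up: less free native capacity means less room to run cells in SLC mode.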


I guess we'll live to see cube-shaped chips perhaps with cooling channels.


Excellent advancement!

Now storage class memory is something I am keeping an eye on. Can be a highly capable cache for storage as well as page file space for memory.


"The use of 4D in the product's name is a marketing term as the chip is still based on 3-D technology."

Lest anyone be led to believe it's a warp drive?


Also reacted to that. "What's better than 3D? 4D!" :)

Reminds me of the alleged story about Xbox naming schemes. IIRC, Xbox was introduced when Playstation 2 was in stores. Then came the Playstation 3, but you can't sell an Xbox 2, since 2 is lower than 3 and thus must be worse, right? And Xbox 3 or 4 wouldn't make sense since there wasn't an Xbox 2.

So they named it Xbox 360 to bypass that problem.

But then what, Xbox 720? Xbox 361? Of course a new naming scheme had to be devised -> Xbox One. Then Xbox One X, and now Xbox Series X (and S).

Kinda funny how that seemingly small thing escalated to what seems like a series (ha) of workarounds to that initial problem.


They should have included an "Xbox 2" plush toy with the Xbox 3.


As someone who has worked for a NAND based flash memory manufacturer, time is definitely a relevant dimension.

Our whole company hinged on a patent for a manufacturing technique to make either the source or drain leads a little more pointy, so things could run faster and be more reliable with less power.



