This article:
- Does not conform to WP:MOS and therefore needs to be wikified, hence {{wikify}}
- Has no categories, hence {{uncat}}
- Does not give sources, hence {{sources}}, and
- Has been proposed for merging. The merge should be discussed; the tag cannot be removed until that discussion has taken place. Rich257 09:11, 11 October 2006 (UTC)
This article now has Category:Solid-state computer storage media. It still needs the other tags. Athaenara 01:51, 10 November 2006 (UTC)
Merge completed
The merge was advertised on one talk page for 3 months (from Oct 2006) with no objections, and agreed 5-0 on the other. I have therefore merged them fully; see Talk:Solid state disk. FT2 01:34, 18 January 2007 (UTC)
Reference
"Subsequent investigations into this field, however, have found that data can be recovered from SSD memory." I think it's appropiate to put a reference for that statement. —Preceding unsigned comment added by 72.50.39.149 (talk) 23:29, 11 November 2007 (UTC)
Giga versus Gibi ?
Hard drives are measured in gigabytes, memory is measured in gibibytes; what are SSDs measured in? —Preceding unsigned comment added by 62.95.76.2 (talk) 15:01, 18 January 2008 (UTC)
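For what it's worth, the numeric gap between the two units is easy to illustrate. A minimal sketch follows; the 64 GB figure is an arbitrary example, not taken from any drive's specification:

```python
# Illustration of decimal (SI) vs binary (IEC) capacity units.
GB = 10**9    # gigabyte: 1,000,000,000 bytes (how drive makers usually label capacity)
GiB = 2**30   # gibibyte: 1,073,741,824 bytes (how operating systems often report it)

advertised_gb = 64                      # example capacity, chosen only for illustration
capacity_bytes = advertised_gb * GB     # 64,000,000,000 bytes
reported_gib = capacity_bytes / GiB     # what an OS counting in binary units would show

print(f"{advertised_gb} GB = {capacity_bytes:,} bytes = {reported_gib:.2f} GiB")
# -> 64 GB = 64,000,000,000 bytes = 59.60 GiB
```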
"For example, some x86 architectures have a 4 GB limit"
I thought ALL x86 architectures had a 4 GB limit, because that is the limit of combinations of a 32-bit memory address; wasn't that one of the prime reasons for switching to the 64-bit standard?--KX36 14:57, 8 February 2007 (UTC)
- I think not; modern x86 CPUs support PAE (although 32-bit Windows generally doesn't, except for some rare drivers), so they can address more than 4 GB at the cost of lower performance.
However, you CAN'T extend the 4 GB limit with a swap file. Allocated pages in a swap file are no different from allocated pages in RAM; their addresses must still be within the 4 GB total. Once you have 4 GB of RAM you don't even need a swap file; it will not be used.
Agreed. Operating systems don't just re-address memory pell-mell when they swap it to disk. Here's a good article on the 4GB limit and memory addressing in general: Understanding Address Spaces and the 4GB Limit ◗●◖ falkreon (talk) 05:30, 10 December 2007 (UTC)
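To make the arithmetic behind this thread explicit, here is a small sketch (just the address-width math, not a statement about how any particular operating system behaves):

```python
# Address-space arithmetic behind the "4 GB limit" discussion above.
def addressable_bytes(address_bits: int) -> int:
    """Number of distinct byte addresses reachable with the given address width."""
    return 2 ** address_bits

GiB = 2 ** 30

print(addressable_bytes(32) / GiB)  # 4.0  -> the classic 32-bit limit: 4 GiB
print(addressable_bytes(36) / GiB)  # 64.0 -> PAE's 36-bit physical addressing: 64 GiB
print(addressable_bytes(64) / GiB)  # 2**34 GiB -> the 64-bit architectural ceiling
```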
"First company"
The section on history was inaccurate and remains full of holes.
The first company to launch flash-based solid state drives certainly did not do so as late as 1995, since Psion PLC was already selling its "SSD" units from 1989 onwards. See for example http://3lib.ukonline.co.uk/historyofpsion.htm
I have no idea who was the first company to do so, but Psion sold "solid state" drives from 1984. The earlier ones were UV-EPROMs or battery-backed static RAM, with flash models introduced later. —The preceding unsigned comment was added by CecilWard (talk • contribs) 21:19, 24 February 2007 (UTC).
Cambridge Computer had one in their Z88 portable computers (EEPROM based) as early as 1988, and it was viewed at the time as one of the most innovative products on the market. - John Hancock
Read/Write Cycles
I'm not sure if this is marketing talk or not, but since there's no source cited in the disadvantages section I think this is apt:
"Q: Is there currently some sort of technical limitation on the creation of SSDs other than cost, and what about the reliability of flash media?
A: Historically SSDs were limited in the number of R/W cycles. However, with modern flash technology and error correction, the reliability of the flash in a PC exceeds 10 years. "
The Compact_Flash article states a read/write cycle count of up to 300,000.
The Read-only_memory#EPROM.2FEEPROM.2FEAROM_lifetime article states up to 100,000.
Read/Write Cycles
The claim that was on this page that endurance is not a problem, with the reference to storagesearch, is incorrect. It is true that if the drive were overwritten as a whole, over and over and over again, it would last a very long time. The problem is that this is not a common access pattern. Under GNU/Linux, if you're running a web server, /var/log/apache/access.log gets written with each access. If you're getting an access once every second, that means you're overwriting the same spot on the drive 86,400 times per day, and your SSD fails after 2-3 days tops (real-world flash gets 100,000 write cycles typically, and 300,000 on the high end; 1-5 million are slightly exaggerated marketing figures, and at least the high end of that is not actually achieved with today's technology). With a desktop GNU/Linux box, there are log files that get written many times per day. Access times get marked on common files every couple of minutes at most. Similar issues exist with Windows. Flash drives used naively will fail after at most a few months' use on the desktop. Many embedded network devices come with flash for log files, but the flash is a replaceable part, and typically wears out after some use and needs to be replaced.
I've seen both the desktop and the embedded failures occur (on the desktop, with a naive user, using a CF-IDE converter, and in the embedded case, replacing Flash was just standard maintenance). I haven't seen the server case occur, because all the sysadmins I know using SSDs are intelligent enough to manage the endurance issues.
The failures can be mitigated through the use of intelligent software. OLPC spurred the rapid development of Flash-optimized file systems for GNU/Linux. These intentionally stagger writes over the whole drive, so that no single block gets worn down. Hybrid Flash/non-Flash drives use the Flash as a cache, and again, can intelligently manage the part of the Flash that gets used with each write. All-Flash drives have their place, and can be managed to not fail, but the endurance issue does occur, and does need to be managed. Many SSDs have firmware to manage this, but many of the SSDs I have dealt with do not. It is an issue the user needs to be aware of. I have corrected the page to reflect that. 68.160.152.190 21:32, 4 June 2007 (UTC)
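To put rough numbers on the two cases described in this comment, here is a back-of-the-envelope sketch. The cycle count and write rate are the illustrative figures from the comment itself; the drive geometry is an assumed example, and write amplification is ignored:

```python
# Back-of-the-envelope endurance estimate for the log-file scenario above.
ERASE_CYCLES = 100_000                      # typical rated erase cycles per block (illustrative)
WRITES_PER_DAY = 86_400                     # one log append per second, as in the example above
DRIVE_BYTES = 8 * 2**30                     # assumed 8 GB drive
BLOCK_BYTES = 128 * 2**10                   # assumed 128 KB erase blocks
TOTAL_BLOCKS = DRIVE_BYTES // BLOCK_BYTES   # 65,536 blocks

# Naive case: every write hits the same physical block.
naive_days = ERASE_CYCLES / WRITES_PER_DAY
print(f"Same block rewritten every time: worn out in ~{naive_days:.1f} days")

# Ideal wear leveling: writes staggered evenly over every block on the drive.
leveled_years = ERASE_CYCLES * TOTAL_BLOCKS / WRITES_PER_DAY / 365
print(f"Writes spread over the whole drive: ~{leveled_years:.0f} years")
```

Under these idealized assumptions the naive case wears out in roughly a day, while perfectly staggered writes last on the order of centuries, which is why the wear-leveling question dominates the endurance discussion.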
Merging RAM disc article
I am against it, as RAM discs are different. SSDs use non-volatile memory; RAM is volatile. --soum 16:35, 7 June 2007 (UTC)
- I'm against too, they're completely different things. - 83.116.205.167 07:52, 17 June 2007 (UTC)
- Against it. I second the above, and add that RAM disks are virtual; SSDs are physical. Arosa 21:43, 18 June 2007 (UTC)
- Same here - not the same thing at all (although the RAM disk article contradicts what Arosa said...)! I'll remove the tag. Spamsara 22:07, 24 June 2007 (UTC)
- Against it here, for Soumyasch's reasons. Makes no sense to combine the two when they are inherently different technologies, even if they share some common applications. Would be like combining 'car' and 'bike' because they could both be used to get to work, and had wheels. - 203.206.177.188
- Same here - they are completely different: SSDs are hardware allocated, whereas RAM disks can be created from system RAM or hardware allocated. SSDs are hybrid memory; RAM disks are volatile memory.
I am in favor of merging RAM-disk and Solid State Drive. They are essentially the same devices. They both use RAM to act like a disk drive. The fact that one is volatile and the other not does not seem to be a significant difference. In which section would you put a RAM-disk with a battery backup? --FromageDroit 13:59, 28 August 2007 (UTC)
RAM-disk and Solid State Drive are not the same, as RAM is volatile and SSD is not, so merging is not needed; maybe add a link to RAM-disks at the bottom of the page if it's not already there. Leexgx 15:53, 24 October 2007 (UTC)
We're arguing over semantics. On the "RAM disk" page that's marked for possible merging, the article itself states that it can be one of two things. So we're talking about two different things here. I suggest that a disambiguation page is created. It will point either to this SSD page, which could be merged with half the RAM disk article, or to the RAM disk's "software abstraction that treats a segment of random access memory (RAM) as secondary storage". I used the latter extensively and can attest to it being a different beast entirely. And yes, SSDs that use volatile memory as a hardware component are indeed the same thing, just volatile. ◗●◖ falkreon (talk) 05:49, 10 December 2007 (UTC)
Read/Write Cycles (another one)
When calculating the endurance level of the hardware, the article claims "blocks are typically on the order of 1kb and an 8 GB disk will have 8,192 blocks", which unless I'm very much mistaken is out by a factor of 2^10, i.e. 8,192 × 1 KB = 8 MB, not 8 GB.
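A quick sanity check of that arithmetic (nothing here beyond the numbers already implied in the comment):

```python
# Sanity check of the block-count arithmetic questioned above.
KB, MB, GB = 2**10, 2**20, 2**30   # binary units, as in the comment

block_size = 1 * KB
claimed_blocks = 8_192

print(claimed_blocks * block_size // MB)          # 8        -> 8,192 x 1 KB is only 8 MB
print((8 * GB) // block_size)                     # 8388608  -> blocks in an 8 GB disk at 1 KB each
print((8 * GB) // (claimed_blocks * block_size))  # 1024     -> the claim is off by a factor of 2**10
```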
Read/Write Cycles (yet another one)
As the previous comment indicated, the description of wear leveling in the text is not only very naive but also very wrong.
The nature of NAND flash is such that one typically must erase whole 128 KB blocks, and then consecutively write the 64 2 KB pages within that block. Pages cannot be rewritten in place, so in order to rewrite a single 2 KB page, the entire 128 KB block must be erased. So, with a naive implementation the endurance of a single block drops 64 times (to about 160 rewrites?).
The main problem is not wear leveling itself, but how to avoid the need to rewrite existing data, that is, how to avoid fragmentation. This problem is not 100% solvable in general unless one can predict the future. One hardware solution is to cache some data in battery-backed RAM to avoid immediately rewriting it.
24.4.151.152 17:24, 29 September 2007 (UTC)
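A small sketch of the erase-block arithmetic described above. The 128 KB block / 2 KB page geometry is the one given in the comment; actual devices vary:

```python
# Write amplification of a naive in-place update, per the geometry described above.
BLOCK_SIZE = 128 * 1024   # erase block: 128 KB (as stated above; real parts vary)
PAGE_SIZE = 2 * 1024      # program page: 2 KB
PAGES_PER_BLOCK = BLOCK_SIZE // PAGE_SIZE   # 64 pages

def naive_rewrite_cost(pages_changed: int) -> int:
    """Bytes physically rewritten when a naive controller updates `pages_changed`
    pages in place: the whole block is erased and every page written back."""
    assert 0 < pages_changed <= PAGES_PER_BLOCK
    return BLOCK_SIZE   # erase + reprogram the full block, regardless of how few pages changed

changed = PAGE_SIZE                    # the caller only changed one 2 KB page...
rewritten = naive_rewrite_cost(1)      # ...but 128 KB of flash gets cycled
print(f"write amplification: {rewritten // changed}x")   # 64x
```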
The problem can be solved by more innovative filesystem designs, such as Log-structured file systems. Fragmentation is a non-problem for such file systems. You can always predict with 100% certainty where you're going to write next, although you still can't predict when you will do so.
202.71.231.218 (talk) 2008-02-21T06:54 —Preceding comment was added at 06:57, 21 January 2008 (UTC)
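As a minimal illustration of the log-structured idea mentioned above (a toy sketch under simplifying assumptions, ignoring garbage collection, and not how any particular file system is actually implemented): every update is appended at the head of a circular log, so repeated rewrites of the same logical data land on different physical blocks and wear is spread evenly.

```python
# Toy sketch of log-structured writes spreading wear across blocks.
class TinyLog:
    def __init__(self, num_blocks: int):
        self.num_blocks = num_blocks
        self.head = 0                          # next physical block to be written
        self.erase_counts = [0] * num_blocks   # wear per physical block
        self.location = {}                     # logical id -> current physical block

    def write(self, logical_id: str) -> int:
        """Append the new version of `logical_id` at the log head; the old copy
        is simply left behind, to be reclaimed later by a cleaner pass."""
        block = self.head
        self.erase_counts[block] += 1
        self.location[logical_id] = block
        self.head = (self.head + 1) % self.num_blocks
        return block

log = TinyLog(num_blocks=8)
for _ in range(80):                 # rewrite the same logical file 80 times
    log.write("access.log")
print(log.erase_counts)             # [10, 10, 10, 10, 10, 10, 10, 10] -> wear is even
```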
Server mentions
There is insufficient mention of SSD usage in servers. Because the primary bottleneck on many types of servers is I/O from many users (and thus random I/O), SSDs are often considered superior to RAID HDD, assuming one is willing to pay the price.
128.113.167.175 17:15, 1 October 2007 (UTC)
- Unfortunately, word of mouth is not enough for Wikipedia. We need published sources to cite. Can you help?--soum 17:41, 1 October 2007 (UTC)
- A Google search for "random i/o bottleneck" shows plenty of sources for this. I'm not experienced enough to edit this; please be my guest to help with it :)
Talrinys (talk) 23:50, 2 January 2008 (UTC)
MacBook Air
The high-end version of the MacBook Air uses a 64 GB solid state drive. Some mention of this application might be warranted. —Preceding unsigned comment added by 67.116.239.156 (talk) 19:03, 15 January 2008 (UTC)
- The mention of the Air under Availability states that the 64 GB SSD "Boasts better reliability.... 80GB PATA drive." A couple of problems here. I haven't looked into it yet, but I seriously doubt that they aren't using SATA in the new MacBooks. Secondly, and more importantly, where is it said that the SSD offers better reliability? I don't necessarily doubt it, but is there a source for this statement? Ferrariman60 (talk) 22:44, 15 January 2008 (UTC)
- http://store.apple.com/Apple/WebObjects/dkstore?node=home/shop_mac/family/macbook_air&cid=OAS-EMEA-KWG-DK_LAPTOP-DK&aosid=p202&esvt=GODKE&esvadt=999999-1200820-1079263-1&esvid=100504 says it's PATA. No hard drives even need SATA yet, so it doesn't really matter, except for the ancient standard, cables, etc. SSDs are just plain more reliable; it's in the technology itself. Unless it has bad chips it simply can't crash randomly the way a mechanical hard drive can. However, it will die after a specific number of transfers. This will be a lot easier to account for than the random failures we have now. Talrinys (talk) 12:06, 26 January 2008 (UTC)
Lots of other laptops have 64 GB SSDs - yet this article is now sprinkled with references specifically to MacBook Air. Most of these references look rather redundant to me. --Romanski (talk) 22:05, 29 January 2008 (UTC)
Solid State and Tubes
Please take out the line in the intro referring to vacuum tubes. The term "Solid State" has always referred to semiconductors and only semiconductors, not tubes. It has to do with the fact that dopants are diffused into the silicon while in a solid state, in a way that mimics diffusion in a liquid. 75.55.39.21 (talk) 21:25, 15 January 2008 (UTC) Sandy
Disadvantage? - "Vulnerability to certain types of effects,...magnetic fields..."
Working with some of the largest magnetic fields found in the industrial world, SSDs are the only allowed hard-drive replacement in a production environment. Mechanical hard drives die pretty much instantly or are rendered unusable/erased.
The maximum field is ~2,000 gauss, with an average of 40-80 gauss in normal walkway areas. Some equipment is under ~80-400 gauss, all of it using flash-based storage. The magnetic field source is a DC current of ~200-350 kA.
The main issues are with DC-DC converters under such conditions, so the potential failure point is the power supply in the SSD rather than the flash itself.
Hard disks will NOT survive this environment. Flash-based SSDs are the only storage device able to withstand these conditions.
Should this "disadvantage" be moved to an "advantage"?
Vandalism
A semi-protection lock needs to be placed due to vandalism.--Kozuch (talk) 21:13, 5 February 2008 (UTC)