High-Speed Memory and System Performance


Tuesday, August 1, 2000
Why pay a premium if you can get the same or better for a lower price?


As an end user, I couldn't care less what type of memory is used inside my computer system as long as it does the job. It does matter, however, when a company tries to dictate the component used and I have to pay a premium to get it. The fact is simple: I would be willing to pay the premium price if the new component translated into a clear performance improvement. Otherwise, why would I pay more for something that delivers literally no performance advantage over its counterparts?

The intense high-speed memory race has driven the market into a divergence. At one end, most of the industry is backing the widely accepted 133MHz SDRAM and its faster successor, DDR. At the other, Intel is betting heavily on Rambus memory as the next-generation interface. Rambus may yet become the standard, but SDRAM and DDR have the upper hand at the moment, counting the votes from memory manufacturers and system designers. Nevertheless, never rule out Rambus as long as Intel's commitment to it remains.

Rambus Story

So what is so special about Rambus that Intel would go to such lengths to adopt the technology? More bandwidth, as Rambus claims? That is a big claim to swallow for a typical end user like myself. After doing some research on these complicated issues, I managed to unearth some truth. Before you build up any expectations, though, allow me to pull you back down to earth with this fact: most DRAM and system vendors agree that there is no simple, absolute answer to how the bandwidth or latency (another big word), or other timing-related parameters, of these new high-speed DRAMs will translate into increased system performance. Nevertheless, there are a few benchmark reports from credible individuals and companies presenting a performance overview of the different memory technologies, and it is up to the reader to make a judgment call.

Some of those articles can be found at:

*Dissecting Rambus:
Tomshardware

*800MHz Platform Analysis:
http://www.simmtester.com/page/news/showpubnews.asp?num=21

System Architectures

Let's begin by trying to understand how a data access proceeds in your computer system:

1. In the ideal case, the CPU fetches data directly from cache (SRAM), until it experiences a "cache miss".
2. It then turns to main memory (DRAM) for the data.
3. If the data is not there either, it ends up going to the hard disk or an external drive for the needed data.
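The three-level walk above can be sketched in a few lines of code. This is a minimal illustration, not a hardware model; the latency figures and the `access` helper are assumptions chosen only to show the ordering of the levels.

```python
# Illustrative latencies for each level of the hierarchy (assumed values).
CACHE_NS, DRAM_NS, DISK_NS = 10, 100, 10_000_000

def access(addr, cache, dram):
    """Return (data, latency_ns) by walking cache -> DRAM -> disk."""
    if addr in cache:                  # 1. ideal case: cache hit
        return cache[addr], CACHE_NS
    if addr in dram:                   # 2. cache miss: fall back to main memory
        cache[addr] = dram[addr]       #    fill the cache on the way back
        return dram[addr], CACHE_NS + DRAM_NS
    data = f"disk:{addr}"              # 3. not in DRAM either: go to disk
    dram[addr] = data
    cache[addr] = data
    return data, CACHE_NS + DRAM_NS + DISK_NS

cache, dram = {}, {0x10: "hot"}
print(access(0x10, cache, dram))   # first touch: DRAM latency
print(access(0x10, cache, dram))   # second touch: cache latency only
```

Each miss adds the latency of the next level, which is why the process feels dramatically slower from one level to the next.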

As you may have experienced, accessing the data becomes slower from one level to the next. Every time the CPU experiences a "cache miss", it has to stall while waiting for the needed data to arrive. This delay is called latency, or "lateness". Lower-latency DRAM allows the CPU to resume operation sooner. While Rambus is best known for its weakness on latency-intensive programs, it shows its strength when the focus shifts to bandwidth. Still, while higher-bandwidth DRAM can pump in more data at a time, how much that helps also depends on how much the CPU bus can take. For instance, a mainstream PC with a 100MHz, 64-bit CPU bus can absorb 800 megabytes per second. If 800MHz RDRAM delivers a stream of data at 1.6 gigabytes per second, the chipset has to buffer the data and slow the transfer down.
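The mismatch in that example is easy to verify with back-of-envelope arithmetic: a 100MHz bus moving 8 bytes per cycle against a 16-bit RDRAM channel moving 2 bytes at an 800MHz data rate.

```python
# 100MHz, 64-bit CPU bus: 100e6 transfers/s * 8 bytes = 800MB/s absorbed.
cpu_bus_bw = 100e6 * 8

# 16-bit RDRAM channel at an 800MHz data rate: 800e6 * 2 bytes = 1.6GB/s delivered.
rdram_bw = 800e6 * 2

ratio = rdram_bw / cpu_bus_bw
print(f"RDRAM delivers {ratio:.0f}x what the CPU bus can absorb")
```

The chipset has to buffer roughly half the stream, which is exactly why raw memory bandwidth alone does not translate directly into system performance.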

High bandwidth is most effective when data is successfully accessed from the same memory row. RDRAM cuts down precharge time by pre-activating subsequent pages in the hope that the next access lands in the same page. However, when the data is not what was requested, which happens fairly frequently, the read or write operation has to start from scratch, and latency again becomes the deciding factor. In fact, based on the following table, DDR can match and even has an edge over Rambus in terms of peak bandwidth.

DDR SDRAM

Rationally, DDR SDRAM should be considered the most qualified player to receive the baton from PC133 SDRAM as the next mainstream memory choice. Technologically, it is a natural evolution of PC133 SDRAM, except that it transfers data on both the rising and falling edges of the clock. It is capable of delivering equally good, and even better, performance than RDRAM, but at lower cost. Furthermore, it is easy to implement and integrate into current PC systems.

Conclusions

As the high-speed memory race goes on, one thing is clear: until mainstream systems are equipped with higher CPU bus speeds (200MHz and above), new chipsets, and applications to match, among other factors, Rambus's high-bandwidth feature will not be fully and effectively utilized, not to mention the issues of overheating, low yield, and high manufacturing cost. Until then, the likelihood is that the less costly, open-standard SDRAM and DDR will remain the industry's main preference.

Performance Parameters



Bandwidth calculation: (memory bus width ÷ 8 bits) × data rate.

Note that a DDR module has higher peak burst bandwidth than Rambus simply because DDR and SDRAM modules have a 64-bit bus compared to Rambus's 16-bit channel. The high-speed (800MHz data rate) Rambus will become a viable choice when system designers decide to integrate memory onto the system to take advantage of its near-perfect signaling (cost factors aside).
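Applying the bandwidth formula above to the two module types makes the comparison concrete. The data rates used here are the commonly quoted figures for DDR-266 and PC800 RDRAM; treat them as illustrative inputs to the formula rather than definitive benchmarks.

```python
def peak_bandwidth(bus_width_bits, data_rate_mhz):
    """Peak bandwidth in bytes/s: (bus width / 8 bits) * data rate."""
    return bus_width_bits / 8 * data_rate_mhz * 1e6

ddr   = peak_bandwidth(64, 266)   # 64-bit DDR module at a 266MHz data rate
rdram = peak_bandwidth(16, 800)   # 16-bit Rambus channel at an 800MHz data rate

print(f"DDR:   {ddr / 1e9:.3f} GB/s")
print(f"RDRAM: {rdram / 1e9:.3f} GB/s")
```

The wider 64-bit bus outweighs Rambus's higher clock: roughly 2.1GB/s for DDR against 1.6GB/s for a single RDRAM channel, matching the article's claim that DDR has an edge in peak bandwidth.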

By: DocMemory
Copyright © 2023 CST, Inc. All Rights Reserved
