
Synchronous dynamic random-access memory

Synchronous dynamic random-access memory (SDRAM) is any dynamic random-access memory (DRAM) where the operation of its external pin interface is coordinated by an externally supplied clock signal. DRAM integrated circuits (ICs) produced from the early 1970s to mid-1990s used an asynchronous interface, in which input control signals have a direct effect on internal functions, delayed only by the trip across the semiconductor pathways. SDRAM has a synchronous interface, whereby changes on control inputs are recognised after a rising edge of its clock input. In SDRAM families standardized by JEDEC, the clock signal controls the stepping of an internal finite state machine that responds to incoming commands. These commands can be pipelined to improve performance, with previously started operations completing while new commands are received. The memory is divided into several equally sized but independent sections called banks, allowing the device to operate on a memory access command in each bank simultaneously and speed access in an interleaved fashion. This allows SDRAMs to achieve greater concurrency and higher data transfer rates than asynchronous DRAMs could.

Pipelining means that the chip can accept a new command before it has finished processing the previous one. For a pipelined write, the write command can be immediately followed by another command without waiting for the data to be written into the memory array. For a pipelined read, the requested data appears a fixed number of clock cycles (the latency) after the read command, during which additional commands can be sent.

SDRAM is widely used in computers. Beyond the original SDRAM, further generations of double data rate RAM have entered the mass market: DDR (also known as DDR1), DDR2, DDR3 and DDR4, with the latest generation, DDR4, released in the second half of 2014.

Contents

  • 1 SDRAM history
  • 2 SDRAM timing
  • 3 SDR SDRAM
    • 3.1 SDRAM control signals
      • 3.1.1 Command signals
      • 3.1.2 Bank selection (BAn)
      • 3.1.3 Addressing (A10/An)
      • 3.1.4 Commands
    • 3.2 SDRAM construction and operation
    • 3.3 Command interactions
      • 3.3.1 Interrupting a read burst
    • 3.4 SDRAM burst ordering
    • 3.5 SDRAM mode register
    • 3.6 Auto refresh
    • 3.7 Low power modes
    • 3.8 DDR SDRAM prefetch architecture
  • 4 Generations of SDRAM
    • 4.1 SDR SDRAM (Single Data Rate synchronous DRAM)
    • 4.2 DDR1 SDRAM
    • 4.3 DDR2 SDRAM
    • 4.4 DDR3 SDRAM
    • 4.5 DDR4 SDRAM
    • 4.6 DDR5 SDRAM
  • 5 Failed successors
    • 5.1 Rambus DRAM (RDRAM)
    • 5.2 Synchronous-Link DRAM (SLDRAM)
    • 5.3 Virtual Channel Memory (VCM) SDRAM
  • 6 See also
  • 7 References
  • 8 External links

SDRAM history

Eight SDRAM ICs on a PC100 DIMM package

Although the concept of synchronous DRAM was well understood by the 1970s and was used with early Intel processors, it was only in 1993 that SDRAM began its path to universal acceptance in the electronics industry. In 1993, Samsung introduced its KM48SL2000 synchronous DRAM, and by 2000, SDRAM had replaced virtually all other types of DRAM in modern computers because of its greater performance.

SDRAM latency is not inherently lower (faster) than that of asynchronous DRAM. Indeed, early SDRAM was somewhat slower than contemporaneous burst EDO DRAM due to the additional logic. The benefits of SDRAM's internal buffering come from its ability to interleave operations to multiple banks of memory, thereby increasing effective bandwidth.

Today, virtually all SDRAM is manufactured in compliance with standards established by JEDEC, an electronics industry association that adopts open standards to facilitate interoperability of electronic components. JEDEC formally adopted its first SDRAM standard in 1993 and subsequently adopted other SDRAM standards, including those for DDR, DDR2 and DDR3 SDRAM.

SDRAM is also available in registered varieties, for systems that require greater scalability, such as servers and workstations.

Today, the world's largest manufacturers of SDRAM include Samsung Electronics, Panasonic, Micron Technology, and Hynix.

SDRAM timing

There are several limits on DRAM performance. Most noted is the read cycle time, the time between successive read operations to an open row. This time decreased from 10 ns for 100 MHz SDRAM to 5 ns for DDR-400, but remained relatively unchanged through the DDR2-800 and DDR3-1600 generations. However, by operating the interface circuitry at increasingly higher multiples of the fundamental read rate, the achievable bandwidth has increased rapidly.

Another limit is the CAS latency, the time between supplying a column address and receiving the corresponding data. Again, this has remained relatively constant at 10–15 ns through the last few generations of DDR SDRAM.

In operation, CAS latency is a specific number of clock cycles programmed into the SDRAM's mode register and expected by the DRAM controller. Any value may be programmed, but the SDRAM will not operate correctly if it is too low. At higher clock rates, the useful CAS latency in clock cycles naturally increases: 10–15 ns corresponds to 2–3 cycles (CL2–3) of the 200 MHz clock of DDR-400 SDRAM, CL4–6 for DDR2-800, and CL8–12 for DDR3-1600. Slower clocks will naturally allow lower numbers of CAS latency cycles.
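
As a rough illustration of that conversion (a sketch only; the helper name and the 15 ns figure are illustrative, not from any datasheet), a controller rounds the chip's fixed nanosecond limit up to whole clock cycles:

    import math

    def min_cas_latency_cycles(limit_ns, clock_mhz):
        # Smallest CAS latency (in cycles) satisfying the chip's
        # fixed nanosecond limit at a given clock rate.
        return math.ceil(limit_ns / (1000.0 / clock_mhz))

    print(min_cas_latency_cycles(15, 100))  # 2 (CL2 at 100 MHz)
    print(min_cas_latency_cycles(15, 200))  # 3 (CL3 at DDR-400's 200 MHz clock)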

SDRAM modules have their own timing specifications, which may be slower than those of the chips on the module. When 100 MHz SDRAM chips first appeared, some manufacturers sold "100 MHz" modules that could not reliably operate at that clock rate. In response, Intel published the PC100 standard, which outlines requirements and guidelines for producing a memory module that can operate reliably at 100 MHz. This standard was widely influential, and the term "PC100" quickly became a common identifier for 100 MHz SDRAM modules. Modules are now commonly designated with "PC"-prefixed numbers (PC66, PC100 or PC133), although the actual meaning of the numbers has changed.

SDR SDRAM

The 64 MB of sound memory on the Sound Blaster X-Fi Fatality Pro sound card is built from two Micron 48LC32M8A2 SDRAM chips. They run at 133 MHz (7.5 ns clock period) and have 8-bit wide data buses.[1]

Originally simply known as SDRAM, single data rate SDRAM can accept one command and transfer one word of data per clock cycle. Typical clock frequencies are 100 and 133 MHz. Chips are made with a variety of data bus sizes (most commonly 4, 8 or 16 bits), but chips are generally assembled into 168-pin DIMMs that read or write 64 (non-ECC) or 72 (ECC) bits at a time.

Use of the data bus is intricate and thus requires a complex DRAM controller circuit. This is because data written to the DRAM must be presented in the same cycle as the write command, but reads produce output 2 or 3 cycles after the read command. The DRAM controller must ensure that the data bus is never required for a read and a write at the same time.

Typical SDR SDRAM clock rates are 66, 100, and 133 MHz (periods of 15, 10, and 7.5 ns). Clock rates of up to 200 MHz were available.

SDRAM control signals

All commands are timed relative to the rising edge of a clock signal. In addition to the clock, there are six control signals, mostly active low, which are sampled on the rising edge of the clock:

  • CKE (clock enable). When this signal is low, the chip behaves as if the clock has stopped: no commands are interpreted and command latency times do not elapse. The state of other control lines is not relevant. The effect of this signal is actually delayed by one clock cycle; that is, the current clock cycle proceeds as usual, but the following clock cycle is ignored, except for testing the CKE input again. Normal operations resume on the rising edge of the clock after the one where CKE is sampled high (see the sketch after this list).
    Put another way, all other chip operations are timed relative to the rising edge of a masked clock. The masked clock is the logical AND of the input clock and the state of the CKE signal during the previous rising edge of the input clock.
  • CS (chip select). When this signal is high, the chip ignores all other inputs (except for CKE) and acts as if a NOP command were received.
  • DQM (data mask). The letter Q appears because, following digital logic conventions, the data lines are known as "DQ" lines. When high, these signals suppress data I/O: accompanying write data is not actually written to the DRAM, and when asserted high two cycles before a read cycle, the read data is not output from the chip. There is one DQM line per 8 bits on a x16 memory chip or DIMM.
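
The masked-clock rule for CKE can be modelled in a few lines. The following Python sketch (our own illustration, not vendor code) marks which rising edges the chip acts on, given the CKE level sampled at each edge:

    def masked_clock_edges(cke_samples):
        # Edge n is acted on only if CKE was sampled high at edge n-1
        # (CKE assumed high before the first edge).
        used, prev_cke = [], 1
        for n, cke in enumerate(cke_samples):
            if prev_cke:
                used.append(n)
            prev_cke = cke
        return used

    # CKE sampled low at edge 2, so edge 3 is ignored:
    print(masked_clock_edges([1, 1, 0, 1, 1]))  # [0, 1, 2, 4]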

Command signals

  • RAS (row address strobe). Despite the name, this is not a strobe, but rather simply a command bit. Along with CAS and WE, it selects one of eight commands.
  • CAS (column address strobe). Despite the name, this is not a strobe, but rather simply a command bit. Along with RAS and WE, it selects one of eight commands.
  • WE (write enable). Along with RAS and CAS, it selects one of eight commands, and generally distinguishes read-like commands from write-like commands.

Bank selection (BAn)

SDRAM devices are internally divided into 2, 4 or 8 independent internal data banks. One to three bank address inputs (BA0, BA1 and BA2) are used to select which bank a command is directed toward.

Addressing (A10/An)

Many commands also use an address presented on the address input pins. Some commands, which either do not use an address or present a column address, also use A10 to select variants.

Commands

The commands are defined as follows:

CS RAS CAS WE  BAn   A10  An      Command
H  x   x   x   x     x    x       Command inhibit (no operation)
L  H   H   H   x     x    x       No operation
L  H   H   L   x     x    x       Burst terminate: stop a burst read or burst write in progress
L  H   L   H   bank  L    column  Read: read a burst of data from the currently active row
L  H   L   H   bank  H    column  Read with auto precharge: as above, and precharge (close row) when done
L  H   L   L   bank  L    column  Write: write a burst of data to the currently active row
L  H   L   L   bank  H    column  Write with auto precharge: as above, and precharge (close row) when done
L  L   H   H   bank  row          Active (activate): open a row for read and write commands
L  L   H   L   bank  L    x       Precharge: deactivate (close) the current row of the selected bank
L  L   H   L   x     H    x       Precharge all: deactivate (close) the current row of all banks
L  L   L   H   x     x    x       Auto refresh: refresh one row of each bank, using an internal counter; all banks must be precharged
L  L   L   L   0     0    mode    Load mode register: A0 through A9 are loaded to configure the DRAM chip; the most significant settings are CAS latency (2 or 3 cycles) and burst length (1, 2, 4 or 8 cycles)
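
The command encoding in the table reduces to a simple lookup. Below is a hedged Python sketch of the decode (names are ours; the A10 and bank-address qualifiers are noted in comments rather than modelled):

    def decode_command(cs, ras, cas, we):
        # Control lines given as 0 = L, 1 = H. A10 and the bank bits
        # further qualify some commands (auto precharge, precharge all).
        if cs:
            return "command inhibit (NOP)"
        return {
            (1, 1, 1): "no operation",
            (1, 1, 0): "burst terminate",
            (1, 0, 1): "read",
            (1, 0, 0): "write",
            (0, 1, 1): "active (open row)",
            (0, 1, 0): "precharge",
            (0, 0, 1): "auto refresh",
            (0, 0, 0): "load mode register",
        }[(ras, cas, we)]

    print(decode_command(0, 1, 0, 1))  # read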

All SDRAM generations (SDR and DDRx) use essentially the same commands, with the changes being:

  • Additional address bits to support larger devices
  • Additional bank select bits
  • Wider mode registers (DDR2 and up use 13 bits, A0–A12)
  • Additional extended mode registers (selected by the bank address bits)
  • DDR2 deletes the burst terminate command; DDR3 reassigns it as "ZQ calibration"
  • DDR3 and DDR4 use A12 during read and write commands to indicate "burst chop", a half-length data transfer
  • DDR4 changes the encoding of the activate command: a new signal, ACT, controls it, during which the other control lines serve as row address bits 16, 15 and 14; when ACT is high, other commands are the same as above

SDRAM construction and operation

For example, a "512 MB" SDRAM DIMM, which contains 512 MiB (mebibytes) = 512 × 2^20 bytes = 536,870,912 bytes exactly, might be made of eight or nine SDRAM chips, each containing 512 Mbit of storage and each contributing 8 bits to the DIMM's 64- or 72-bit width. A typical 512 Mbit SDRAM chip internally contains four independent 16 MiB memory banks. Each bank is an array of 8,192 rows of 16,384 bits each (1,024 16-bit columns). A bank is either idle, active, or changing from one to the other.

The active command activates an idle bank. It presents a two-bit bank address (BA0–BA1) and a 13-bit row address (A0–A12), and causes a read of that row into the bank's array of all 16,384 column sense amplifiers. This is also known as "opening" the row. This operation has the side effect of refreshing the dynamic (capacitive) memory storage cells of that row.

Once the row has been activated or "opened", read and write commands to that row are possible. Activation requires a minimum amount of time, called the row-to-column delay, or tRCD, before reads or writes to it may occur. This time, rounded up to the next multiple of the clock period, specifies the minimum number of wait cycles between an active command and a read or write command. During these wait cycles, additional commands may be sent to other banks, because each bank operates completely independently.

Both read and write commands require a column address. Because each chip accesses eight bits of data at a time, there are 2,048 possible column addresses, requiring only 11 address lines (A0–A9, A11).
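
For the example chip above, a memory controller must split a flat word address into bank, row and column fields. The sketch below assumes one common bit layout (column bits lowest, then row, then bank); real controllers choose the mapping to suit their access patterns, and the standard does not mandate one:

    def split_address(word_addr):
        # Example 512 Mbit x8 chip: 4 banks x 8,192 rows x 2,048 columns.
        column = word_addr & 0x7FF            # 11 column bits
        row = (word_addr >> 11) & 0x1FFF      # 13 row bits
        bank = (word_addr >> 24) & 0x3        # 2 bank bits
        return bank, row, column

    print(split_address(0x3FFFFFF))  # (3, 8191, 2047): the last word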

When a read command is issued, the SDRAM will produce the corresponding output data on the DQ lines in time for the rising edge of the clock a few clock cycles later, depending on the configured CAS latency. Subsequent words of the burst will be produced in time for subsequent rising clock edges.

A write command is accompanied by the data to be written, driven onto the DQ lines during the same rising clock edge. It is the duty of the memory controller to ensure that the SDRAM is not driving read data onto the DQ lines at the same time that it needs to drive write data onto those lines. This can be done by waiting until a read burst has finished, by terminating a read burst, or by using the DQM control line.

When the memory controller needs to access a different row, it must first return that bank's sense amplifiers to an idle state, ready to sense the next row. This is known as a "precharge" operation, or "closing" the row. A precharge may be commanded explicitly, or it may be performed automatically at the conclusion of a read or write operation. Again, there is a minimum time, the row precharge delay, tRP, which must elapse before that bank is fully idle and may receive another activate command.

Although refreshing a row is an automatic side effect of activating it, there is a minimum time for this to happen, which requires a minimum row access time, tRAS, between an active command opening a row and the corresponding precharge command closing it. This limit is usually dwarfed by the time taken by the desired read and write commands to the row, so its value has little effect on typical performance.

Command interactions

The no operation command is always permitted, while the load mode register command requires that all banks be idle, and a delay afterward for the changes to take effect. The auto refresh command also requires that all banks be idle, and takes a refresh cycle time, tRFC, to return the chip to the idle state. (This time is usually equal to tRCD + tRP.) The only other command that is permitted on an idle bank is the active command. This takes, as mentioned above, tRCD before the row is fully open and can accept read and write commands.

When a bank is open, there are four commands permitted: read, write, burst terminate, and precharge. Read and write commands begin bursts, which can be interrupted by following commands.
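
The bank-level rules of the last two paragraphs amount to a small state table. A minimal Python model (ours, ignoring timing parameters such as tRCD and tRP):

    # Commands each bank state accepts (global commands such as load
    # mode register and auto refresh require *all* banks to be idle).
    PERMITTED = {
        "idle": {"nop", "active", "auto refresh", "load mode register"},
        "open": {"nop", "read", "write", "burst terminate", "precharge"},
    }

    print("active" in PERMITTED["open"])  # False: must precharge first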

Interrupting a read burst

A read, burst terminate, or precharge command may be issued at any time after a read command, and will interrupt the read burst after the configured CAS latency. So if a read command is issued on cycle 0, another read command is issued on cycle 2, and the CAS latency is 3, then the first read command will begin bursting data out during cycles 3 and 4, and the results from the second read command will appear beginning with cycle 5.

If the command issued on cycle 2 were burst terminate, or a precharge of the active bank, then no output would be generated during cycle 5.

Although the interrupting read may be to any active bank, a precharge command will only interrupt the read burst if it is to the same bank or to all banks; a precharge command to a different bank will not interrupt a read burst.

Interrupting a read burst with a write command is possible, but more difficult. It can be done if the DQM signal is used to suppress output from the SDRAM so that the memory controller may drive data over the DQ lines to the SDRAM in time for the write operation. Because the effects of DQM on read data are delayed by two cycles, but the effects of DQM on write data are immediate, DQM must be raised (to mask the read data) beginning at least two cycles before the write command, but must be lowered for the cycle of the write command (assuming the write command is intended to have an effect).

Doing this in only two clock cycles requires careful coordination between the time the SDRAM takes to turn off its output on a clock edge and the time the data must be supplied as input to the SDRAM for the write on the following clock edge. If the clock frequency is too high to allow sufficient time, three cycles may be required.

If the read command includes auto-precharge, the precharge begins the same cycle as the interrupting command.

SDRAM burst ordering

A modern microprocessor with a cache will generally access memory in units of cache lines. To transfer a 64-byte cache line requires eight consecutive accesses to a 64-bit DIMM, which can all be triggered by a single read or write command by configuring the SDRAM chips, using the mode register, to perform eight-word bursts. A cache line fetch is typically triggered by a read from a particular address, and SDRAM allows the "critical word" of the cache line to be transferred first. ("Word" here refers to the width of the SDRAM chip or DIMM, which is 64 bits for a typical DIMM.) SDRAM chips support two possible conventions for the ordering of the remaining words in the cache line.

Bursts always access an aligned block of BL consecutive words beginning on a multiple of BL. So, for example, a four-word burst access to any column address from four to seven will return words four to seven. The ordering, however, depends on the requested address and the configured burst type option: sequential or interleaved. Typically, a memory controller will require one or the other. When the burst length is one or two, the burst type does not matter. For a burst length of one, the requested word is the only word accessed. For a burst length of two, the requested word is accessed first, and the other word in the aligned block is accessed second. This is the following word if an even address was specified, and the previous word if an odd address was specified.

For the sequential burst mode, later words are accessed in increasing address order, wrapping back to the start of the block when the end is reached. So, for example, for a burst length of four and a requested column address of five, the words would be accessed in the order 5-6-7-4. If the burst length were eight, the access order would be 5-6-7-0-1-2-3-4. This is done by adding a counter to the column address and ignoring carries past the burst length. The interleaved burst mode computes the address using an exclusive-or operation between the counter and the address. Using the same starting address of five, a four-word burst would return words in the order 5-4-7-6. An eight-word burst would be 5-4-7-6-1-0-3-2.[2] Although more confusing to humans, this can be easier to implement in hardware, and is preferred by Intel for its microprocessors.[citation needed]

If the requested column address is at the start of a block, both burst modes (sequential and interleaved) return data in the same sequential order 0-1-2-3-4-5-6-7. The difference only matters when fetching a cache line from memory in critical-word-first order.
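
Both orderings are easy to reproduce. The sketch below (illustrative Python; the function name is ours) generates the burst sequences from the worked examples above:

    def burst_order(start, burst_len, interleaved=False):
        # Sequential mode adds a counter to the address and ignores
        # carries past the burst length; interleaved mode XORs the
        # counter with the address.
        base = start & ~(burst_len - 1)     # aligned block start
        offset = start & (burst_len - 1)
        if interleaved:
            return [base + (offset ^ i) for i in range(burst_len)]
        return [base + ((offset + i) % burst_len) for i in range(burst_len)]

    print(burst_order(5, 4))                    # [5, 6, 7, 4]
    print(burst_order(5, 4, interleaved=True))  # [5, 4, 7, 6]
    print(burst_order(5, 8, interleaved=True))  # [5, 4, 7, 6, 1, 0, 3, 2]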

SDRAM mode register

Single data rate SDRAM has a single 10-bit programmable mode register. Later double-data-rate SDRAM standards add additional mode registers, addressed using the bank address pins. For SDR SDRAM, the bank address pins and address lines A10 and above are ignored, but should be zero during a mode register write.

The bits are M9 through M0, presented on address lines A9 through A0 during a load mode register cycle (a sketch encoding these fields follows the list):

  1. M9: Write burst mode. If 0, writes use the read burst length and mode; if 1, all writes are non-burst (single location).
  2. M8, M7: Operating mode. Reserved, and must be 00.
  3. M6, M5, M4: CAS latency. Generally only 010 (CL2) and 011 (CL3) are legal. Specifies the number of cycles between a read command and data output from the chip. The chip has a fundamental limit on this value in nanoseconds; during initialization, the memory controller must use its knowledge of the clock frequency to translate that limit into cycles.
  4. M3: Burst type. 0 requests sequential burst ordering, while 1 requests interleaved burst ordering.
  5. M2, M1, M0: Burst length. Values of 000, 001, 010 and 011 specify a burst size of 1, 2, 4 or 8 words, respectively. Each read (and write, if M9 is 0) will perform that many accesses, unless interrupted by a burst stop or other command. A value of 111 specifies a full-row burst, which will continue until interrupted. Full-row bursts are only permitted with the sequential burst type.
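
A hedged sketch of this encoding (the helper and its argument names are ours, not JEDEC's):

    def sdr_mode_register(cas_latency, burst_len, interleaved=False,
                          write_single=False):
        # Pack the 10-bit mode register: M0-M2 burst length, M3 burst
        # type, M4-M6 CAS latency, M7-M8 reserved (00), M9 write mode.
        value = {1: 0b000, 2: 0b001, 4: 0b010, 8: 0b011}[burst_len]
        value |= (1 << 3) if interleaved else 0
        value |= cas_latency << 4       # CL2 -> 010, CL3 -> 011
        value |= (1 << 9) if write_single else 0
        return value

    print(bin(sdr_mode_register(cas_latency=3, burst_len=8)))  # 0b110011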

Later double data rate SDRAM standards use more mode register bits, and provide additional mode registers called extended mode registers. The register number is encoded on the bank address pins during the load mode register command. For example, DDR2 SDRAM has a 13-bit mode register, a 13-bit extended mode register No. 1 (EMR1), and a 5-bit extended mode register No. 2 (EMR2).

Auto refresh

It is possible to refresh a RAM chip by opening and closing (activating and precharging) each row in each bank. However, to simplify the memory controller, SDRAM chips support an "auto refresh" command, which performs these operations to one row in each bank simultaneously. The SDRAM also maintains an internal counter, which iterates over all possible rows. The memory controller must simply issue a sufficient number of auto refresh commands (one per row, 4,096 in the example we have been using) every refresh interval (tREF = 64 ms is a common value). All banks must be idle (closed, precharged) when this command is issued.
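
In practice this works out to a fixed refresh cadence. A one-line check, using the figures above (4,096 rows, tREF = 64 ms):

    # One auto refresh command per row per refresh interval:
    t_ref_ms, rows = 64, 4096
    print(t_ref_ms * 1000 / rows, "us between auto refresh commands")  # 15.625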

Low power modes

As mentioned, the clock enable (CKE) input can be used to effectively stop the clock to an SDRAM. The CKE input is sampled on each rising edge of the clock and, if it is low, the following rising edge of the clock is ignored for all purposes other than checking CKE. As long as CKE is low, it is permissible to change the clock rate, or even stop the clock entirely.

If CKE is lowered while the SDRAM is performing operations, it simply "freezes" in place until CKE is raised again.

If the SDRAM is idle (all banks precharged, no commands in progress) when CKE is lowered, the SDRAM automatically enters power-down mode, consuming minimal power until CKE is raised again. This must not last longer than the maximum refresh interval, tREF, or memory contents may be lost. It is legal to stop the clock entirely during this time for additional power savings.

Finally, if CKE is lowered at the same time as an auto-refresh command is sent to the SDRAM, the SDRAM enters self-refresh mode. This is like power-down, but the SDRAM uses an on-chip timer to generate internal refresh cycles as necessary. The clock may be stopped during this time. While self-refresh mode consumes slightly more power than power-down mode, it allows the memory controller to be disabled entirely, which commonly more than makes up the difference.

SDRAM designed for battery-powered devices offers some additional power-saving options. One is temperature-dependent refresh; an on-chip temperature sensor reduces the refresh rate at lower temperatures, rather than always running it at the worst-case rate. Another is selective refresh, which limits self-refresh to a portion of the DRAM array; the fraction which is refreshed is configured using an extended mode register. The third, implemented in mobile DDR (LPDDR) and LPDDR2, is "deep power down" mode, which invalidates the memory and requires a full reinitialization to exit. This mode is activated by sending a "burst terminate" command while lowering CKE.

DDR SDRAM prefetch architecture

DDR SDRAM employs a prefetch architecture to allow quick and easy access to multiple data words located on a common physical row in the memory.

The prefetch architecture takes advantage of the specific characteristics of memory accesses to DRAM. Typical DRAM memory operations involve three phases: bitline precharge, row access, and column access. Row access is the heart of a read operation, as it involves the careful sensing of the tiny signals in DRAM memory cells; it is the slowest phase of memory operation. However, once a row is read, subsequent column accesses to that same row can be very quick, as the sense amplifiers also act as latches. For reference, a row of a 1 Gbit DDR3 device is 2,048 bits wide, so internally 2,048 bits are read into 2,048 separate sense amplifiers during the row access phase. Row accesses might take 50 ns, depending on the speed of the DRAM, whereas column accesses off an open row take less than 10 ns.

Traditional DRAM architectures have long supported fast column access to bits on an open row. For an 8-bit-wide memory chip with a 2,048-bit-wide row, accesses to any of the 256 data words (2,048/8) on the row can be very quick, provided no intervening accesses to other rows occur.

The drawback of the older fast-column-access method was that a new column address had to be sent for each additional data word on the row, so the address bus had to operate at the same frequency as the data bus. Prefetch architecture simplifies this process by allowing a single address request to result in multiple data words.

In a prefetch buffer architecture, when a memory access occurs to a row, the buffer grabs a set of adjacent data words on the row and reads them out ("bursts" them) in rapid-fire sequence on the I/O pins, without the need for individual column address requests. This assumes the CPU wants adjacent data words in memory, which in practice is very often the case. For instance, in DDR1, two adjacent data words will be read from each chip in the same clock cycle and placed in the prefetch buffer. Each word will then be transmitted on consecutive rising and falling edges of the clock cycle. Similarly, in DDR2, with a 4n prefetch buffer, four consecutive data words are read and placed in the buffer, while an external clock running twice as fast as the internal clock transmits each word on consecutive rising and falling edges.[3]

The prefetch buffer depth can also be thought of as the ratio between the core memory frequency and the I/O frequency. In an 8n prefetch architecture (such as DDR3), the I/Os will operate eight times faster than the memory core; each memory access results in a burst of eight data words on the I/Os. Thus a 200 MHz memory core is combined with I/Os that each operate eight times faster (1,600 megabits per second). If the memory has 16 I/Os, the total read bandwidth would be 200 MHz × 8 data words/access × 16 I/Os = 25.6 gigabits per second (Gbit/s), or 3.2 gigabytes per second (GB/s). Modules with multiple DRAM chips can provide correspondingly higher bandwidth.
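
The bandwidth arithmetic from this paragraph, written out (illustrative Python):

    core_mhz = 200       # memory core clock
    prefetch = 8         # 8n prefetch (DDR3)
    io_count = 16        # x16 device
    gbit_s = core_mhz * 1e6 * prefetch * io_count / 1e9
    print(gbit_s, "Gbit/s =", gbit_s / 8, "GB/s")  # 25.6 Gbit/s = 3.2 GB/s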

Each generation of SDRAM has a different prefetch buffer size:

  • DDR SDRAM's prefetch buffer size is 2n (two data words per memory access)
  • DDR2 SDRAM's prefetch buffer size is 4n (four data words per memory access)
  • DDR3 SDRAM's prefetch buffer size is 8n (eight data words per memory access)
  • DDR4 SDRAM's prefetch buffer size is 8n (eight data words per memory access)

Generations of SDRAM

SDRAM feature map
Type   Feature changes
SDRAM  Vcc = 3.3 V
       Signal: LVTTL
DDR1   Access is ≥2 words
       Double clocked
       Vcc = 2.5 V
       2.5–7.5 ns per cycle
       Signal: SSTL_2 (2.5 V)[4]
DDR2   Access is ≥4 words
       "Burst terminate" removed
       4 units used in parallel
       1.25–5 ns per cycle
       Internal operations are at 1/2 the clock rate
       Signal: SSTL_18 (1.8 V)[4]
DDR3   Access is ≥8 words
       Signal: SSTL_15 (1.5 V)[4]
       Much longer CAS latencies
DDR4   Vcc ≤ 1.2 V; point-to-point (single module per channel)

SDR SDRAM (Single Data Rate synchronous DRAM)

This type of SDRAM is slower than the DDR variants, because only one word of data is transmitted per clock cycle (single data rate). But it is also faster than its predecessors, EDO RAM and FPM RAM, which typically took two or three clock cycles to transfer one word of data.

DDR1 SDRAM

Main article: DDR SDRAM

While the access latency of DRAM is fundamentally limited by the DRAM array, DRAM has very high potential bandwidth because each internal read is actually a row of many thousands of bits. To make more of this bandwidth available to users, a double data rate interface was developed. This uses the same commands, accepted once per cycle, but reads or writes two words of data per clock cycle. The DDR interface accomplishes this by reading and writing data on both the rising and falling edges of the clock signal. In addition, some minor changes to the SDR interface timing were made in hindsight, and the supply voltage was reduced from 3.3 to 2.5 V. As a result, DDR SDRAM is not backwards compatible with SDR SDRAM.

DDR SDRAM (sometimes called DDR1 for greater clarity) doubles the minimum read or write unit; every access refers to at least two consecutive words.

Typical DDR SDRAM clock rates are 133, 166 and 200 MHz (7.5, 6, and 5 ns/cycle), generally described as DDR-266, DDR-333 and DDR-400 (3.75, 3, and 2.5 ns per beat). Corresponding 184-pin DIMMs are known as PC-2100, PC-2700 and PC-3200. Performance up to DDR-550 (PC-4400) is available for a price.

DDR2 SDRAM

Main article: DDR2 SDRAM

DDR2 SDRAM is very similar to DDR SDRAM, but doubles the minimum read or write unit again, to four consecutive words. The bus protocol was also simplified to allow higher-performance operation. (In particular, the "burst terminate" command is deleted.) This allows the bus rate of the SDRAM to be doubled without increasing the clock rate of internal RAM operations; instead, internal operations are performed in units four times as wide as SDRAM. Also, an extra bank address pin (BA2) was added to allow eight banks on large RAM chips.

Typical DDR2 SDRAM clock rates are 200, 266, 333 or 400 MHz (periods of 5, 3.75, 3 and 2.5 ns), generally described as DDR2-400, DDR2-533, DDR2-667 and DDR2-800 (periods of 2.5, 1.875, 1.5 and 1.25 ns). Corresponding 240-pin DIMMs are known as PC2-3200 through PC2-6400. DDR2 SDRAM is now available at a clock rate of 533 MHz, generally described as DDR2-1066, and the corresponding DIMMs are known as PC2-8500 (also named PC2-8600, depending on the manufacturer). Performance up to DDR2-1250 (PC2-10000) is available for a price.

Note that because internal operations are at half the clock rate, DDR2-400 memory (internal clock rate 100 MHz) has somewhat higher latency than DDR-400 (internal clock rate 200 MHz).

DDR3 SDRAM

Main article: DDR3 SDRAM

DDR3 continues the trend, doubling the minimum read or write unit to eight consecutive words. This allows another doubling of bandwidth and external bus rate without having to change the clock rate of internal operations, just the width. To maintain 800–1600 M transfers/s (both edges of a 400–800 MHz clock), the internal RAM array has to perform 100–200 M fetches per second.

Again, with every doubling, the downside is the increased latency. As with all DDR SDRAM generations, commands are still restricted to one clock edge, and command latencies are given in terms of clock cycles, which are half the speed of the usually quoted transfer rate (a CAS latency of 8 with DDR3-800 is 8/(400 MHz) = 20 ns, exactly the same latency as CAS2 on PC100 SDR SDRAM).

DDR3 memory chips are being made commercially,[5] and computer systems using them were available from the second half of 2007,[6] with significant usage from 2008 onwards.[7] Initial clock rates were 400 and 533 MHz, described as DDR3-800 and DDR3-1066 (PC3-6400 and PC3-8500 modules), but 667 and 800 MHz, described as DDR3-1333 and DDR3-1600 (PC3-10600 and PC3-12800 modules), are now common.[8] Performance up to DDR3-2800 (PC3-22400 modules) is available for a price.[9]

DDR4 SDRAM

Main article: DDR4 SDRAM

DDR4 SDRAM is the successor to DDR3 SDRAM. It was revealed at the Intel Developer Forum in San Francisco in 2008, and was due to be released to market during 2011. The timing varied considerably during its development: it was originally expected to be released in 2012,[10] and later (during 2010) expected to be released in 2015,[11] before samples were announced in early 2011 and manufacturers began to announce that commercial production and release to market were anticipated in 2012. DDR4 was expected to reach mass-market adoption around 2015, comparable with the approximately five years taken for DDR3 to achieve the mass-market transition over DDR2.

The new chips are expected to run at 1.2 V or less,[12][13] versus the 1.5 V of DDR3 chips, and have in excess of 2 billion data transfers per second. They are expected to be introduced at frequency rates of 2133 MHz, estimated to rise to a potential 4266 MHz,[14] and a lowered voltage of 1.05 V[15] by 2013.

DDR4 will not double the internal prefetch width again, but will use the same 8n prefetch as DDR3.[16] Thus, it will be necessary to interleave reads from several banks to keep the data bus busy.

In February 2009, Samsung validated 40 nm DRAM chips, considered a "significant step" towards DDR4 development,[17] since, as of 2009, DRAM chips were only beginning to migrate to a 50 nm process.[18] In January 2011, Samsung announced the completion and release for testing of a 30 nm 2 GB DDR4 DRAM module. It has a maximum bandwidth of 2.13 Gbit/s at 1.2 V, uses pseudo open drain technology, and draws 40% less power than an equivalent DDR3 module.[19][20]

DDR5 SDRAM

In March 2017, JEDEC announced that a DDR5 standard was under development,[21] but provided no details except for the goals of doubling the bandwidth of DDR4, reducing power consumption, and publishing the standard in 2018.

Failed successors

In addition to DDR, there were several other proposed memory technologies to succeed SDR SDRAM.

Rambus DRAM (RDRAM)

RDRAM was a proprietary technology that competed against DDR. Its relatively high price and disappointing performance (resulting from high latencies and a narrow 16-bit data channel versus DDR's 64-bit channel) caused it to lose the race to succeed SDR DRAM.

Synchronous-Link DRAM (SLDRAM)

SLDRAM boasted higher performance and competed against RDRAM. It was developed during the late 1990s by the SLDRAM Consortium, which consisted of about 20 major DRAM and computer industry manufacturers. (The SLDRAM Consortium became incorporated as SLDRAM Inc. and then changed its name to Advanced Memory International, Inc.) SLDRAM was an open standard and did not require licensing fees. The specifications called for a 64-bit bus running at a 200, 300 or 400 MHz clock frequency. This is achieved by all signals being on the same line, thereby avoiding the synchronization time of multiple lines. Like DDR SDRAM, SLDRAM uses a double-pumped bus, giving it an effective speed of 400,[22] 600,[23] or 800 MT/s.

SLDRAM used an 11-bit command bus (10 command bits CA9:0 plus one start-of-command FLAG line) to transmit 40-bit command packets on four consecutive edges of a differential command clock (CCLK/CCLK#). Unlike SDRAM, there were no per-chip select signals; each chip was assigned an ID when reset, and the command contained the ID of the chip that should process it. Data was transferred in 4- or 8-word bursts across an 18-bit (per chip) data bus, using one of two differential data clocks (DCLK0/DCLK0# and DCLK1/DCLK1#). Unlike standard SDRAM, the clock was generated by the data source (the SLDRAM chip in the case of a read operation) and transmitted in the same direction as the data, greatly reducing data skew. To avoid the need for a pause when the source of the DCLK changes, each command specified which DCLK pair it would use.[24]

The basic read/write command consisted of the following fields (beginning with CA9 of the first word):

SLDRAM read, write or row-op request packet
Word  FLAG  CA9–CA0
1st   1     ID8 ... ID0 (device ID), CMD5
2nd   0     CMD4 ... CMD0 (command code), bank, row (start)
3rd   0     row (continued), 0
4th   0     0, 0, 0, column
  • 9 bits of device ID
  • 6 bits of command
  • 3 bits of bank address
  • 10 or 11 bits of row address
  • 5 or 4 bits spare for row or column expansion
  • 7 bits of column address

Individual devices had 8-bit IDs. The ninth bit of the ID sent in commands was used to address multiple devices. Any aligned power-of-2-sized group could be addressed. If the transmitted msbit was set, all least-significant bits up to and including the least-significant 0 bit of the transmitted address were ignored for "is this addressed to me?" purposes. (If the ID8 bit is actually considered less significant than ID0, the unicast address matching becomes a special case of this pattern.)
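
That matching rule can be expressed compactly. The following Python sketch is our reading of the description above, not code from the SLDRAM specification:

    def id_match(sent_id9, device_id8):
        # If bit 8 of the transmitted ID is set, ignore the low bits of
        # the address up to and including its least-significant 0 bit.
        if sent_id9 & 0x100:
            low, bit = sent_id9 & 0xFF, 0
            while low & (1 << bit):
                bit += 1
            keep = 0xFF & ~((1 << (bit + 1)) - 1)
            return (device_id8 & keep) == (low & keep)
        return device_id8 == (sent_id9 & 0xFF)

    print(id_match(0x117, 0x15))  # True: 0x117 addresses devices 0x10-0x1F
    print(id_match(0x117, 0x25))  # False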

A read/write command had the msbit clear:

  • CMD5=0
  • CMD4=1 to open (activate) the specified row; CMD4=0 to use the currently open row
  • CMD3=1 to transfer an 8-word burst; CMD3=0 for a 4-word burst
  • CMD2=1 for a write, CMD2=0 for a read
  • CMD1=1 to close the row after this access; CMD1=0 to leave it open
  • CMD0 selects the DCLK pair to use (DCLK1 or DCLK0)

A notable omission from the specification was per-byte write enables; it was designed for systems with caches and ECC memory, which always write in multiples of a cache line.

Additional commands (with CMD5 set) opened and closed rows without a data transfer, performed refresh operations, read or wrote configuration registers, and performed other maintenance operations. Most of these commands supported an additional 4-bit sub-ID (sent as five bits, using the same multiple-destination encoding as the primary ID) which could be used to distinguish devices that were assigned the same primary ID because they were connected in parallel and were always read/written at the same time.

There were a number of 8-bit control registers and 32-bit status registers to control various device timing parameters.

Virtual Channel Memory (VCM) SDRAM

VCM was a proprietary type of SDRAM that was designed by NEC, but released as an open standard with no licensing fees. It is pin-compatible with standard SDRAM, but the commands are different. The technology was a potential competitor of RDRAM because VCM was not nearly as expensive as RDRAM. A Virtual Channel Memory (VCM) module is mechanically and electrically compatible with standard SDRAM, so support for both depends only on the capabilities of the memory controller. In the late 1990s, a number of PC northbridge chipsets (such as the popular VIA KX133 and KT133) included VCSDRAM support.

VCM inserts an SRAM cache of 16 "channel" buffers, each a quarter-row "segment" in size, between the DRAM banks' sense amplifier rows and the data I/O pins. "Prefetch" and "Restore" commands, unique to VCSDRAM, copy data between the DRAM's sense amplifier row and the channel buffers, while the equivalents of SDRAM's read and write commands specify a channel number to access. Reads and writes may thus be performed independently of the currently active state of the DRAM array, with the equivalent of four full DRAM rows being "open" for access at a time. This is an improvement over the two open rows possible in a standard two-bank SDRAM. (There is actually a 17th "dummy channel" used for some operations.)

To read from VCSDRAM, after the active command, a "prefetch" command is required to copy data from the sense amplifier array to the channel SRAM. This command specifies a bank, two bits of column address (to select the segment of the row), and four bits of channel number. Once this is performed, the DRAM array may be precharged while read commands to the channel buffer continue. To write, first the data is written to a channel buffer (typically previously initialized using a prefetch command), then a restore command, with the same parameters as the prefetch command, copies a segment of data from the channel to the sense amplifier array.

Unlike a normal SDRAM write, which must be performed to an active (open) row, the VCSDRAM bank must be precharged (closed) when the restore command is issued. An active command immediately after the restore command (specifying the DRAM row) completes the write to the DRAM array. There is, in addition, a 17th "dummy channel" which allows writes to the currently open row. It may not be read from, but may be prefetched to, written to, and restored to the sense amplifier array.[25][26]
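
Putting the last two paragraphs together, the command sequences implied for VCSDRAM reads and writes look roughly as follows (an illustrative summary of the text above, not vendor pseudocode):

    # Illustrative VCSDRAM command sequences (our summary):
    VCM_READ = [
        "ACTIVE bank, row",            # open the row
        "PREFETCH bank, segment, ch",  # row segment -> channel buffer
        "PRECHARGE bank",              # array free for other work
        "READ ch, column",             # repeated channel reads
    ]
    VCM_WRITE = [
        "WRITE ch, column",            # data into the channel buffer
        "PRECHARGE bank",              # bank must be closed for Restore
        "RESTORE bank, segment, ch",   # channel buffer -> sense amps
        "ACTIVE bank, row",            # completes the write to the array
    ]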

Although normally a segment is restored to the same memory address as it was prefetched from, the channel buffers may also be used for very efficient copying or clearing of large, aligned memory blocks. (The use of quarter-row segments is driven by the fact that DRAM cells are narrower than SRAM cells; the SRAM bits are designed to be four DRAM bits wide, and are conveniently connected to one of the four DRAM bits they straddle.) Additional commands prefetch a pair of segments to a pair of channels, and an optional command combines prefetch, read, and precharge to reduce the overhead of random reads.

Virtual Channel SDRAM commands[27]
CS RAS CAS WE  BA   A12–11  A10 A9–A8   A7 A6 A5 A4–2 A1–0    Command
H  x   x   x   x    —       x   —                             Command inhibit (no operation)
L  H   H   H   x    —       x   —                             No operation
L  H   H   L   bank channel AP  channel L  L  L  x    segment Prefetch (auto-precharge if AP=H)
L  H   H   L   bank x       x   x       L  L  H  x    segment Prefetch to dummy channel
L  H   H   L   bank channel AP  channel L  H     x    segment Pair prefetch (auto-precharge if AP=H)
L  H   H   L   bank channel AP  channel H  L     x    segment Restore (auto-precharge if AP=H)
L  H   H   L   bank x       L   x       H  H  —  x    —       Precharge bank
L  H   H   L   x    x       H   x       H  H  —  x    —       Precharge all banks
L  H   L   H   x    channel x   channel column                Read channel
L  H   L   L   L    channel x   channel column                Write channel
L  H   L   L   H    x       x   x       AR column             Write dummy channel (auto-restore if AR=H)
L  L   H   H   bank row                                       Bank activate
L  L   H   L   seg  channel seg channel column                Prefetch read (with optional auto-precharge)
L  L   L   H   x    —       x   —                             Auto refresh
L  L   L   L   reg  mode register data                        Mode register set

The above are the JEDEC-standardized commands. Earlier chips did not support the dummy channel or pair prefetch, and used a different encoding for precharge.

A 13-bit address bus, as illustrated here, is suitable for a device up to 128 Mbit. It would have two banks, each containing 8,192 rows and 8,192 columns. Thus, row addresses are 13 bits, segment addresses are 2 bits, and 8 column address bits are required to select one byte from the 2,048 bits (256 bytes) in a segment.

See also

  • GDDR (graphics DDR) and its subtypes GDDR2, GDDR3, GDDR4, and GDDR5
  • List of device bandwidths
  • Serial presence detect - EEPROM with timing data on SDRAM modules
  • SDRAM Tutorial - Flash website built by Tel-Aviv University students
  • A concise but thorough review of SDRAM architecture/terminology and command timing dependencies in High-Performance DRAM System Design Constraints and Considerations, a master's thesis from the University of Maryland

References

  1. ^ "SDRAM Part Catalog"  070928 microncom
  2. ^ "Nanya 256 Mb DDR SDRAM Datasheet" PDF intelcom April 2003 Retrieved 2015-08-02 
  3. ^ Micron, General DDR SDRAM Functionality, Technical Note, TN-46-05
  4. ^ a b c "EDA DesignLine, januari 12, 2007, The outlook for DRAMs in consumer electronics"  100622 edadesignlinecom
  5. ^ "What is DDR memory" 
  6. ^ Thomas Soderstrom June 5, 2007 "Pipe Dreams: Six P35-DDR3 Motherboards Compared" Tom's Hardware 
  7. ^ "AMD to Adopt DDR3 in Three Years" 
  8. ^ Wesly Fink July 20, 2007 "Super Talent & TEAM: DDR3-1600 Is Here!" Anandtech 
  9. ^ Jennifer Johnson 24 April 2012 "GSKILL Announces DDR3 Memory Kit For Ivy Bridge" 
  10. ^ DDR4 PDF page 23
  11. ^ "DDR4 not expected until 2015" semiaccuratecom 
  12. ^ "IDF: "DDR3 won't catch up with DDR2 during 2009"" Alphr 
  13. ^ "heise online - IT-News, Nachrichten und Hintergründe" heise online 
  14. ^ "Next-Generation DDR4 Memory to Reach 4266GHz - Report" Xbitlabscom August 16, 2010 Retrieved 2011-01-03 
  15. ^ "IDF: DDR4 memory targeted for 2012" in German hardware-infoscom Retrieved 2009-06-16  English translation
  16. ^ "JEDEC Announces Key Attributes of Upcoming DDR4 Standard" Press release JEDEC 2011-08-22 Retrieved 2011-01-06 
  17. ^ Gruener, Wolfgang February 4, 2009 "Samsung hints to DDR4 with first validated 40 nm DRAM" tgdailycom Retrieved 2009-06-16 
  18. ^ Jansen, Ng January 20, 2009 "DDR3 Will be Cheaper, Faster in 2009" dailytechcom Retrieved 2009-06-17 
  19. ^ "Samsung Develops Industry's First DDR4 DRAM, Using 30nm Class Technology" Samsung 2011-01-04 Retrieved 2011-03-13 
  20. ^ "Samsung develops DDR4 memory, up to 40% more efficient" TechSpot 
  21. ^ "JEDEC DDR5 & NVDIMM-P Standards Under Development" Press release JEDEC 30 March 2017 
  22. ^ Dean Kent 1998-10-24, RAM Guide: SLDRAM, Tom's Hardware, retrieved 2011-01-01 
  23. ^ Hyundai Electronics 1997-12-20, HYSL8M18D600A 600 Mb/s/pin 8M x 18 SLDRAM PDF data sheet, retrieved 2011-12-27 
  24. ^ SLDRAM Inc 1998-07-09, SLD4M18DR400 400 Mb/s/pin 4M x 18 SLDRAM PDF data sheet, pp 32–33, retrieved 2011-12-27 
  25. ^ Siemens Semiconductor Group, HYB39V64x0yT 64MBit Virtual Channel SDRAM PDF, retrieved 2012-07-17 
  26. ^ NEC 1999, 128M-BIT VirtualChannel™ SDRAM preliminary datasheet PDF, retrieved 2012-07-17 
  27. ^ JEDEC Standard 21-C section 31154 "VCSDRAM Specific SDRAM Functions"

External links

  • Everything you always wanted to know about SDRAM memory, but were afraid to ask, August 2010, AnandTech
  • Understanding RAM Timings, May 2011, Hardware Secrets
