## Description

**Existing System:**

Presently, the continued downscaling of CMOS feature sizes provides memory systems with great storage capacity. However, this scaling has also increased the memory fault rate. With today's aggressive scaling, the critical charge of a memory cell, and thus the energy needed to provoke a single-event upset (SEU) in storage, has been reduced. As different experiments have shown, in addition to traditional single-cell upsets (SCUs), this energy reduction can provoke multiple-cell upsets (MCUs), that is, simultaneous errors in more than one memory cell induced by a single particle hit. For space applications, the MCU problem must be taken into account when designing fault tolerance mechanisms, as space is an aggressive environment subject to the impact of high-energy cosmic particles.

Traditionally, error correction codes (ECCs) have been used to protect memory systems. Common ECCs employed to protect standard memories are single-error-correction (SEC) and single-error-correction–double-error-detection (SEC–DED) codes. SEC codes can correct an error in a single memory cell. SEC–DED codes can correct an error in a single memory cell and, in addition, detect two errors in two independent cells.

The main problem when memory systems employ an ECC is the redundancy required. The extra bits added are used to detect and/or correct errors, and they must be added to every data word stored in memory. The amount of storage occupied by redundant bits therefore scales with the memory capacity. For example, if an ECC with 100% redundancy is employed in a 2-GB memory, only 1 GB is available to store the payload (the "clean" data); the remaining 1 GB is required for code bits. In addition, the use of an ECC implies area, power, and delay overheads in the encoder and decoder circuits. These overheads must be kept as low as possible, especially in space applications.
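To make the redundancy scaling concrete: a SEC (Hamming-style) code over k data bits needs the smallest number of check bits r satisfying 2^r ≥ k + r + 1, and SEC–DED adds one extra parity bit. A small sketch (standard Hamming bound, not tied to any particular code in this paper):

```python
# Check bits needed by a SEC (Hamming) code for k data bits: the smallest r
# with 2**r >= k + r + 1. A SEC-DED code adds one more overall parity bit.
def sec_check_bits(k):
    r = 1
    while 2 ** r < k + r + 1:
        r += 1
    return r

for k in (8, 16, 32, 64):
    r = sec_check_bits(k)
    # data bits, SEC check bits, SEC-DED check bits, relative SEC-DED overhead
    print(k, r, r + 1, (r + 1) / k)
```

Note how the relative overhead shrinks as the word widens, which is why redundancy matters most for short data words.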

In this paper, we present a series of ECCs that greatly reduce the redundancy introduced while maintaining, or even improving, memory error coverage. Area, power, and delay overheads are also reduced. These new codes have been designed using the flexible unequal error control (FUEC) methodology developed by Saiz-Adalid et al., which introduces an algorithm (and a tool) to design FUEC codes. FUEC codes are an improvement of the well-known unequal error control (UEC) codes, but the FUEC methodology can also find other kinds of codes; in this paper, it is employed to find low-redundancy codes. These novel codes differ from those presented previously: they share only the design methodology, applied with different parameters. Using the tool, we can generate the parity check matrix of an ECC automatically and efficiently, just by defining its error detection and/or correction capabilities.

**Background on ECCs for Space Applications:**

Different ECCs have traditionally been applied to space missions. For instance, the Berger code or the well-known parity code has been used for detection purposes. When error correction is needed, more complex codes can be used, such as Hamming, Hadamard, repetition, Golay, Bose–Chaudhuri–Hocquenghem (BCH), Reed–Solomon, Reed–Muller, multidimensional, or matrix codes. Hamming codes can easily be built for any word length, and their encoding and decoding circuits are simple to implement. Their main drawback is that only one erroneous bit can be corrected. Nevertheless, for common data word lengths (8, 16, 32, and 64 bits), Hamming codes can detect some double-error patterns in addition to providing SEC. Exploiting this feature, several works introduce Hamming-based ECCs that allow the correction of single-bit errors or the detection of 2-bit adjacent errors with the same redundancy.
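The SEC behavior described above can be illustrated with the classic Hamming(7,4) code: 4 data bits, 3 check bits, and a syndrome that directly points at the flipped bit position. This is a minimal sketch, not the codes proposed in the paper:

```python
# Minimal Hamming(7,4) sketch: positions 1..7 (list index 0 = position 1);
# positions 1, 2, 4 hold parity bits, positions 3, 5, 6, 7 hold data bits.
def hamming74_encode(d):
    c = [0, 0, d[0], 0, d[1], d[2], d[3]]
    c[0] = c[2] ^ c[4] ^ c[6]   # parity over positions 1, 3, 5, 7
    c[1] = c[2] ^ c[5] ^ c[6]   # parity over positions 2, 3, 6, 7
    c[3] = c[4] ^ c[5] ^ c[6]   # parity over positions 4, 5, 6, 7
    return c

def hamming74_correct(c):
    c = c[:]
    s = ((c[0] ^ c[2] ^ c[4] ^ c[6])
         | (c[1] ^ c[2] ^ c[5] ^ c[6]) << 1
         | (c[3] ^ c[4] ^ c[5] ^ c[6]) << 2)  # syndrome = error position (0 = none)
    if s:
        c[s - 1] ^= 1                          # flip the erroneous bit back
    return c

word = hamming74_encode([1, 0, 1, 1])
corrupted = word[:]
corrupted[4] ^= 1                              # inject a single-bit error
assert hamming74_correct(corrupted) == word
```

A double-bit error would produce a nonzero syndrome pointing at the wrong position, which is exactly why plain SEC is insufficient against MCUs.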

The main problem of Hadamard and repetition codes is that they introduce great redundancy for common data word lengths [20]. This redundancy demands a large memory storage capacity, which is inconvenient for space applications. The Golay code can correct up to 3-bit errors, but it presents a redundancy of almost 100%. It also has high time and power consumption, as it must execute two complementary sequences sequentially.

Although BCH and Reed–Solomon codes can correct multiple errors, their main drawbacks are the great complexity and difficulty of implementing them, as well as their high latency and low speed. These weaknesses can be very problematic in space applications.

**Disadvantages:**

- Multiple-error correction is not implemented
- More errors remain uncorrected with single-error-correction methods
- Higher power consumption and higher delay

**Proposed System:**

In recent digital-signal-processing-based applications with integrated SRAM, a number of errors can occur during signal transmission and reception, and conventional fault tolerance techniques must cope with the probability of both single-cell upsets and multiple-cell upsets. The existing approach uses a Hamming code, which arranges parity checks in a bi-dimensional layout to detect and correct errors. The main drawback of the Hamming code is that only one erroneous bit can be corrected, although some double-error patterns can be detected in addition to SEC. This paper presents a method based on BCH codes for multiple-cell upsets: BCH codes are capable of detecting and correcting multiple errors, using variant check matrices that extend ideas from Hamming and Reed–Solomon codes. The method is implemented in VHDL and synthesized on a Xilinx Spartan-6 LX9 FPGA, and the Hamming- and BCH-based multiple-cell-upset correction schemes are compared in terms of area, delay, and power.

**BCH Codes:**

BCH codes use variant check matrices to extend ideas from Hamming and Reed–Solomon codes. BCH codes can keep a fixed alphabet and correct many errors. Unfortunately, BCH codes suffer from the same shortcoming as Hamming and RS codes: as block size increases, they become worse and worse, in the sense that the relative error correction rate (and/or the information rate) goes to 0. Start with a finite field F_q with q elements, often q = 2. Choose a block size n, for simplicity relatively prime to q; for q = 2, this means odd block sizes.

For example, to correct two errors, we require that any 4 columns of the check matrix be linearly independent, via a (variant) check matrix like

H = | 1  α    α^2   …  α^(n−1)    |
    | 1  α^2  α^4   …  α^(2(n−1)) |
    | 1  α^3  α^6   …  α^(3(n−1)) |
    | 1  α^4  α^8   …  α^(4(n−1)) |

with α possibly lying in some larger field F_(q^m), where 1, α, α^2, α^3, …, α^(n−1) are distinct and n ≥ 5. The determinant of any 4-by-4 matrix made from 4 different columns of this matrix is a nonzero Vandermonde determinant, so the code will correct 2 errors.

Similarly, we can make check matrices with any 6 columns linearly independent (so any 3 errors correctable), any 8 columns linearly independent (so any 4 errors correctable), and so on. This much was already done by Reed–Solomon codes, using larger and larger alphabets.

As for Hamming codes, we allow the element α to be in a field F_(q^m) larger than the field F_q used as the alphabet of the code. This is in contrast to RS codes, where we never went outside the finite field F_q used as the code alphabet. Staying inside F_q is what forced Reed–Solomon codes to use larger and larger F_q as the block size grows. Instead, if we can build check matrices over larger fields while keeping the code alphabet itself fixed, we can make multiple-error-correcting codes using small alphabets.
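A rough numerical illustration of this idea, assuming GF(16) built with the primitive polynomial x^4 + x + 1 and block length n = 15: each GF(16) entry of the 4-row check matrix expands into 4 binary rows, but conjugate rows (even powers of α) are GF(2)-linear combinations of the others, so the binary rank is only 8, yielding the (15, 7) two-error-correcting binary BCH code.

```python
# Expand the GF(16) check matrix (rows alpha^(i*j), i = 1..4) into binary
# rows and compute its rank over GF(2). GF(16) is represented as 4-bit ints
# with primitive polynomial x^4 + x + 1 (an assumed, standard choice).
def gf_mul(a, b, poly=0b10011, m=4):
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(e, m=4):
    r = 1
    for _ in range(e % (2 ** m - 1)):
        r = gf_mul(r, 0b0010)   # alpha = x, a primitive element
    return r

n, m = 15, 4
rows = []
for i in (1, 2, 3, 4):          # GF(16)-valued rows: entry_j = alpha^(i*j)
    for bit in range(m):        # expand each row into m binary rows
        rows.append(sum((((gf_pow(i * j) >> bit) & 1) << j) for j in range(n)))

def gf2_rank(rows):
    """Gaussian elimination over GF(2), rows packed as ints."""
    rows = [r for r in rows if r]
    rank = 0
    while rows:
        pivot = rows.pop()
        rank += 1
        low = pivot & -pivot    # pivot column = lowest set bit
        rows = [x for x in ((r ^ pivot if r & low else r) for r in rows) if x]
    return rank

assert gf2_rank(rows) == 8      # 8 check bits -> k = 15 - 8 = 7 data bits
```

The 16 binary rows collapse to rank 8 precisely because, for a binary codeword c, c(α) = 0 already forces c(α^2) = c(α)^2 = 0.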

Let α be a primitive element in F_(q^m). For simplicity, take block length n = q^m − 1. For an integer t with t < n = q^m − 1, the matrix H = [α^(ij)], with rows indexed by 1 ≤ i ≤ t − 1 and columns by 0 ≤ j ≤ n − 1, has the property that the (t − 1)-by-(t − 1) matrix formed from any t − 1 columns has a nonzero (Vandermonde) determinant. We must connect such a check matrix with generating polynomials for a cyclic code. For 1 ≤ i ≤ t − 1, let f_i be the irreducible polynomial with coefficients in F_q such that

f_i(α^i) = 0

Since α lies in F_(q^m) (and is primitive), each irreducible f_i(x) is a factor of x^(q^m − 1) − 1. Let g(x) = lcm(f_1, …, f_(t−1)).

Since n = q^m − 1 and q are relatively prime, x^(q^m − 1) − 1 has no repeated factors, so unless two f_i are the same, their lcm is their product. Hence, the generating polynomial g(x) for the code C with the check matrix H above is the product of the distinct irreducible polynomials f_i that have roots α^i (1 ≤ i < t), without repeating a given polynomial f_i when two different powers α^i and α^j are roots of the same f_i. The question, then, is: given q, m, block length n = q^m − 1, primitive element α, and designed distance t, determine this generator polynomial g(x).
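The construction above can be sketched end to end for the binary case n = 15, t = 5 (2-error correction): compute the minimal polynomials of α and α^3 (those of α^2 and α^4 repeat f_1) and multiply them. GF(16) is assumed to be built with the primitive polynomial x^4 + x + 1.

```python
# g(x) = lcm(f_1, ..., f_{t-1}) for the binary BCH code with n = 15, t = 5.
def gf_mul(a, b, poly=0b10011, m=4):
    """Multiply in GF(2^m) with primitive polynomial x^4 + x + 1."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        b >>= 1
        a <<= 1
        if a & (1 << m):
            a ^= poly
    return r

def gf_pow(e, m=4):
    """alpha^e for alpha = x (a primitive element of GF(2^m))."""
    r = 1
    for _ in range(e % (2 ** m - 1)):
        r = gf_mul(r, 0b0010)
    return r

def minimal_poly(i, m=4):
    """Minimal polynomial of alpha^i over GF(2), packed as an int (LSB = x^0)."""
    conjugates, e = set(), i
    while gf_pow(e) not in conjugates:        # conjugates alpha^i, alpha^(2i), ...
        conjugates.add(gf_pow(e))
        e = (e * 2) % (2 ** m - 1)
    coeffs = [1]                              # product of (x + beta), lowest degree first
    for beta in conjugates:
        new = [0] * (len(coeffs) + 1)
        for k, c in enumerate(coeffs):
            new[k + 1] ^= c                   # c * x
            new[k] ^= gf_mul(c, beta)         # c * beta
        coeffs = new
    return sum(c << k for k, c in enumerate(coeffs))

def gf2_poly_mul(a, b):
    """Multiply two GF(2)[x] polynomials packed as ints."""
    r, k = 0, 0
    while b >> k:
        if (b >> k) & 1:
            r ^= a << k
        k += 1
    return r

m1, m3 = minimal_poly(1), minimal_poly(3)     # f_1 and f_3; f_2, f_4 repeat f_1
g = gf2_poly_mul(m1, m3)                      # lcm = product of the distinct f_i
print(bin(m1), bin(m3), bin(g))
```

Here g(x) comes out as x^8 + x^7 + x^6 + x^4 + 1, the generator of the (15, 7) 2-error-correcting binary BCH code.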

An important class of multiple-error-correcting linear cyclic codes is the class of BCH codes. In fact, BCH codes are a generalization of the cyclic Hamming codes to multiple-error correction (recall that Hamming codes correct only one error). Binary BCH codes were first discovered by A. Hocquenghem in 1959 and independently by R. C. Bose and D. K. Ray-Chaudhuri in 1960.

These codes are important for two reasons:

- They admit a relatively easy decoding scheme;
- The class of BCH codes is quite extensive.

Let f(x) be an irreducible polynomial of degree n over a field F, and let α be a root of f(x). Then, by replacing x by α in F[x] mod f(x), we obtain the field represented as F[α] = {a_0 + a_1·α + … + a_(n−1)·α^(n−1) : a_i ∈ F}.

**Code Word Structure:**

The (31, 21) BCH code word, with a 32nd bit added to provide an overall even parity check, is the same word as defined for the code in Annex 1.

All (31, 21) BCH even-parity code words received in the protocol are processed through a 2-bit error corrector. The 8-word interleaved block structure provides for the correction of 16 consecutive errors in the received data stream (32 consecutive bit errors at 3 200 bit/s and 64 consecutive bit errors at 6 400 bit/s in the time-multiplexed data stream). Since employing the maximum error correction may in some cases (low S/N and extreme fading) result in an unacceptable error rate out of the decoder, the protocol uses checksums embedded in the data stream. The checksum used in the Frame Information Word, the Block Information Word, and all Vector Words is calculated by forming 4-bit fields, as shown in the figure above, and computing their binary sum. The result is 1's complemented (each bit inverted), and the 4 LSBs of the result are transmitted as the checksum.
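The checksum rule described above (sum the 4-bit fields, take the one's complement, keep the 4 LSBs) can be sketched as follows; the actual field grouping comes from the figure in the protocol specification, so the example fields here are purely illustrative:

```python
# Checksum per the rule above: binary sum of 4-bit fields, 1's complement,
# 4 LSBs transmitted. The field values below are hypothetical examples.
def checksum4(fields):
    total = sum(f & 0xF for f in fields)   # binary sum of the 4-bit fields
    return (~total) & 0xF                  # invert every bit, keep 4 LSBs

print(checksum4([0b1010, 0b0011, 0b1111]))
```

The receiver recomputes the same sum over the received fields and compares it against the transmitted checksum to catch residual decoder errors.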

**Advantages:**

- Multiple-error correction implemented, with BER testing
- Fewer uncorrected errors than single-error-correction methods
- Lower power consumption and lower delay
