Standard cell libraries and embedded memory compilers - Evaluation

 

Dolphin Integration provides all standard cell and memory users with:

  • a step-by-step benchmarking process for Dolphin Integration standard cells, going beyond the irrelevant comparison between NAND2 gates
  • a benchmark method for comparing memory generators efficiently and objectively

Standard Cell Benchmark

Any standard cell library must be accompanied by integration guidelines and evaluated when used accordingly. Arguing over individual details of the SESAME specifications misses the point: the list of cells is controlled as an ISO-9001 procedure for its RCSL structure, and the ensuing circuit architecture is protected by pending patents.
The control and data paths of SESAME stems differ from those of a classical library, and the clock paths differ from those in any CCSL library. At their intersections, the spinner cells differ from flip-flops. The only common points are the standard EDA solutions (synthesizer, placer, STA and router).
The crucial reason for such adamant differentiation of our SESAME stems becomes clear when margins are taken into account to ensure yield in the presence of mismatch. Standard cell evaluation suffers from the lack of public comparative databases for assessing the performances of truly different RCSL and CCSL libraries.

As the promoter of Reduced Cell Stem Libraries (RCSL), Dolphin Integration provides all standard cell users with a step-by-step benchmarking process that goes beyond the irrelevant comparison between NAND2 gates.

 

Step one: Discovery of the SESAME RCSL library

  • Collaterals

Depending on the stem, several types of collateral are freely available on our website to learn more about the features and benefits of a SESAME library: the presentation sheet, the brochure and a description of the benchmarks.

 

Step two: Assessment of performances of the SESAME RCSL library

Sofia benchmark

The Sofia benchmark is as easy to run as the traditional NAND2 comparison but far more representative, as it is based on a sample of six cells typical of a real design, including a sequential cell. The Sofia result is a figure of merit comparing the dynamic power consumption, area, leakage and speed of different libraries after synthesis.
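The exact Sofia formula and weighting are not public, but a composite figure of merit of this kind is typically a weighted geometric mean of metric ratios against a reference library. The sketch below is purely illustrative: the metric names, weights and values are assumptions, not Sofia's actual definition.

```python
# Hypothetical sketch of a composite figure of merit across libraries.
# The real Sofia formula is proprietary; metrics, weights and numbers
# here are illustrative only.
def figure_of_merit(lib, ref, weights=None):
    """Weighted geometric mean of metric ratios; < 1.0 means 'lib' beats 'ref'."""
    weights = weights or {"dyn_power": 1, "area": 1, "leakage": 1, "delay": 1}
    score = 1.0
    for metric, w in weights.items():
        score *= (lib[metric] / ref[metric]) ** w
    return score ** (1.0 / sum(weights.values()))

# Metrics normalized so the reference library scores 1.0 on each axis.
ref_lib = {"dyn_power": 1.00, "area": 1.00, "leakage": 1.00, "delay": 1.00}
sesame  = {"dyn_power": 0.80, "area": 0.85, "leakage": 0.70, "delay": 1.05}

print(round(figure_of_merit(sesame, ref_lib), 3))  # → 0.841
```

A geometric mean is a common choice here because it rewards balanced improvement across all metrics rather than one outlier; the weights can be skewed toward, say, leakage for an ultra-low-power target.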

 

VEDA benchmark

Dolphin Integration offers the Veda benchmark to assist SoC integrators in their permanent search to minimize the cost and maximize the performance of each design.
With the Veda benchmark, it is easy to estimate the area after P&R and the power consumption of your logic design - including RAM, ROM and standard cell library - when embedding Dolphin's Panoply, and to benchmark it against any other solution.

 

Motu Uta logic standard

For a complete comparative evaluation of libraries, from topological synthesis through placement to routing, the Motu Uta logic standard (a logic block in RTL) is proposed. Made public as freeware, Motu Uta embeds a pseudo co-processor, several critical paths and combinatorial logic, making it representative of a typical logic block in all dimensions (area, power consumption and speed), while avoiding mistakes due to the misreading of a proprietary standard.

To ensure a fair comparison in terms of power consumption, a testbench is provided to generate the activity file used for power estimation.
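The activity file matters because dynamic power scales with how often each net actually toggles: the standard estimate is P ≈ α · C · V² · f summed over nets, where α is the toggle (activity) factor. A minimal sketch of that calculation, with invented net data standing in for what a real flow would read from a SAIF/VCD activity file:

```python
# Minimal sketch of activity-based dynamic power estimation,
# P_dyn ≈ sum over nets of alpha * C * Vdd^2 * f.
# All numbers are illustrative; a real flow takes the toggle rates
# from an activity file (e.g. SAIF/VCD) produced by a testbench.
def dynamic_power(nets, vdd, freq_hz):
    """nets: list of (activity_factor, capacitance_farads) pairs."""
    return sum(alpha * c * vdd**2 * freq_hz for alpha, c in nets)

nets = [(0.15, 2e-15), (0.50, 5e-15), (0.05, 10e-15)]  # (alpha, C)
p_watts = dynamic_power(nets, vdd=1.0, freq_hz=500e6)
print(f"{p_watts * 1e6:.2f} uW")  # prints "1.65 uW"
```

With a shared testbench, every library is estimated with the same α values, which is exactly the fairness the provided activity file is meant to guarantee.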

Download Motu Uta V6.0

Request Motu Uta results for SESAME libraries in your targeted process

 


Step three: After Sale Integration with the SESAME RCSL library

  • Delivery of the library

While any cell library should come with its own integration guidelines, SESAME RCSL was the first to address this need explicitly: Dolphin Integration FAEs introduce the essential guidelines enabling customers to use SESAME RCSL in the most efficient way.

You can benefit from free support from our FAEs, and from our know-how on Reduced Cell Stem Libraries, throughout your evaluation process.

Memory Benchmark

Benchmark - On-line benchmarking of RagTime memories!

Comparing memory generators objectively, within the same process or across two different processes, remains a multidimensional task for designers. In fact, relying solely on the data commonly provided by memory suppliers may limit the objectivity of the comparison. To that end, we propose a benchmark method for comparing memory generators efficiently and objectively.

The problem is that:

  • each supplier may deliver its evaluation results based on different assumptions!
  • each supplier may pick and choose its best local instances for a fake yardstick!
  • each supplier may pick and choose its best weights for biasing statistics!

The underlying questions are:

  • How to cope with a generator offering several thousand memory instances, when providers propose only one or two of the most regular instances for comparison?
  • For the generators under comparison, are the same process conditions used for the delivered performance data?
    • Is the power consumption calculation based on maximum parasitic extraction values?
    • Does the memory area include the power supply rails?
  • Are the delivered performance data based on the same calculation method?
    • What are the test vector assumptions for computing power consumption?
    • Are performances given for the same aspect ratio, knowing that speed and power consumption may vary drastically from one aspect ratio to another?
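The questions above all reduce to one rule: two memory instances are only comparable if they were characterized under identical conditions. A small illustrative sketch of such a guard, where the field names (corner, extraction style, aspect ratio, rails included in area) are hypothetical and not taken from any real compiler datasheet:

```python
# Illustrative guard: refuse to compare memory instances whose
# evaluation conditions differ. Field names are hypothetical, not
# from any real memory compiler datasheet.
CONDITION_KEYS = ("corner", "extraction", "aspect_ratio", "rails_in_area")

def comparable(inst_a, inst_b):
    """True only if both instances were characterized the same way."""
    return all(inst_a[k] == inst_b[k] for k in CONDITION_KEYS)

a = {"corner": "SS/0.9V/125C", "extraction": "max", "aspect_ratio": 4,
     "rails_in_area": True, "area_um2": 12000}
b = {"corner": "TT/1.0V/25C", "extraction": "typ", "aspect_ratio": 1,
     "rails_in_area": False, "area_um2": 9000}

print(comparable(a, b))  # prints "False": the area figures cannot be compared
```

Instance b looks 25% smaller, but under a typical corner, typical extraction and a different aspect ratio; the guard makes that mismatch explicit instead of letting the raw numbers mislead.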

In search of the yardstick for consumption

Due to the lack of transparency of benchmarks for ViCs, it is impossible to perform serious comparative evaluations. Here are some yardsticks for the power consumption of embedded memories!

Key issues for static consumption

  • Which static consumption among the following:
    • Static = leakage?
  • Measured in which mode:
    • Sleep mode: inputs/clock toggling or not? (RAM)
    • Operating mode (RAM)
    • Stand-by mode (RAM/ROM)
  • Stand-by leakage (RAM/ROM)
  • Benchmarking by:
    • Computation from DRM targets or silicon data
    • DC/transient simulations with SPICE models

Key issues for dynamic consumption

  • Power consumption from:
    • Dynamic operation
  • Measured in which mode:
    • Use conditions of the SoC in which the ViC will be inserted
    • Read margin issue
  • Dynamic NOP (No OPeration)
  • Stand-by leakage
  • Benchmarking by:
    • Computation
    • Simulation

DOLPHIN Benchmarking technique

Considering the seriousness of competitive evaluation, we have reviewed our benchmarking technique. Given the important decision prospective users have to make, it must be stressed that the benchmark for low power consumption must be chosen appropriately, specific to the system application.

Until now, the product-oriented developers' approach consisted of using ultra-pessimistic evaluators, which explains a number of unpleasant surprises when "deceived" users compared DOLPHIN figures against unspecified benchmarks.

A constructive proposal for RAMs/ROMs is presented on the RAM/ROM benchmark pages as explicit benchmarks. They can be seen as an implicit question about our prospects' and customers' preference for a reasonable but thorough benchmark capturing their own needs for RAMs/ROMs as far as low power consumption is concerned.

These RAM/ROM benchmarks could be replaced by any other, more relevant ones.

If, on the contrary, a user's benchmark of choice were to differ strongly from average expectations established with uniform distributions, a separate sheet describes the use of SUCCESS™ hardware-software co-simulation for assessing memory power consumption.