OVM Interoperability: Hype vs. Reality

Recently I have read a lot of hyperbolic statements about how the Open Verification Methodology (OVM) is “fully interoperable” and a “single implementation”. Unfortunately these claims overstate the interoperability and portability of OVM.

I have worked on two different OVM-based verification environments where interoperability was a required feature. One was a verification component for the Open Core Protocol (OCP) and the other was a conformance test environment for the FlexRay Communications System. Both environments are compatible with two OVM-compliant simulators, but that interoperability was not automatic. A significant amount of time and effort was spent throughout the development process to ensure that the final products would be portable.

The Java Analogy

When Sun Microsystems released the first version of the Java programming language, they used the phrase “write once, run everywhere” to promote the language’s portability. The theory was that a developer could write a Java program (“once”) and users would then be able to run it on any platform (“everywhere”), regardless of the underlying system and/or vendor.

In reality, each platform had its quirks and limitations that could prevent an application from running properly. If you wanted to ensure that your Java program would run on a particular platform, you actually needed to test it on that platform and modify the code until it worked. Eventually you could end up with an application that would run on multiple platforms, but it required repeated testing and tweaking. Java programmers sarcastically call this "write once, debug everywhere."

Every time I hear someone promoting the portability of OVM it reminds me of “write once, debug everywhere.”

When “Run Once” is Enough

One way that OVM does live up to the hype is that it provides a powerful SystemVerilog framework for developing complex verification environments. That alone is pretty respectable. Using OVM as the basis of your testbench allows you to spend more time working on actual verification and less time worrying about designing and building your own framework.

How many OVM environments currently in development actually need to be run on more than one simulator? How many will use third party OVM components? I imagine that most OVM testbenches only need to run on a single simulator; in other words they are “write once, run once.” If the environments are using external OVM verification components, they are probably provided by the simulator vendor. So, for a lot of OVM users interoperability doesn’t matter yet.

On the other hand, if you need interoperability or were expecting that it would come automatically, you could run into issues…

Varied SystemVerilog Support

The good news is that SystemVerilog is an actively evolving language. The bad news is that none of the simulators support all of the features in the SystemVerilog specification. Just because you write some code and it does what you expect in one simulator doesn’t mean that it will do the same thing or even compile in a different simulator.

The 2.1.1 release of OVM has 51 `ifdefs in the code. That's 51 places where the experts had to resort to the bluntest instrument available to overcome the differences between just two simulators. There were probably several other places where the original implementation had to be modified but in the end did not require an `ifdef. If the OVM code needs patches in order to be portable, then your code probably will too.
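To give a flavor of what those patches look like, here is a hypothetical sketch. The macro name and the specific workaround are invented, not taken from the OVM source, but the pattern is the same: guard the code that one simulator chokes on and provide an alternative for everyone else.

    // Hypothetical simulator workaround; SIM_A_WORKAROUND is an invented macro.
    module ifdef_example;
      string names[$];

      function automatic void report(string prefix);
    `ifdef SIM_A_WORKAROUND
        // Pretend simulator A mishandles foreach over a queue here,
        // so fall back to an explicit for loop.
        for (int i = 0; i < names.size(); i++)
          $display("%s: %s", prefix, names[i]);
    `else
        foreach (names[i])
          $display("%s: %s", prefix, names[i]);
    `endif
      endfunction

      initial begin
        names.push_back("ocp_master");
        names.push_back("flexray_node");
        report("component");
      end
    endmodule

Multiply that by 51 and you get a sense of how much vendor-specific glue is hiding inside a "single implementation."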

During the development of the aforementioned OVM components I worked on, there were several cases where we had to make changes to the architecture or implementation in order to get it to work in both of our simulators. We tried to avoid ifdefs as much as possible and in the end only needed a couple, mostly to get around bugs in the simulators.


Multiple Implementations

At first, the claim that OVM is a “single implementation” doesn’t seem that questionable. OVM is open source, so you can go to OVM World and download one codebase that will run on multiple simulators.

The first problem you will have to deal with is that new releases of OVM appear regularly, as would be expected. If you have multiple OVM components in your environment, you have to use the same version of OVM for every one of them. This means that you can only adopt a new release of OVM if it works with all of the components you are using. If some of your components have restrictions on what versions of OVM they are compatible with, hopefully there is at least one common version that they all support.

In my experience, the 2.x series of OVM releases were unfortunately not very stable. Sometimes basic features were broken, only to be fixed in the next release, which itself wouldn’t even compile. Sometimes promised features were mistakenly left out.

If you only have to worry about your own environment, it is easy enough to dictate which version of OVM to use or, worst case, to use a locally modified version of OVM. However, if you are distributing your component to someone else, it is harder to control the version of OVM being used. In the end you have to qualify your component with the last several releases of OVM and sometimes provide patches. Hopefully future OVM/UVM releases will be more stable, but that still doesn’t mean there will be a single implementation.

Each vendor bundles a custom, proprietary version of each OVM release with their simulator. These custom versions provide extra OVM debugging and tracing capabilities for that simulator. I have found these additional debugging features very useful and recommend that anyone using OVM become familiar with them. The downside is that the code required to add the enhanced debugging functionality to OVM is only available from the individual vendors and is not redistributable.

Several times I have had a component that runs fine with the public OVM release code, but does not work with a particular vendor-specific release. In most of these cases the issue caused the simulator to crash, making it completely unusable. Since the vendor’s modifications are usually encrypted, debugging the problem and fixing it on your own is usually impossible.

If you run into problems using the vendor-specific OVM release, you can just stick with the public releases in your own environment. Someone else using your component, however, will probably not want to be told they have to give up those handy debugging tools they have gotten used to. They certainly won’t be impressed with the “interoperability” of your component.

To ensure the portability of your OVM component you need to verify it with multiple implementations.

Interoperability Gotchas

Assuming you have tested your component with several simulator releases and OVM releases, can it now be called fully interoperable? Maybe.

There are a couple of things you could do in your OVM component that another component could easily and unintentionally break. Any use of the ovm_default_* objects can be interfered with by another OVM component that uses them in a different way. I detailed this issue in a post to OVM World regarding the ovm_default_packer. The basic problem is that unless you specify a packer object, any call to an ovm_object's pack*() or unpack*() methods will use the ovm_default_packer. Since any component can change how the ovm_default_packer does its (un)packing (for example, the endianness), you are better off using your own instances of ovm_packer. The same precaution applies to the other ovm_default_* objects.
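To make the hazard concrete, here is a minimal sketch. The ovm_object, ovm_packer, and ovm_default_packer names come from the OVM library; the transaction class and the send() routine are invented for illustration.

    // Sketch of the default-packer hazard; my_item and send() are invented.
    `include "ovm_macros.svh"
    import ovm_pkg::*;

    class my_item extends ovm_object;
      `ovm_object_utils(my_item)
      rand int unsigned payload;

      function new(string name = "my_item");
        super.new(name);
      endfunction

      virtual function void do_pack(ovm_packer packer);
        super.do_pack(packer);
        packer.pack_field_int(payload, 32);
      endfunction
    endclass

    function automatic void send(my_item item);
      byte unsigned stream[];
      ovm_packer local_packer = new();

      // Risky: no packer argument, so ovm_default_packer is used. Another
      // component may have changed its settings (e.g. big_endian), silently
      // changing the byte order of 'stream'.
      void'(item.pack_bytes(stream));

      // Safer: pass a packer you own and have configured yourself.
      local_packer.big_endian = 1;   // whatever your protocol requires
      void'(item.pack_bytes(stream, local_packer));
    endfunction

The second call produces the same byte stream regardless of what any other component has done to the default packer.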


Tips for Achieving Interoperability

So what are the steps you should take to make your OVM component more interoperable?

• Avoid using ovm_default_* objects, particularly the packer.
Create a static instance of ovm_packer for every class that needs one and make sure that the default packer is never used (see the sketch after this list).
• Test your component with multiple simulators during the entire development process.
Expect that you will encounter places where you will need to modify the architecture and implementation of your component. The earlier you figure out what needs to be changed or patched the better.
• Test your component with multiple releases of each simulator.
A new release might require you to modify your code in order for it to work. (Did I mention that earlier is better?) You don’t want to have to dictate to your customers which versions of the simulator they can or can’t use. If the new release has a bug (it happens), the earlier you notify the vendor, the more likely it is to get fixed in the next release.
• Test your component with multiple releases of OVM, including the vendor-specific releases.
Same reasons as above. By now you should have a matrix of configurations you need to qualify.
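As an illustration of the first tip, here is one way to give a transaction class its own packer so that (un)packing never falls back on ovm_default_packer. The ocp_request class and its fields are invented for this sketch; only the ovm_* names come from the OVM library.

    // Sketch of the "static packer per class" tip; ocp_request is invented.
    `include "ovm_macros.svh"
    import ovm_pkg::*;

    class ocp_request extends ovm_object;
      `ovm_object_utils(ocp_request)

      rand bit [31:0] addr;
      rand bit [31:0] data;

      // Shared by all instances of this class and configured exactly once.
      static ovm_packer class_packer;

      function new(string name = "ocp_request");
        super.new(name);
        if (class_packer == null) begin
          class_packer = new();
          class_packer.big_endian = 1;   // the settings this component relies on
        end
      endfunction

      virtual function void do_pack(ovm_packer packer);
        super.do_pack(packer);
        packer.pack_field_int(addr, 32);
        packer.pack_field_int(data, 32);
      endfunction

      virtual function void do_unpack(ovm_packer packer);
        super.do_unpack(packer);
        addr = packer.unpack_field_int(32);
        data = packer.unpack_field_int(32);
      endfunction

      // Convenience wrappers that always use the class packer.
      function int pack_req(ref byte unsigned stream[]);
        return pack_bytes(stream, class_packer);
      endfunction

      function int unpack_req(ref byte unsigned stream[]);
        return unpack_bytes(stream, class_packer);
      endfunction
    endclass

If you prefer, the same idea works with a static function that lazily creates the packer; the point is simply that the packer your component depends on is one it owns.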

Making the Situation Better

I want to end with a special request: Please report the issues you find with the simulators and OVM back to the vendors.

It takes a little extra effort to report problems to the vendors, but it helps make the challenge of interoperability less daunting going forward. In general, the situation is getting better with each new tool and framework release. In my experience it was certainly easier to achieve OVM interoperability in 2010 than it was in 2008.

The more people who report issues back to the vendors, the fewer who have to encounter those issues. Each resolved issue makes the reality of OVM interoperability that much closer to the hype.