Recent blog posts

Aaron Dubois, one of MÄK’s Principal Software Engineers, took home three awards yesterday at the 2015 Fall Simulation Interoperability Workshop in Orlando.


After more than 15 years and 21 different drafts, RPR FOM 2 is finally a SISO standard! It’s been a long road with periods of intense activity and years with little progress, but it is here. The RPR FOM is an incredibly important standard in our industry. It embodies the most widely used object model in our community. It was originally designed to allow the concepts of DIS to be used in HLA federations. Now with RPR FOM 2, there is a single official standard that is supported by all the flavors of HLA and is consistent with DIS version 6. Having this standard provides a clear way for our customers to maximize their simulation investments — with minimal incremental cost, simulations built for a single purpose can be connected to other simulators to form larger and more valuable federations.


VR-Forces 4.3.1 is a major maintenance release that greatly improves VR-Forces 3D visualization while simultaneously fixing a number of important defects.

VR-Forces is built using the VR-Vantage graphics engine. This release incorporates the significant improvements to visualization found in VR-Vantage 2.0.1, such as:


If you have a deep technical understanding of building 3D models, here are some quick tips to make sure they perform optimally in your VR-Vantage application.

To make your models as fast as possible, you need to minimize drawables. Each drawable is a collection of polygons with the same state set and the same primitive set. Every time the information about a drawable is sent to the graphics card, your draw time goes up significantly. Having hundreds if not thousands of drawables in your scene will kill your performance.  To reduce drawables, follow these rules:
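To make the cost concrete, here is a toy sketch of why sharing state sets matters (plain Python for illustration, not VR-Vantage's scene-graph API): drawables that use the same state set can be merged into one, so far fewer submissions reach the graphics card.

```python
from collections import defaultdict

def merge_drawables(drawables):
    """Group polygons that share a state set into a single drawable,
    cutting the number of per-drawable submissions to the GPU.
    Each drawable is modeled as a (state_set, polygon_list) pair."""
    merged = defaultdict(list)
    for state_set, polygons in drawables:
        merged[state_set].extend(polygons)
    return list(merged.items())

# Six drawables using only two state sets collapse into two draw calls.
scene = [("brick", [1, 2]), ("glass", [3]), ("brick", [4]),
         ("glass", [5, 6]), ("brick", [7]), ("glass", [8])]
merged_scene = merge_drawables(scene)
```

The same idea is what tools like texture atlasing and geometry batching accomplish inside a real modeling pipeline: fewer unique state sets means fewer drawables.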


We've just launched RadarFX, our new Synthetic Aperture Radar (SAR) simulation and visualization product! We built it in conjunction with our partner, JRM Technologies.

In the real world, a SAR sensor is typically attached to an aircraft or satellite. A SAR system generates photograph-like still images of a target area by combining radar return data collected from multiple antenna locations along a path of flight. Requests from users on the ground define the target area to be scanned, along with other parameters used to generate and return the image.


To draw a frame, VR-Vantage needs to a) build/update the scene graph, b) organize the scene graph and send it to the GPU, and c) have the GPU render the scene. There are a number of other steps, like loading terrain from disk (or across the network) and processing network packets (DIS/HLA or CIGI), but for the most part those occur in other threads. I will address each of these issues in future posts. For the moment, let’s just focus on static scenes devoid of entities. Let’s look at *this* scene:
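The three stages above can be sketched as a simple timing harness (illustrative Python, not VR-Vantage code). In a real engine the render stage runs on the GPU and the stages pipeline, but timing them separately still shows which stage dominates the frame:

```python
import time

def draw_frame(update_scene, cull_and_submit, gpu_render):
    """Run and time the three main-thread stages of producing one frame:
    (a) build/update the scene graph, (b) organize it and send it to
    the GPU, (c) let the GPU render. Returns milliseconds per stage,
    which shows where the frame's time budget is going."""
    timings = {}
    for name, stage in (("update", update_scene),
                        ("cull_submit", cull_and_submit),
                        ("render", gpu_render)):
        start = time.perf_counter()
        stage()
        timings[name] = (time.perf_counter() - start) * 1000.0
    return timings
```

At 60 FPS the whole frame has roughly a 16.7 ms budget, so any single stage approaching that number is the one to optimize first.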


A US soldier is trapped under rubble from a damaged building in hostile territory. As a Pararescuer, your team must get in, stabilize the situation, and get out – skins intact.

The rescue mission begins with a helicopter ride over to the site - the ride is bumpy and loud as combat zones dot the geography below. The war-worn building comes into view and when you arrive, you fast-rope out of the helo and into the rubble. You navigate to the trapped soldier, and as you begin to address the situation and tend to the rock pinning him down, there’s an explosion. Even more smoke, debris, and confusion fill the area; when the dust settles, you learn that more soldiers are injured, and even a civilian is hurt.

What do you do? How do you react? 


We are excited to release VR-Exchange 2.4, a major feature release that reinforces our commitment to supporting the latest protocols and the largest exercises with MÄK products. Here are a few of the changes we made with this release:


Your squad has been tasked with a convoy mission through a town with suspected insurgent activity. As a surveillance operator, you need to spot the threats and alert your team before it’s too late.

You peer down from a UAV through an infrared camera analyzing and scrutinizing the happenings of a seemingly ordinary town. You see farmers in fields, children coming from and going to school, families en route to and from the marketplace, and religious services – everything seems normal but your training tells you that you need to look ahead. That’s when you notice signs of suspicious behavior: people moving to rooftops looking to the sky for incoming aircraft, armed civilians lurking behind corners, and most dangerous of all, a child wearing a heavily laden vest. You use your comms channels and report the potential threat to your squad leader. 



At MÄK, we help our customers simulate unmanned vehicles in a lot of ways, depending on what part of the system architecture the customer is addressing. Some use VR-Forces to simulate the UAV’s mission plans and flight dynamics. Some use VR-Vantage to simulate the EO/IR sensor video. Of those, some use VR-Vantage as the basis of their payload simulation and others stream video into their ground control station (GCS) from a VR-Vantage streaming video server. 

All of our customers now have the opportunity to add a Synthetic Aperture Radar (SAR) to their UAV simulations — and here’s how to do it. SensorFX SAR Server comes in two parts: a client and a server. The server runs on a machine on your network and connects to one or more clients. Whenever a client requests a SAR image, it sends a message to the server, providing the flight information of the UAV and the target location where the SAR image should be taken. The server, built with VR-Vantage, then uses JRM Technologies’ radar simulation technology to generate a synthetic radar image and return it to the client.

The SAR Server renders SAR images taking into account the specified radar properties, the terrain database, and knowledge of all the simulated entities. The radar parameters are configured on the server in advance of the simulation. The terrain database uses the same material classification data that is used by SensorFX for rendering infrared camera video so your sensor package will have the best possible correlation. The server connects to the simulation exercise network using DIS or HLA so that it has knowledge of all the entities. It uses this knowledge to include targets in the SAR scenes and so that you can use a simulated entity to host the SAR sensor. 
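The request/response exchange can be pictured with a minimal sketch. The field names below are assumptions for illustration only, not the actual SensorFX SAR Server message format or API:

```python
from dataclasses import dataclass

@dataclass
class SarRequest:
    """What a client sends to the SAR server: the UAV's flight state
    plus the target area to image. Field names are hypothetical, not
    the actual SensorFX SAR Server protocol."""
    uav_lat: float
    uav_lon: float
    uav_alt_m: float
    target_lat: float
    target_lon: float
    swath_width_m: float

def handle_request(req):
    """A real server renders the scene with radar material properties
    and returns image pixels; this stub just echoes the metadata that
    would accompany the returned image."""
    return {"center": (req.target_lat, req.target_lon),
            "width_m": req.swath_width_m}
```

The point of the split is that the heavyweight rendering (terrain, materials, entities) lives on the server, while clients only exchange small request and image messages over the network.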


A week ago, I wrote a blog post entitled “Do I need a new graphics card?” to answer the common question: will I get better performance if I just upgrade my graphics card? In that post, I discussed the difference between CPU-bound and GPU-bound scenes, and made the point that if you are CPU bound, getting a new graphics card will not help much. Typically, scene performance improves more with better terrain organization.

While that is all true, there is one additional problem you may encounter that will spoil performance and can be addressed by upgrading hardware: running out of video memory. VR-Vantage 2.0.1 now tracks your total video memory, how much you are using, and whether any of your textures have been pushed out of memory (evictions). Once you have consumed all of your video memory, the card will start swapping textures off the card and into system memory. This is incredibly slow and will seriously affect frame rate. Scenes that were fast may suddenly have a 100 ms draw time.

To see how your scene is performing, turn on the Performance Statistics Overlay (found in Display Settings -> Render Settings). You want to see something below 80% usage. As you move around in your scene, if memory consumption reaches 100%, or you start seeing evictions, then your performance is being seriously affected by a lack of memory.
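Reading those overlay numbers can be boiled down to a simple rule of thumb. The thresholds below mirror the prose above; they are not a VR-Vantage API:

```python
def memory_status(used_mb, total_mb, evictions):
    """Classify video-memory pressure from overlay-style numbers:
    below ~80% usage is comfortable; at 100%, or once any textures
    have been evicted, the card is swapping to system memory and
    frame rate will suffer. Thresholds follow the blog's guidance,
    not a VR-Vantage API."""
    usage_pct = used_mb / total_mb * 100.0
    if usage_pct >= 100.0 or evictions > 0:
        return "critical"
    if usage_pct >= 80.0:
        return "warning"
    return "ok"
```

Note that even one eviction flags trouble: a single texture bouncing between video and system memory is enough to stall the draw.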


Frequently, we get questions about hardware requirements from customers who are trying to use VR-Vantage as an IG for a specific program. Typically, the customer is looking to achieve 60 frames per second (FPS) in VR-Vantage and their scene is rendering slower than they would like or expect. They have read the MÄK blog about minimum hardware but didn’t find the answer they were looking for.

Over the years, many of us have been conditioned to assume that buying newer/better hardware will yield better performance; if your performance isn’t up to snuff, just buy something newer. This often works – new GPUs are released yearly, often with phenomenal performance improvements. The cost for this new hardware is low compared to the total program cost, so upgrading can make sense. That said, most terrains used in the Modeling & Simulation community aren’t particularly complicated and so should run really fast even on old hardware. So how can you figure out if it’s your terrain that is slowing you down or if it’s your graphics card that is the culprit? This blog will try to answer that question for you.

To understand where your bottleneck is, you need to understand whether your application is CPU or GPU bound. For this blog I will use the term “CPU” to mean not just the physical processor, but also the process of organizing and passing information to the GPU. Simply put, VR-Vantage can be bottlenecked in many places: collecting information from the network, updating the scene graph, sending information to the GPU, or the GPU itself may be bottlenecked trying to render the actual scene. Of these possible bottlenecks, upgrading your video card helps only with the final case. That means if your scene is slow for any reason besides the final render step, you need to optimize your scene’s content and configuration, not buy a better graphics card.
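That diagnosis can be expressed as a rough rule of thumb (a sketch, not a VR-Vantage diagnostic): compare how long the main thread spends per frame against how long the GPU spends rendering, then check both against the frame budget for your target rate.

```python
def diagnose(cpu_ms, gpu_ms, target_fps=60):
    """If the main thread (scene-graph update plus draw submission)
    takes longer per frame than the GPU takes to render, the scene is
    CPU bound and a faster graphics card won't help. Also report
    whether the frame fits the target budget (16.7 ms at 60 FPS)."""
    budget_ms = 1000.0 / target_fps
    verdict = "CPU bound" if cpu_ms > gpu_ms else "GPU bound"
    meets_target = max(cpu_ms, gpu_ms) <= budget_ms
    return verdict, meets_target
```

For example, a frame spending 25 ms on the CPU and 8 ms on the GPU is CPU bound and misses 60 FPS; no graphics card upgrade will fix that.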


At MÄK, we are constantly seeking ways to improve our products by diligently researching the latest technologies that will elevate our fidelity and performance. In this blog, we’ll tell you how we’re doing exactly that by integrating the photogrammetry process into our human content pipeline.


Photogrammetry is the science of making measurements from photographs; we’re using it to make high-resolution 3D meshes. We capture photos of a subject, then use specialized processing software and post-processing by our team of 3D artists to make hyper-realistic, high-performing humans for DI-Guy, our Human Simulation software. DI-Guy’s ability to support multi-texturing via albedo, bump, specular, gloss, and ambient occlusion maps allows us to retain the minute detail of these captures while delivering them in low-polygon, high-performing models. The DI-Guy artists use industry-leading tools such as ZBrush, 3D Studio Max, Maya, and Photoshop to translate these models from reality to virtual reality. As you can see from the photos and videos, the results are impressive.


Last week, we conducted a multi-day seminar series in Beijing hosted by our in-country representatives, Seastars Co. Ltd. The seminar presentations consisted of Solution Architect and UAV topics, as well as a suite of product presentations. We unveiled for the first time in China the latest capabilities of VR-Forces 4.3 aggregate-level simulation, as well as a suite of new functionality and capabilities offered within VR-Vantage and DI-Guy.

Our VR-Vantage 3D visual solution has many new features, including the new integrated Oculus plugin, shown with a fly-through over the Hawaiian Islands. We also presented our Light Armored Vehicle demonstration, which features physics and wheel dynamics supported by CM-Labs Vortex. Attached are a few photos taken during the seminar, including a picture of our 50+ audience. Thank you to those who helped coordinate and those who participated in this successful event - looking forward to returning to China soon!



VR-Link 5.1.3, a maintenance release with several minor changes, is out! Here are some of the most notable changes:

Platform support changes: We have added support for Red Hat Enterprise Linux 7 (64 bit only). We have also ended support for Red Hat Enterprise Linux 4, SUSE 11, and Windows MS VC 7.1 and 9.0. MÄK is committed to supporting the platforms our customers care most about; if you require discontinued platforms, contact MÄK support.

VR-Link Code Generator: We continue to improve the VR-Link code generator by making its output more intuitive and easier to read. The generated code now uses VR-Link internal classes as much as possible, helping to produce a highly consistent API. The code generator can also generate an HLA Evolved project without requiring the standard MIM.


The most recent release of the MÄK RTI, version 4.4.1, is a maintenance release that makes several minor changes.

New Platform Support: Microsoft Visual C++ 12.0 and Red Hat Enterprise Linux 7 have been added. For both of these platforms, only 64-bit libraries are supported; going forward, MÄK products will support only 64-bit libraries on all new platforms. The MÄK RTI has dropped support for VC7, VC9, Red Hat Enterprise Linux 4, and SUSE 11.

If you are a customer under support and require these platforms, please contact MÄK support for more information.


In version 4.3, VR-Forces introduces the notion of aggregate-level simulation. Okay. What exactly is the difference between aggregate-level simulation (ALS) and entity-level simulation (ELS)?

At its core, aggregate-level simulation is a more abstract level of modeling and is therefore more suitable for representing higher echelons of a force structure: units like companies, battalions, and brigades. Entity-level modeling has the fidelity appropriate for individual entities, like vehicles and human characters.

Let’s look at maneuver modeling as an example. In ALS, units have to slow down to move through a forested area, whereas entities in ELS have to maneuver around individual trees. This higher level of abstraction applies to all the types of models. Combat in ELS happens when an entity has line of sight to another entity. When one entity fires, a hit/miss calculation is performed between the detonated ordnance and the nearby entities. Damage is assessed only for the entities that are actually hit. In ALS, units, which cover an area, must have line of sight to the ‘area’ of the other unit. Combat then proceeds as rates of change in the resources and status of the units. For example, a large, well-equipped unit will more quickly deplete the resources and status of a smaller, less-equipped unit.
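Combat "as rates of change" can be illustrated with Lanchester-style attrition equations. This is a textbook illustration of the idea, not VR-Forces' actual aggregate combat model:

```python
def attrition_step(a, b, alpha, beta, dt):
    """One Euler step of Lanchester-style attrition: each side's
    losses per unit of time are proportional to the opposing side's
    strength. An illustration of aggregate combat as rates of change,
    not VR-Forces' actual algorithm."""
    new_a = max(a - beta * b * dt, 0.0)
    new_b = max(b - alpha * a * dt, 0.0)
    return new_a, new_b

a, b = 100.0, 60.0            # unit strengths
for _ in range(10):           # equal per-side effectiveness
    a, b = attrition_step(a, b, alpha=0.05, beta=0.05, dt=0.5)
# With equal effectiveness, the larger unit depletes the smaller one faster.
```

Running the loop shows exactly the behavior described above: the larger, better-resourced unit loses strength more slowly than its smaller opponent.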


Simulation has become an accepted, routine, and critical method of training militaries worldwide. Many nations have invested heavily in large simulations for wargaming; however, there is no "one size fits all" training simulation. Software that may be appropriate for one nation may be too cumbersome, resource intensive, and unmanageable for others. A low-overhead simulation system can address a nation’s wargaming and constructive simulation requirements while also being much more economical in terms of procurement, training, and sustainment.


MÄK CST fills the Command & Staff training capability gap. It combines the user-friendly features of a game with the capabilities of larger, more complex simulations to help trainees learn how to make stronger battlefield decisions. Because of its flexibility and ease of use, MÄK CST can be used in the classroom, in the simulation center, on deployment, and at home stations.


If you’re just joining us in this five-part blog series, welcome! Check out the previous few blogs describing the goal of this series, latency benchmark info, throughput benchmark info, and HLA services benchmark info.

In addition to turning services on and off, as noted in my last blog, the MÄK RTI provides a few ways to reduce traffic on the network. The two most commonly used methods are bundling and compression. The ideal settings for both of these features vary with the type of simulation being done, so it is best to understand their effects on traffic in order to use them effectively. The following graph shows the effects of bundling on network throughput:
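As a rough back-of-the-envelope model of why bundling helps (illustrative numbers, not MÄK RTI internals): every UDP packet pays a fixed header cost, and bundling N messages into one packet amortizes that cost across all N.

```python
def payload_fraction(msg_bytes, msgs_per_bundle, header_bytes=28):
    """Fraction of on-the-wire bytes that carry simulation payload,
    assuming a fixed 28-byte IP+UDP header per packet (real overhead
    also includes Ethernet framing). Bundling N messages into one
    packet amortizes the header across all N messages."""
    payload = msg_bytes * msgs_per_bundle
    return payload / (payload + header_bytes)

# Small state updates benefit most from bundling:
single = payload_fraction(100, 1)    # ~78% of bytes are payload
bundled = payload_fraction(100, 20)  # ~99% of bytes are payload
```

The trade-off is latency: a bundle is held until it fills (or a timer expires), so heavier bundling favors throughput-oriented exercises over latency-sensitive ones.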



In VR-Forces 4.3, we’ve made a number of enhancements that are not immediately obvious, but are still very useful if you know how to take advantage of them. In this post I’ll share some tips on how to make use of the improved Simulation Model Set (SMS) management that is part of VR-Forces 4.3.

For those who don’t already know, a Simulation Model Set (SMS) in VR-Forces is the set of configuration files that defines the entities and objects available for creation in a scenario. This includes everything from their names and type enumerations to their behavior logic and physical movement dynamics. An SMS is typically modified using the VR-Forces Entity Editor tool.

VR-Forces ships with several preconfigured SMSs containing hundreds of objects to use in scenarios; however, it is quite common for customers to add specific models, or to modify the shipped VR-Forces models to suit the needs of various projects. In the past, this was most often done by editing the default SMS in VR-Forces directly, or by copying it wholesale and making edits to the copy. Both of these options lead to significant upgrade work when moving to a new version of VR-Forces, since any edits to the default SMS have to be merged with the new version by hand.
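The management improvement amounts to layering: keep project customizations separate from the shipped defaults and resolve them at load time. Here is a minimal sketch of the layering idea (the dictionaries stand in for entity definitions; this is not the VR-Forces SMS file format):

```python
def resolve_sms(base_sms, overlay_sms):
    """Resolve entity definitions against a shipped base SMS plus a
    project-specific overlay: overlay entries win, and everything else
    falls through to the defaults. Because customizations live in a
    separate overlay, upgrading VR-Forces replaces only the base, with
    no hand-merging of edits. (A sketch of the concept, not the
    VR-Forces SMS format.)"""
    resolved = dict(base_sms)
    resolved.update(overlay_sms)
    return resolved

base = {"T-72": {"speed": 60}, "M1A2": {"speed": 67}}   # shipped defaults
overlay = {"M1A2": {"speed": 70}}                        # project tweak
sms = resolve_sms(base, overlay)
```

With this structure, the overlay carries only the deltas, so an upgrade swaps in a new base while the project's tweaks keep applying unchanged.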
