Pilots rely on visual inputs more than any other sense to orient themselves in flight. Because vision is so important, night flying introduces new challenges: limited visibility, night illusions, and light blindness. To combat these issues, pilots train to use a consistent, regulated set of lights (indicating the approach, threshold, and so on) that guide them through darkness, help them identify where they are, and let them judge how fast they are moving.

If you have a deep technical understanding of building 3D models, here are some quick tips to make sure they perform optimally in your VR-Vantage application.

To make your models as fast as possible, you need to minimize drawables. Each drawable is a collection of polygons with the same state set and the same primitive set. Every drawable must be submitted to the graphics card separately, so each one adds to your draw time. Having hundreds, if not thousands, of drawables in your scene will kill your performance. To reduce drawables, share state sets across geometry and merge geometry wherever you can; the sketch below shows one way to automate this after loading a model.
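The drawable, state-set, and primitive-set terminology above matches OpenSceneGraph (OSG). Assuming your content pipeline can use stock OSG tools, here is a hedged sketch of collapsing drawables automatically after load; the file names are placeholders:

```cpp
// Sketch: reduce drawable count with osgUtil::Optimizer.
// SHARE_DUPLICATE_STATE makes identical state sets shared, which in
// turn lets MERGE_GEODES and MERGE_GEOMETRY combine compatible
// geometry into fewer, larger drawables.
#include <osgDB/ReadFile>
#include <osgDB/WriteFile>
#include <osgUtil/Optimizer>

int main()
{
    osg::ref_ptr<osg::Node> model = osgDB::readNodeFile("vehicle.flt"); // placeholder input
    if (!model) return 1;

    osgUtil::Optimizer optimizer;
    optimizer.optimize(model.get(),
                       osgUtil::Optimizer::SHARE_DUPLICATE_STATE |
                       osgUtil::Optimizer::MERGE_GEODES |
                       osgUtil::Optimizer::MERGE_GEOMETRY);

    osgDB::writeNodeFile(*model, "vehicle_optimized.ive"); // placeholder output
    return 0;
}
```

The optimizer can only merge geometry whose state already matches, which is why sharing textures and materials in your modeling tool pays off before any post-processing.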

We've just launched RadarFX, our new Synthetic Aperture Radar (SAR) simulation and visualization product! We built it in conjunction with our partner, JRM Technologies.

In the real world, a SAR sensor is typically attached to an aircraft or satellite. A SAR system generates photograph-like still images of a target area by combining radar return data collected from multiple antenna locations along the path of flight. Requests from users on the ground define the target area to be scanned, along with the other parameters used to generate and return the image.
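The post doesn't go into the math, but a standard first-order approximation shows why combining returns along the flight path works: the flown path acts like one long antenna.

```latex
% First-order SAR azimuth (cross-range) resolution:
%   lambda = radar wavelength
%   R      = slant range to the target
%   L      = length of the synthetic aperture (the flown path segment)
\[
  \delta_{\mathrm{az}} \approx \frac{\lambda R}{2L}
\]
```

A longer flight path L means finer azimuth resolution, which is how a physically small antenna produces photograph-like imagery.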

To draw a frame, VR-Vantage needs to a) build and update the scene graph, b) organize the scene graph and send it to the GPU, and c) have the GPU render the scene. There are other steps, like loading terrain from disk (or across the network) and processing network packets (DIS/HLA or CIGI), but for the most part those occur in other threads; I will address each of them in future posts. For the moment, let's just focus on static scenes devoid of entities.
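VR-Vantage's frame loop is internal, but the same three phases exist in stock OpenSceneGraph, so here is an illustrative sketch (not VR-Vantage's actual code) that also turns on OSG's per-frame timing overlay:

```cpp
// Sketch: the update -> cull -> draw phases of a frame, with an
// on-screen stats overlay (press 's' at runtime) showing how long
// each phase takes.
#include <osgDB/ReadFile>
#include <osgViewer/Viewer>
#include <osgViewer/ViewerEventHandlers>

int main()
{
    osgViewer::Viewer viewer;
    viewer.setSceneData(osgDB::readNodeFile("terrain.ive")); // placeholder file

    // Cycles through update/cull/draw timing displays.
    viewer.addEventHandler(new osgViewer::StatsHandler);

    while (!viewer.done())
    {
        // frame() runs: updateTraversal()     (a: build/update the scene graph)
        //               renderingTraversals() (b: cull + dispatch to the GPU)
        // and the GPU then renders what was dispatched (c).
        viewer.frame();
    }
    return 0;
}
```

Watching those three timings separately tells you which phase to attack first when a scene is slow.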

At MÄK, we help our customers simulate unmanned vehicles in a lot of ways, depending on what part of the system architecture the customer is addressing. Some use VR-Forces to simulate the UAV’s mission plans and flight dynamics. Some use VR-Vantage to simulate the EO/IR sensor video. Of those, some use VR-Vantage as the basis of their payload simulation and others stream video into their ground control station (GCS) from a VR-Vantage streaming video server. 

All of our customers now have the opportunity to add a Synthetic Aperture Radar (SAR) to their UAV simulations, and here's how to do it. SensorFX SAR Server comes in two parts: a client and a server. The server runs on a machine on your network and connects to one or more clients. Whenever a client requests a SAR image, it sends a message to the server with the UAV's flight information and the target location to be imaged. The server, built with VR-Vantage, then uses JRM Technologies' radar simulation technology to generate a synthetic radar image and return it to the client.
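The actual client API isn't shown in the post, so the following is a purely hypothetical sketch; every type and field name is illustrative, not the real SensorFX SAR Server interface. It only conveys the shape of the request described above: the UAV's flight state plus the target to image.

```cpp
// Hypothetical request message -- illustrative names only, not the
// actual SensorFX SAR Server client API.
struct SarImageRequest
{
    // Flight information of the sensing platform (the UAV).
    double platformLatitude;   // degrees
    double platformLongitude;  // degrees
    double platformAltitude;   // meters MSL
    double platformHeading;    // degrees, direction of flight

    // Target location where the SAR image should be taken.
    double targetLatitude;     // degrees
    double targetLongitude;    // degrees
    double patchWidthMeters;   // size of the area to scan
};
// The client fills one of these in, sends it to the server, and the
// server returns the rendered radar image over the network.
```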

The SAR Server renders SAR images taking into account the specified radar properties, the terrain database, and knowledge of all the simulated entities. The radar parameters are configured on the server in advance of the simulation. The terrain database uses the same material classification data that SensorFX uses for rendering infrared camera video, so your sensor package will have the best possible correlation. The server connects to the simulation exercise network using DIS or HLA so that it knows about all the entities; it uses that knowledge both to include targets in the SAR scenes and to let you host the SAR sensor on a simulated entity.
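Likewise, the post doesn't show the server's configuration format, so hypothetically, the radar properties configured ahead of the simulation would cover parameters like these:

```cpp
// Hypothetical server-side radar configuration -- illustrative only,
// not the real SensorFX SAR Server settings.
#include <string>

struct SarRadarConfig
{
    double      centerFrequencyGHz; // e.g. an X-band radar near 10 GHz
    double      bandwidthMHz;       // drives range resolution
    std::string polarization;       // "HH", "VV", "HV", or "VH"
    double      dynamicRangeDb;     // maps returns to image gray levels
};
```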
