As many of you probably know, I have been hard at work during the last months preparing Live++ for public release.
Today, I can finally share the news that Live++ 1.0.0 is here!
Go grab your trial today and spread the word!
About a year and a half ago I posted a few C++ tips on Twitter. Because not all of my blog’s readers are on Twitter and Twitter is not the best medium for archiving things, I decided to write a blog post instead, accumulating all tips in one place.
This also allows me to go into more detail where necessary and comment on a few things noted on Twitter.
I will keep updating this post as I add more tips.
Update on 21st Dec 2017: Added tips #5-#7.
When dealing with .dlls and Visual Studio, there is a well-known problem of the Visual Studio debugger holding onto the .pdb file, even after the .dll has been unloaded by a call to FreeLibrary().
In today’s post, we will finally take a look at the last remaining piece of the new job system: adding dependencies between jobs.
I recently had the need to retrieve the type of a template argument as a human-readable string for debugging purposes, but without using RTTI – so typeid and type_info were out of the question.
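One way to do this without RTTI is to parse the type's name out of the compiler-provided function signature. The sketch below is illustrative only, not necessarily the implementation the post describes; it assumes `__FUNCSIG__` on MSVC and `__PRETTY_FUNCTION__` on GCC/Clang, whose exact formats are compiler-specific:

```cpp
#include <cstring>
#include <string>

// returns the raw, compiler-specific signature that embeds T's name
template <typename T>
static const char* RawSignature(void)
{
#if defined(_MSC_VER)
    return __FUNCSIG__;         // e.g. "const char *__cdecl RawSignature<int>(void)"
#else
    return __PRETTY_FUNCTION__; // e.g. "const char* RawSignature() [with T = int]"
#endif
}

// extracts the human-readable name of T from the signature, no RTTI involved
template <typename T>
std::string TypeName(void)
{
    const char* sig = RawSignature<T>();
#if defined(_MSC_VER)
    const char* begin = std::strstr(sig, "RawSignature<") + 13;
    const char* end = std::strrchr(sig, '>');
#else
    const char* begin = std::strstr(sig, "T = ") + 4;
    const char* end = std::strrchr(sig, ']');
#endif
    return std::string(begin, end - begin);
}
```

Because the whole string is produced at compile time by the compiler itself, this works even with RTTI disabled; the trade-off is that the parsing offsets are tied to the compilers you support.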
Continuing from where we left off last time, today we are going to discuss how to build high-level algorithms such as parallel_for using our job system.
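The core pattern behind a job-system `parallel_for` is divide-and-conquer over the index range. As a hedged sketch (the real version would run on the job system's workers; here `std::async` stands in as a placeholder scheduler so the splitting logic itself is runnable):

```cpp
#include <cstddef>
#include <future>

// splits [data, data + count) recursively until ranges are small enough
// to run serially; sibling halves execute in parallel
template <typename T, typename F>
void parallel_for(T* data, size_t count, F function, size_t splitThreshold = 256)
{
    if (count <= splitThreshold)
    {
        // range is small enough: process it serially
        for (size_t i = 0; i < count; ++i)
            function(data[i]);
        return;
    }

    // split the range in two; the left half runs concurrently with the right
    const size_t leftCount = count / 2;
    auto left = std::async(std::launch::async,
        [=] { parallel_for(data, leftCount, function, splitThreshold); });
    parallel_for(data + leftCount, count - leftCount, function, splitThreshold);
    left.wait();
}
```

The split threshold controls granularity: too small and scheduling overhead dominates, too large and the available cores go underutilized.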
This week, we will finally tackle the heart of the job system: the implementation of the lock-free work-stealing queue. Read on for a foray into low-level programming.
As promised in the last post, today we will be looking at how to get rid of new and delete when allocating jobs in our job system. Allocations can be dealt with in a much more efficient way, as long as we are willing to sacrifice some memory in return. The resulting performance improvement is huge, and certainly worth it.
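The basic idea can be sketched as follows (names and sizes are illustrative, not the engine's actual API): preallocate all jobs in a fixed-size pool and hand them out with a simple bump index, recycling the whole pool once per frame instead of freeing jobs individually:

```cpp
#include <atomic>
#include <cstdint>

struct Job
{
    void (*function)(Job*);
    Job* parent;
    std::atomic<int32_t> unfinishedJobs;
    // pad to a cache line so jobs don't cause false sharing
    char padding[64 - 2 * sizeof(void*) - sizeof(std::atomic<int32_t>)];
};

static const uint32_t MAX_JOB_COUNT = 4096;  // must be a power of two
static Job g_jobPool[MAX_JOB_COUNT];         // preallocated once, reused every frame
static uint32_t g_allocatedJobs = 0u;        // would be per-thread in a real system

// allocation is a single increment and a masked index: no heap, no locks
Job* AllocateJob(void)
{
    const uint32_t index = g_allocatedJobs++;
    return &g_jobPool[index & (MAX_JOB_COUNT - 1u)];
}
```

This assumes no more than MAX_JOB_COUNT jobs are live at once; the wrap-around via the mask makes the pool self-recycling without any explicit free.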
Back in 2012, I wrote about the task scheduler implementation in Molecule. Three years have passed since then, and now it’s time to give the old system a long-deserved facelift.
To gauge your interests, I’ve decided to run a quick poll letting you choose which topic you would like to hear more about.
Every now and then, I get asked about the current status of the Molecule Engine, whether there is an evaluation version to download, or if an Indie license can be acquired somehow. There are a few things that need to be said, because this is something that is very close to my heart.
I’m proud and excited to announce that both my proposals for Game Engine Gems 3 have been accepted! The book is due GDC 2016, so make sure to pick it up once it’s released.
Hopefully this will get me back into the habit of writing a bit more. I have plenty of new (and also old!) topics to write about, but I’m really lacking the time at the moment.
The last post of this series basically concluded with the following questions: how do we efficiently allocate memory for individual command packets in the case of multiple threads adding commands to the same bucket? How can we ensure good cache utilization throughout the whole process of storing and submitting command packets?
This is what we are going to tackle today. I want to show how bad allocation behavior for command packets can affect the performance of the whole multi-threaded rendering process, and what our alternatives are.
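One of the alternatives is a thread-local linear allocator: each thread bumps a pointer through its own preallocated buffer, so packet allocation never touches a shared heap or lock, and packets written by one thread end up contiguous in memory. A minimal sketch, with names and sizes chosen for illustration:

```cpp
#include <cstddef>

class LinearAllocator
{
public:
    LinearAllocator(void* memory, size_t size)
        : m_start(static_cast<char*>(memory))
        , m_current(m_start)
        , m_end(m_start + size)
    {}

    // bump allocation: one comparison and one pointer increment
    void* Allocate(size_t size)
    {
        if (m_current + size > m_end)
            return nullptr;   // buffer exhausted for this frame
        void* ptr = m_current;
        m_current += size;
        return ptr;
    }

    // called once per frame after all command packets have been submitted
    void Reset(void) { m_current = m_start; }

private:
    char* m_start;
    char* m_current;
    char* m_end;
};

// one allocator per thread: no contention, good locality per thread
thread_local char tls_buffer[1024 * 1024];
thread_local LinearAllocator tls_allocator(tls_buffer, sizeof(tls_buffer));
```

Because consecutive allocations are adjacent in memory, submitting a thread's packets in order also walks the cache linearly, which addresses the second question above.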
The WordPress.com stats helper monkeys prepared a 2014 annual report for this blog.
Here’s an excerpt:
The Louvre Museum has 8.5 million visitors per year. This blog was viewed about 160,000 times in 2014. If it were an exhibit at the Louvre Museum, it would take about 7 days for that many people to see it.
Did you know that you can now also follow us on Facebook?
Always stay up-to-date with blog posts and the latest news!
If you like our blog and want to express your support for what we do, hit the Like-button on our Facebook page. We appreciate it!
In the previous part of this series, I’ve talked a bit about how to design the stateless rendering API, but left out a few details. This time, I’m going to cover those details as well as some questions that came up in the comments in the meantime, and even show parts of the current implementation.
Continuing where we left off last time, today I want to present a few ideas about how to design the API that enables us to do stateless rendering.
In this post, I would like to describe what features and performance characteristics I want from a modern rendering system: it should support stateless rendering, rendering in different layers/buckets, and rendering that can run in parallel on as many cores as are available.
I recently came across several old projects I did almost 20 years ago, and thought about how to preserve them so they aren’t lost again. I already believed them to be lost once, until a classmate approached me and told me what he had found on some of the old floppy disks he had lying around.
And what better place is there to conserve things than the internet?
Some time ago, I announced that the Molecule Engine uses C++ as a scripting language. Today, I can share implementation details and a few additional tricks that were used to keep compilation times and executable sizes down.