@abhshkdz and I saw this error quite a while ago. After some investigation it seems to be an EGL issue.
The cause appears to be that the GPU resources used by an EGL context are not released until all EGL contexts within the process have been destroyed. In other words, destroying a single EGL context does not actually release its resources; they are only freed once every EGL context in the process is destroyed.
This can be verified by the following code:
```cpp
#include <cstdio>
#include <vector>

#include "suncg/render.hh"

using namespace render;
using namespace std;

int main() {
    std::vector<SUNCGRenderAPIThread*> apis;
    apis.resize(1000);
    for (int i = 0; i < 1000; ++i) {
        printf("%d\n", i);
        apis[i] = new SUNCGRenderAPIThread(800, 600, 0);
        if (i >= 1) {
            delete apis[i - 1];
        }
    }
}
```
This code always destroys a context right after creating a new one, so at least one context is alive at all times. It triggers the framebuffer error under the EGL backend, but not under the GLX backend.
```cpp
#include <cstdio>
#include <vector>

#include "suncg/render.hh"

using namespace render;
using namespace std;

int main() {
    std::vector<SUNCGRenderAPIThread*> apis;
    apis.resize(1000);
    for (int i = 0; i < 1000; ++i) {
        // Every 10 iterations, destroy all 10 previously created contexts
        // before allocating any new ones.
        if (i % 10 == 0 && i >= 10) {
            for (int k = 0; k < 10; ++k) {
                printf("Delete %d\n", i - k - 1);
                delete apis[i - k - 1];
            }
        }
        printf("%d\n", i);
        apis[i] = new SUNCGRenderAPIThread(800, 600, 0);
    }
}
```
This version, however, works fine, because after every batch of 10 contexts is created, all of them are destroyed at once before any new ones are allocated.
The same applies on the Python side. To actually free the resources, it is important (and not always easy) to make sure that no reference to any SUNCGRenderAPIThread stays alive.
For instance, this code results in an error:
```python
for k in range(1000):
    api = objrender.RenderAPIThread(...)
```
because each assignment first constructs the new api object and only then destroys the old one, so two contexts briefly coexist. However, this works:
```python
for k in range(1000):
    api = None
    api = objrender.RenderAPIThread(...)
```
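The difference between the two loops is ordinary CPython semantics, and can be demonstrated without the renderer. The sketch below uses a hypothetical `FakeContext` stand-in (not part of `objrender`) whose `__del__` records destruction order; it relies on CPython's immediate refcount-based finalization:

```python
log = []

class FakeContext:
    """Hypothetical stand-in for objrender.RenderAPIThread.

    Records creation and destruction order; in CPython, __del__ runs
    as soon as the last reference to the object goes away.
    """
    def __init__(self, i):
        self.i = i
        log.append(("create", i))

    def __del__(self):
        log.append(("destroy", self.i))

# Pattern 1: plain reassignment. The right-hand side is constructed
# before the name rebinds, so the old context is still alive when the
# new one is created, mirroring the failing loop above.
api = None
for k in range(2):
    api = FakeContext(k)
api = None
overlapping = log[:]
log.clear()

# Pattern 2: drop the reference first. The old context is destroyed
# before the new one exists, mirroring the working loop above.
api = None
for k in range(2):
    api = None
    api = FakeContext(k)
api = None
disjoint = log[:]

print(overlapping)  # create 0, create 1, destroy 0, destroy 1
print(disjoint)     # create 0, destroy 0, create 1, destroy 1
```

With the real renderer, the "overlapping" order is what keeps at least one EGL context alive at every moment, which per the behavior above prevents any GPU resources from being released.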