Agreed, it needs some optimization. The reason the attractor gets recalculated when rotating is that there is actually no geometry at all.
This is different from the approach used in e.g. https://syntopia.github.io/StrangeAttractors/, where the Runge-Kutta integration is done in JS and converted to geometry. The advantage of that approach is that rotation is smooth (and there is no shader recompilation), but the number of lines/tubes is limited by the time it takes JS to do the RK integration.
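For reference, that CPU-side approach boils down to a loop like the one below. This is just an illustrative RK4 sketch for the Lorenz system, not code from either project:

```js
// Illustrative sketch of the CPU-side approach: RK4-integrate the Lorenz
// system in JS and collect the points for upload as line/tube geometry.
function lorenz([x, y, z], sigma = 10, rho = 28, beta = 8 / 3) {
  return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z];
}

function rk4Step(p, h) {
  const add = (a, b, s) => a.map((v, i) => v + s * b[i]);
  const k1 = lorenz(p);
  const k2 = lorenz(add(p, k1, h / 2));
  const k3 = lorenz(add(p, k2, h / 2));
  const k4 = lorenz(add(p, k3, h));
  return p.map((v, i) => v + (h / 6) * (k1[i] + 2 * k2[i] + 2 * k3[i] + k4[i]));
}

const positions = [];
let p = [0.1, 0, 0];
for (let i = 0; i < 10000; i++) {
  p = rk4Step(p, 0.005);
  positions.push(...p);
}
// `positions` then gets turned into line/tube geometry for the GPU;
// this JS loop is the bottleneck on how many lines you can have.
```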
In my approach, the image is generated fresh from each viewpoint as follows:
- initialize a floating point texture A with the start points
- in a shader, update all points in parallel to their next position, via R-K, writing to texture B
- draw all of the GL line segments via a vertex shader which reads the line endpoints from the textures A and B (with lighting done in the line fragment shader)
- composite the result into the framebuffer
- repeat for the number of integration timesteps, ping-ponging A/B
- iterate over frames, with the start points jittered, blending each frame equally
This renders the thick tubes over time.
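In rough pseudocode, the per-frame loop is something like the sketch below. The resource setup, program names and helpers (seedStartPoints, compositeToScreen, accumFbo, etc.) are illustrative assumptions, not the actual code:

```js
// Illustrative sketch of the ping-pong loop described above. Assumes (not
// shown): a WebGL2 context `gl` with float render targets, two RGBA32F
// textures texA/texB (one particle position per texel) with matching
// framebuffers fboA/fboB, a full-screen quad bound for the update pass,
// a compiled `updateProgram` (does the RK step) and `lineProgram`
// (fetches segment endpoints from the two textures and lights them).
function renderFrame(numSteps, numParticles) {
  seedStartPoints(texA);                 // hypothetical: jittered start points

  for (let step = 0; step < numSteps; step++) {
    // 1) advance every particle one RK step: read texA, write texB
    gl.bindFramebuffer(gl.FRAMEBUFFER, fboB);
    gl.useProgram(updateProgram);
    gl.activeTexture(gl.TEXTURE0);
    gl.bindTexture(gl.TEXTURE_2D, texA);
    gl.drawArrays(gl.TRIANGLES, 0, 6);   // full-screen pass over the texture

    // 2) draw one line segment per particle; the vertex shader fetches its
    //    two endpoints from texA (old position) and texB (new position)
    gl.bindFramebuffer(gl.FRAMEBUFFER, accumFbo);
    gl.useProgram(lineProgram);
    gl.activeTexture(gl.TEXTURE1);
    gl.bindTexture(gl.TEXTURE_2D, texB);
    gl.drawArrays(gl.LINES, 0, 2 * numParticles);

    // 3) ping-pong: B becomes the new "current" positions
    [texA, texB] = [texB, texA];
    [fboA, fboB] = [fboB, fboA];
  }

  compositeToScreen(accumFbo);           // hypothetical: blend into the framebuffer
}
```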
This way the integration happens entirely in shaders, with JS never seeing any geometry, so a really huge effective amount of geometry can be rendered (billions of lines?). The disadvantage is that the integration is re-done for every viewpoint, and the GLSL shader is recompiled whenever the vector field changes. I was optimizing here for detail of the visualization more than for smoothness of the interaction -- but I would like to improve the interactivity for sure.
(There are no web workers involved -- except during GIF rendering, which uses the very nice lib https://jnordberg.github.io/gif.js/.)
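To make the recompilation point concrete: the vector field text from the editor ends up spliced into the GLSL of the update pass, along these lines (an illustrative sketch, not the actual shader):

```js
// Illustrative: the user-edited vector field gets pasted into the GLSL
// source of the update pass, so changing it means rebuilding the program.
const fieldGLSL = `
  vec3 velocity(vec3 p) {
    // e.g. Lorenz; this text would come from the in-page code editor
    return vec3(10.0*(p.y - p.x), p.x*(28.0 - p.z) - p.y, p.x*p.y - (8.0/3.0)*p.z);
  }
`;

const updateFragSrc = `#version 300 es
  precision highp float;
  uniform sampler2D uPositions;   // texture A: current particle positions
  uniform float uDt;
  out vec4 outPosition;           // written to texture B
  ${fieldGLSL}
  void main() {
    vec3 p = texelFetch(uPositions, ivec2(gl_FragCoord.xy), 0).xyz;
    // one RK4 step of the user-defined field
    vec3 k1 = velocity(p);
    vec3 k2 = velocity(p + 0.5*uDt*k1);
    vec3 k3 = velocity(p + 0.5*uDt*k2);
    vec3 k4 = velocity(p + uDt*k3);
    outPosition = vec4(p + (uDt/6.0)*(k1 + 2.0*k2 + 2.0*k3 + k4), 1.0);
  }
`;
// Any edit to fieldGLSL means gl.shaderSource / gl.compileShader / link again.
```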
>It could use better controls for changing the attractor constants. Changing those constants can really give you a good feeling of what chaos and strange attractors really are.
>Some of the controls also feel a bit hard to handle, you might want to look into doing custom controls or looking at UX a bit more.
I'm keen to know where I can improve the controls. For the number/color scrubbing in the code editor I just use http://enjalot.github.io/Inlet/ pretty much out of the box. It does seem a little clunky, though generally I really like the notion of the UI being integrated with the code like this.
I'm not certain - are you using orthographic or perspective rendering of the generated fields? I only ask because the "source box" (I don't know what to call it - just something I saw as a wireframe in the Lorenz demo) seemed orthographic.
If so - it'd be nice to be able to switch between the two using the parameter controls or something; I understand why on a tool like this you'd want the orthographic view, but from an aesthetic perspective, having the perspective view would be nice to look at.
I'm not the audience, though, for this kind of tool - I barely understand what's being done (the math is way above my pay grade, but the simulation and visualization code is neat to see in action, and I could imagine potential artistic rendering uses for the tool).
EDIT: Looking at the code, it seems like you set up the camera to be in perspective mode - so I don't know why that "source box" thing I saw appears to be orthographic? Regardless, having both modes available might be useful (and maybe - on thinking about it - some kind of "bounding box" labeled grid on three sides?)...
It's doing a perspective rendering. You can change the camera FOV in the controls (the view approaches orthographic as the FOV goes to zero). I could perhaps add a dedicated orthographic mode, though.
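To make that concrete: if you shrink the FOV and pull the camera back so the subject keeps the same apparent size, the perspective view converges on an orthographic one. A rough sketch of the relationship (not the actual camera code):

```js
// Distance needed for an object of a given size to subtend the full FOV.
// As fov -> 0 the distance -> infinity and the rays become nearly parallel,
// i.e. the image becomes effectively orthographic.
function cameraDistanceFor(objectSize, fovDegrees) {
  const halfFov = (fovDegrees * Math.PI / 180) / 2;
  return (objectSize / 2) / Math.tan(halfFov);
}

console.log(cameraDistanceFor(50, 60));  // ~43   : strong perspective
console.log(cameraDistanceFor(50, 5));   // ~573  : nearly orthographic
console.log(cameraDistanceFor(50, 0.5)); // ~5730 : visually indistinguishable from ortho
```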