So here's why this is particularly awesome in my head.
I am a digital nomad, and basically travel with a backpack's worth of stuff, developing off my 13" MBA. The thing I miss the most about my old office is having a nice monitor. The thing I miss most about having a home is my 3-monitor desktop setup. Both are things I had to give up because they don't fit in my backpack and because I can't easily procure them wherever I travel.
But if, I dunno, a year from now, I can fit an Oculus Rift headset into my pack, I can have all the multi-desktop computing environments I want, and they'd actually be way more awesome than just 3 flat monitors.
Beyond all that, text editors and dev environments have seen little UX improvement given the rate of evolution of everything else digital. This looks like an awesome step toward trying some really radical things (I know this is a previously explored topic, but I love being reminded of it).
OP, this is awesome. I happily leave the future of my dev environments in your hands.
I would be surprised if it was a lot longer. And regardless, it seems very likely it will eventually happen, thus working towards that goal--even if the hardware isn't yet ideal--is probably a good idea.
"resolution to get high enough to be usable for fine-detail tasks like professional coding"
Coding does not require high-resolution displays. You can code on an 80-character-wide terminal with an 8x8-pixel font. I have. I still use bitmap fonts today (Terminus). And let me remind you that blind people code as well. Visible text is just an abstraction; other ones are possible.
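For a sense of scale, the arithmetic behind that claim is easy to check (the grid and glyph sizes below are the classic terminal defaults, not anything specific to the Rift):

```javascript
// How many pixels does a classic 80x24 terminal actually need?
const cols = 80, rows = 24;   // traditional terminal geometry
const glyphW = 8, glyphH = 8; // 8x8 bitmap font, e.g. small Terminus sizes

const width = cols * glyphW;  // 640 px
const height = rows * glyphH; // 192 px
console.log(width + "x" + height); // "640x192"
```

640x192 is a small fraction of even the DK2's 960x1080 per eye; the real catch is that the lens warp spreads those panel pixels unevenly, so the pixels covering a head-locked text area are far fewer than the panel count suggests.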
Holy crap, great idea! Imagine if adding a "monitor" (virtually speaking, of course) to your setup was no more difficult than splitting a Vim window... that would be pretty rad.
An Oculus Rift is much less expensive than your typical multi-monitor setup too, FWIW.
Actually, I'm hoping to be able to experiment with code layout ideas that do not strictly adhere to a window. 2D elements tend to not map well to 3D spaces. One of the problems with the text-in-a-texture method that I've done here is that there is no relief of depth to the text, and relief is actually an important part of how we make sense out of objects in 3D space. It's fine on a 2D screen because the context is so restricted that it all stays consistent. But in 3D, it's kind of disconcerting to see completely flat textures.
Like, why do we still use plain text in flat files for source code (for the most part, there are a few exceptions-that-prove-the-rule)? Why are there so many examples of graphical programming systems that have completely failed? Is it because there is something inherent to text that makes it superior for representing algorithms? Or is it because 1-dimensional text maps to our IO systems the best? I suspect there is a healthy dose of the latter, but we haven't ever had a significant "other" IO system to ever test that theory.
In a lot of ways, Primrose isn't even my goal, more like a benchmark from which to start developing other ideas in text representation in VR UIs. I don't know what they will be, but at least I now have a starting point.
> Like, why do we still use plain text in flat files for source code
Text transports with full fidelity between print and screen, and between pretty much any system.
Anything beyond text becomes a) harder to communicate (e.g. consider how many people would struggle with even reading aloud math formulas with symbols they are unfamiliar with; now extend that to trying to talk about representations that have extra spatial information), and b) something that requires extra tooling for far more varied environments than you ever thought people actually code in (e.g. I have an idea; I want to write down code in my phone's note-taking app).
Serializing it for IO is trivial. Representing it in a way that allows us to handle the two problems above in a way that makes it seem superior to text in enough cases is a very, very hard problem.
I believe the reason is that graphical (2d) notations are harder to communicate and therefore not standardized. If they are standardized, they are usually very application specific and not really general enough. For example lambda calculus has a nice graphical representation as string diagrams, since it corresponds to calculations in a cartesian closed category. Typed lambda calculus has a similar representation, but with multicolored strings.
Pretty much any problem easily modeled as data flow has a nice representation in an appropriate (traced, braided) monoidal category; examples would be electrical circuits, chemical reaction networks (simple atmosphere models, so-called "box models", are an example), hardware control software (LabVIEW), general ODE/state-machine simulation (SIMULINK), "business processes" (forgot the name of the vendor), not to mention the various video/audio/shader programming environments.
Probably the intersection between the people capable of building a "generic" graphical programming language and the people who know enough category theory to inform the design is zero, even though software that solves subproblems of this kind is very successful. Ultimately you really want at least a 3d graphical language, because you want to express relations between 2d processes (for example, the equation a+b = b+a should really be understood as such a 3d process). Coincidentally, relations between functions are something even programming languages like Haskell fail to address: Haskell has type classes, but no way of specifying the equations that are required to hold.
The only language I'm aware of that does that is Magma, which unsurprisingly is based on operads (essentially a multicolored monoidal category with a fixed set of monoidal functors and natural transformations) and was designed by mathematicians, and to a lesser extent Mathematica and Prolog.
My wife chimes in here and says, "don't even talk to me about LabVIEW". She's apparently not happy with it right now, haha.
I don't see what you mean about Magma. This is what I've found for Magma: http://www.math.harvard.edu/computing/magma/. It looks like Pascal with Python list comprehensions and sets as a first-class type.
Source code in most languages is a one-dimensional construct. Every source file in the project could be concatenated, the newline characters removed, and it would still compile to the same program (assuming the right ordering, barring white-space significant languages, etc.).
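To make that concrete, here's a small sketch in JavaScript (a brace-delimited language, so newlines are insignificant apart from semicolon-insertion corner cases); the formatted source and its flattened form evaluate identically:

```javascript
// The same program, once formatted and once with every newline
// replaced by a space. To the parser, the two are token-for-token
// identical, illustrating that the line structure is presentation,
// not semantics.
const formatted = `
function add(a, b) {
  return a + b;
}
add(2, 3);
`;
const flattened = formatted.replace(/\n/g, " ");

// Evaluating either string yields the same result.
console.log(eval(formatted), eval(flattened)); // 5 5
```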
Is that because we are linear? Is that because our computers are linear? I don't think we're ever going to get away from written language as the abstraction of source code. Most (all?) people think in words. But how that language is presented might change. Literal lines and connectors between code elements have sucked in 2D because they just get too dense for 2D space. Is that because linear things "want" to stay linear? Or is it because we have had to deal with 2D interfaces and it's easier to work in N-1 dimensions when using an N-dimensional interface?
And while I think I've seen attempts at this in 3D, then we have the other issue of 3D interfaces being terrible on 2D IO devices. Never have the two been put together.
But I think that's going to change very soon. A lot more people are experimenting with this sort of stuff these days. Good-enough stereo displays are accessible to everyone (slap your phone in the cardboard box you ordered a set of $5 lenses in). Language design is more accessible to everyone. Editor development is, too (I couldn't imagine doing this project 5 years ago, and even 3 years ago it would have been a very poor experience). There is just a culmination of different technologies that make it possible for just about anyone to hack on all of this stuff.
Somehow, there seems to be a lot of discouragement lately to do so, but that's a different story entirely. I'll just leave it at saying, I hate the phrase, "don't reinvent the wheel".
Well, LabVIEW might not be the best example :), but in part that is because it has many quirks that would hopefully not be present in a more general system. Also because most LabVIEW sheets were probably put together by someone who just wanted to get something to work at all and didn't really care about the next person who had to deal with the mess...
I still think that a graphical language for this is worth exploring, because it already happens to be the way domain experts think, which is presumably why most of the successful commercial software is targeted at them. See for example:
http://math.ucr.edu/home/baez/networks/
for an overview of how all those things tie together.
Regarding Magma here is a PDF, which contains a summary of the theoretical foundations:
Essentially it is a language that can do type-checked symbolic reasoning, whereas, for example, Mathematica is for the most part untyped. Page 11 explains the steps necessary to extend the system (you have to write a C library, so that is less than ideal). The reason I bring this up is because
relations between symbolic expressions (for example equations) are really something that lives in a higher dimensional space, see for example the Associahedron
(http://en.wikipedia.org/wiki/Associahedron). In a 2-dimensional language, ordinary programs would be graphs and metaprogramming would be surfaces between graphs. As long as you have only a 2-dimensional surface available, that kind of representation probably would not work, but I believe it is rather helpful as a conceptual idea.
I agree with you, that the moment convenient augmented reality hits, we will hopefully see different variations of program representations emerge. Especially complicated concurrent systems would probably benefit from it, since they don't have a sequential control flow.
Well... if you look at things like (color)Forth (and highlighting in IDEs/editors) -- most people don't actually code in "plain" text (there are color/typographic hints that indicate meaning). Other than that -- we do tend to still use text to represent mathematics, so I don't think it's that strange that we usually represent code as text. For another exception to the rule: see graphical editors for UI layout, as seen in many IDEs (i.e. using graphical programming for representing graphics) -- or 3d modellers like Blender (when you add bones etc., you could argue that you're "programming"). Then there are of course spreadsheets -- but they tend to be pretty text-focused -- even if they use layout to great effect.
What stops you from having multiple windows on your desktop?
Oculus sucks at displaying text: it's low resolution (half of 1080p), it has crazy chromatic aberrations, and you have to "spherize" your image to correct for the lens distortion, further reducing the effective resolution.
Oculus is not light - it's pretty annoying to wear it for longer than 20-30 minutes.
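For anyone curious what that "spherize" step looks like, here is a minimal sketch of the usual radial-polynomial warp applied before scan-out; the coefficients below are illustrative placeholders, not the Rift's actual calibration constants:

```javascript
// Push each texture coordinate outward as a polynomial function of
// its distance from the lens center (barrel distortion), which
// cancels the pincushion distortion of the lens. Coefficients k1, k2
// are made-up illustrative values.
function barrelDistort(u, v, k1 = 0.22, k2 = 0.24) {
  const x = u - 0.5, y = v - 0.5;          // center the coordinate
  const r2 = x * x + y * y;                // squared radial distance
  const scale = 1 + k1 * r2 + k2 * r2 * r2;
  return [0.5 + x * scale, 0.5 + y * scale];
}

// The center is untouched; points near the edge are pushed outward,
// which is why effective resolution drops toward the periphery.
console.log(barrelDistort(0.5, 0.5)); // [ 0.5, 0.5 ]
```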
This is all true of the hardware as it exists right now. By the same token the original iPhone was low res, slow, only 2G, and bulky. The iPhone 4 (first with retina display) came out only 3 years later. That's a huge leap in such a short period of time.
If VR catches on at all you can expect it to see the same type of acceleration.
Yeah, that was kind of a motivation for me. I was sick of standing around, talking about what was or was not good for VR, with people who didn't even own a Rift, let alone program anything for it.
Nothing stops me from having multiple windows on my 13" desktop. But it's clunky to bounce between them, and I lose context too easily while ctrl/alt/shift+tab/~/1/2/3-ing through my browser, text editor, and output window. With multiple displays, I can just turn my head or glance at what I need and be back in context right away. I mean, I hope I don't have to explain why multiple monitors are nice (for most people).
The same is conceivable with a VR solution. That makes it awesome.
As for the problems you mention: all extremely solvable. I'm sure you were alive when people were editing text on lower-resolution displays. Lens technology for dealing with CA to a great extent already exists; it just needs to be tuned for VR settings. I could lie down and use it. Lighter materials could be used in future versions. It could be made disassemblable for easier packing/storage.
I have faith in our current generation of technologists to make these things and more happen before my development career sunsets :)
I think the version based on the Note 4 is 2560x1440 so that's already quite a big leap up. Give it two years and we might have usable text editing maybe?
There are definitely people working on it, but I don't think the resolution is going to be there for text editing in the first (or even second) generation Rift. That said, take a look at these: http://www.reddit.com/r/oculus/comments/2hbktl/best_virtual_...
This issue is actually where the idea came from to put the text on the inside of a sphere. I wanted to make a VERY big text field, so that I could fit a good amount of text on the object at a font size large enough to read the text well. At that size, the edges of a rectangular window are so much further away from the viewer that there is significant distortion. The sphere makes for a much more comfortable viewing angle.
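A rough sketch of the math behind that (the angular spans and radius below are illustrative, not Primrose's actual values): each text cell becomes a latitude/longitude on a sphere centered on the viewer, so every glyph sits at the same distance from the eye.

```javascript
// Map a (col, row) cell of a text grid onto the inside of a sphere
// centered on the viewer. Horizontal position becomes longitude,
// vertical position becomes latitude, both centered straight ahead.
function cellToSphere(col, row, cols, rows, radius = 2,
                      hSpan = Math.PI / 2, vSpan = Math.PI / 3) {
  const lon = (col / (cols - 1) - 0.5) * hSpan;
  const lat = (0.5 - row / (rows - 1)) * vSpan;
  return {
    x: radius * Math.cos(lat) * Math.sin(lon),
    y: radius * Math.sin(lat),
    z: -radius * Math.cos(lat) * Math.cos(lon), // -z is "forward"
  };
}

// Every cell is equidistant from the eye, so edge glyphs subtend the
// same angle as center glyphs; the distortion a big flat plane shows
// at its corners goes away.
const p = cellToSphere(0, 0, 80, 25);
console.log(Math.hypot(p.x, p.y, p.z)); // 2
```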
No, I wouldn't want to write Primrose itself in Primrose, and Primrose isn't a very big project. But little experiments with the live-coding have been fun. I'm still working out the use cases in my head. I suspect there might be more use for manipulating and visualizing code in VR--at least in the early days until resolution gets better--than straight, bulk writing code.
Ah, interesting approach! I'd been stuck thinking about it in terms of replicating my standard multi-monitor setup in VR, but without the physical constraints of my desk and the manufacturing costs of large panels, there's really no need to set it up that way. Neat!
Thank you. You have no idea how much that means to me. That's exactly why I'm working on VR projects right now, that "year from now" that you mentioned. I'm specifically going for WebVR because I believe the browser environment to be the most conducive to rapid prototyping.
Just a warning, if you try this while using Safari it might crash and close all of your tabs somehow. And it might make it so that you can't reopen those tabs despite the fact that you have tabs set to be restored each session. And you may in fact experience the acute pain of losing a year's worth of painstakingly accumulated tabs. Not that this has happened to me.
I'm really sorry about that. I didn't know it was going to do that. I had seen that it was doing that in browsers on Android and specifically disabled it for mobile users to avoid such an awful experience. I had just assumed that Safari, being WebKit, would work just as well as the least common denominator of Chrome and Opera. Again, very sorry.
I guess that will teach me to start sneaking into my wife's computer bag and borrowing her Macbook to test these things.
Because I cycle through a lot of tabs. Yes, some of them I leave up for months to pick away at, but others I finish in just a day. It's much easier to close tabs than to edit my bookmarks constantly. And losing those tabs isn't an issue because Safari can be made to reopen those tabs each session, even after unexpected shutdown or whatever. But then this happened. I need to write something that will back up my tabs.
I'm the same way, so I can relate to your pain. I've had this happen before with Chrome/Chromium with hundreds of tabs. I now use Firefox for two reasons: one, the UI handles hundreds of active tabs better than Chrome/Chromium's; and two, I now simply clear out long-term tabs periodically, bookmarking them when I no longer have an immediate need for them.
My friends think I'm insane by keeping so many tabs open. Maybe I am. Maybe you are, too. Yet with as disorganized as it might seem to others, it works well for my particular usage patterns. (It also helps that I segregate news, work, misc. activities into different profiles, too.) The thing is that sometimes when you open something that you want to get back to at some indeterminate point in the future, you can do one of two things: Bookmark it and forget about it, possibly permanently; or keep it open and stumble upon it as you go through your tabs to categorize them and clean them out. Although these days, I mostly just dump everything into a dated but uncategorized list to go through in the future. Session Manager handles the rest in the event I do something stupid.
I imagine I'll catch some flak for sharing this, but it works well for me. My work-related browser sessions are usually better organized and important stuff gets saved more or less immediately. My "fun" generic browsing instances--not so much.
Wow, you really get it. You really hit the nail on the head about bookmarks. They are almost always completely forgotten. Tabs, however, are constantly groomed and sifted through and pruned. If I could have my way, I would take Safari with all its nice features, like sexy tab scrolling and tab view (these are major assets to the tab junkie), and reshape it into something entirely tab based. Instead of bookmarks, you would have long-lived tabs. And all the tabs are categorized in whatever way the user wants. And tab view would actually allow you to manipulate tabs while viewing them (it is a fucking mystery to me why you can't do that). So it would be like having all your bookmarks and tabs mixed into one pot. And of course, bullet-proof tab recovery features without the cloud. Jesus, maybe I should just write a Safari extension.
Brothers, I found you! Finally people who use tabbed browsing like me.
Is there some forum for advanced tab browsing users? If not maybe we should make one.
For me windows are tasks/areas of research. A task can last minutes, hours, days or even weeks.
Inside windows there are groups, and each group has a starting point (usually a Google search) from which many tabs are generated and then checked individually. And then each tab grows its own history by following hyperlinks.
And when a task is completed, the resulting pages are bookmarked into a folder.
> Is there some forum for advanced tab browsing users?
Not that I'm aware of, but there might be a support group. ;)
Joking aside, it's curious how much flak you can take from suggesting that you make extensive use of a feature that's built into the browser... everyone I know thinks I'm a bit odd when they see my Firefox sessions.
> For me windows are tasks/areas of research. A task can last minutes, hours, days or even weeks.
I really like this perspective, because it's deeply reflective of my usage patterns. For example, when I was learning Golang, I had a session opened exclusively for that, with all the respective documentation, some blogs, tutorials, and various odds and ends. Over time it evolved, certainly, but those tabs remained open for the duration that I needed them. It doesn't matter if it's a day, two days, or two months.
I have one Firefox profile configured to use Session Manager with various saved sessions based on whatever I'm working on. So, whenever I need a major context switch, I save the current session, then switch to the one I need. It allows me to pick up where I left off--even if there's 300+ tabs in that particular list. Though, I'm pretty bad about clearing some of those tabs out--oftentimes it's because I want to revisit whatever I was researching, even if it takes several months!
It sounds like you and I do about the same thing in that regard! Although, I've yet to learn the bookmarking technique. ;) Presently, for my generic browsing, I just dump everything into a dated folder. Everything else of interest that I don't want to inadvertently lose gets shunted into my unsorted bookmarks without a date under the pretense that I'll eventually categorize it (which I usually do). I think I'll have to try the categorization for tasks that are complete rather than simply bookmarking tabs individually and massacring the rest.
> Tabs, however, are constantly groomed and sifted through and pruned.
Exactly this (apologies for the 2-day delay!). This is exactly why I keep so many tabs open, because I can filter them and use them as a living browsing session. It also provides me with some amusement when I look through tabs opened weeks (sometimes months?) prior, because it gives me a snapshot into what I was thinking of/looking for during that window of history.
I'm really glad to find others who share my usage patterns. I don't feel so weird. :D
Aside: Just a week or two ago, I archived a bunch of tabs from a generic browsing session and I think the total came close to 980. Eeeek!
Firefox also has the very useful "Restore Previous Session" option under the "History" menu - perfect for when you close your browser by accident (in case of a crash Firefox reopens last tabs by default anyway).
The tabs are also "soft-loaded" meaning that you have to click on them to load the content, so you can find the offending tab and kill it to prevent crash from happening again.
I use "restore previous session" pretty extensively, personally, especially if I have multiple windows open. That said, I've very rarely had bad luck with it in prior versions of Firefox, occasionally discovering that it's trashed my sessions. Thus, even today, I still keep Session Manager installed as an insurance policy. (Though, I still backup ~/.mozilla pretty regularly too...)
> The tabs are also "soft-loaded" meaning that you have to click on them to load the content
I couldn't be happier since Mozilla switched to this mechanism. It means reloading a massive browsing instance isn't quite as resource intensive. Last I checked Chrome/Chromium, it STILL doesn't do this, which is infuriating and part of the reason I dumped using Chromium as my primary general browser.
There's nothing worse than when as few as 100 tabs can eat up ~2-3GiB RAM right after you start the browser...
Gotta admit. I've been using Session Manager [1] for years. It's a bit clunky, but it works pretty well once you figure out the UI for saving tab sets. It appears the primary complaints also revolve around its poor UI, but hey, I'm used to it. ;)
It looked very intriguing, and I installed it, but the reviews for the Firefox version are really terrible. Just a heads-up in case someone else installs the addon without catching the bad reviews.
That's pretty much why I made this! I've been wanting to build live-editable objects in VR, but I was severely dissatisfied with the options available.
Yes, this. I want to be able to map the amazing flexibility I had in SecondLife onto the physical world. I want to see programmable or virtual objects in a "layer" over the physical world and be able to open them up and edit the code, just as I was able to do in SecondLife. The creative possibilities are vast.
WHA! I don't know what that means. I'm guessing an adblocker issue? I'll see what I can do. Sorry. This is kind of the first time anything I've made has gotten so much attention.
Just from the description, this nails something I've been wanting to see for awhile. Having text editor "primitives" (panes, windows, tabs) be actual geometric primitives is a fantastic start to next-gen editors. More power to you!
Thanks. I definitely want to implement more of those parts as I go. I just got a bug in me about making this one earlier this month and started pushing it through. Now that I have this, my next project will probably be a small, AI-programming game, ala CRobots or something.
I'm sorry I didn't know about your project a month ago. I'd be very curious to hear how you render the text. Are you still using Ace for the rendering, or is it just for the editing and tokenizing?
In my very first proof-of-concept, I used hidden DOM elements to do the layout and styling of the text, and then queried the element properties to figure out the draw operations I needed to do. It was very slow and really unreliable, though, and I had a lot of cross-browser issues. Ultimately, redoing everything from scratch was actually easier than trying to hack around DOM.
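The nice thing about starting from scratch with a monospaced grid is that layout becomes pure arithmetic; something like this sketch (the token shape and style names here are made up for illustration, not Primrose's actual internals) replaces all of the DOM queries:

```javascript
// With a monospaced font, each token's position is just
// (column * glyphWidth, row * lineHeight). Turn tokenized lines
// into a flat list of canvas draw operations.
function layoutTokens(lines, glyphW, lineH) {
  const ops = [];
  lines.forEach((tokens, row) => {
    let col = 0;
    for (const t of tokens) {
      ops.push({ text: t.text, style: t.style,
                 x: col * glyphW, y: row * lineH });
      col += t.text.length; // advance by glyph count, not pixels
    }
  });
  return ops;
}

const ops = layoutTokens([
  [{ text: "var", style: "keyword" }, { text: " x;", style: "plain" }],
], 8, 14);
console.log(ops[1]); // { text: ' x;', style: 'plain', x: 24, y: 0 }
```

Each op can then be fed straight to `fillText` with the color for its style, with no layout engine in the loop.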
I've been seriously considering implementing a soft keyboard for mobile devices as well. It's just so hard to get the ones in the wild to interact with text in any common way, and on iOS it's nearly impossible to get everything resized correctly to make the most of the screen real estate.
The motivation behind CodeChisel3D is to reuse everything that Ace provides: text modes, syntax highlighting, completion, and so on. So, yes, my approach is to create a hidden Ace editor in the DOM and then hook into Ace's rendering mechanism to render it to a texture that can be mapped into WebGL and three.js scenes. The main reason for that is to support live WebVR coding. I'm currently working on another project, but I will get back to CodeChisel3D and improve it. You seem to have come quite far with Primrose and might not be interested, but the actual editor integration is here: https://github.com/rksm/three-codeeditor. Feel free to use it.
#1 is interesting; I've actually had the opposite problem: not being able to get things to work in fullscreen on OS X. My wife has a MacBook, so I will try to test there (if she will let me).
#2 and #3 are probably the same issue, expressed in two different ways, because the red dot is placed completely independently from the caret drawn in the texture. Do you think you'd be able to figure out whether it's OS- or display-related? I.e. do you have a retina display? I should be collecting analytics on screen and pixel ratio metrics, sigh.
#4 was a punt, I just wanted to get the job out the door. I did want to dim the editor when it wasn't selected, and provide a key combo to deselect, but I ran out of time on my self-inflicted deadline. It is definitely in the pipeline, however.
#5: did you notice that SHIFT+mouse movement panned the view? Or are you not satisfied with that system? I admit, I'm not wild about it myself, but it seemed like the least evil of all of the options, considering the interactions that I also had to do with the editor.
This is actually how Desktop Window Manager (aka "Aero") on Windows and Quartz Compositor on OS X work. They're just set up with an orthographic projection, and they don't rotate the windows at all.
That was actually a bit of a surprise (though I admit it shouldn't have been) with Primrose. If I pick dimensions for the editor that fit the window's aspect ratio, and don't rotate the view at all, then the editor pane can fill the browser window and it looks very natural.
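The math behind that is just the orthographic transform, which linearly rescales screen coordinates into clip space with no perspective divide; a quick sketch of the x/y part:

```javascript
// Orthographic projection of x/y into normalized device coordinates:
// a pure linear remap of [left,right]x[bottom,top] onto [-1,1]^2.
// No perspective divide, so an unrotated quad renders exactly like
// a flat desktop window blit.
function orthoProject(x, y, left, right, bottom, top) {
  return [
    (2 * (x - left)) / (right - left) - 1,
    (2 * (y - bottom)) / (top - bottom) - 1,
  ];
}

// Pixel coordinates map straight through with no foreshortening.
console.log(orthoProject(0, 0, 0, 800, 0, 600));     // [ -1, -1 ]
console.log(orthoProject(800, 600, 0, 800, 0, 600)); // [ 1, 1 ]
```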
I am not positive, as I don't have the spec lying in front of me and it's been a long time since I looked, but I'm pretty sure the HTML5 Canvas API is not hardware accelerated at this time. That is how I do the actual rendering of the text, using the Canvas Text API.
There is another project called fontpath[1] that can read TTF and OTF font files, and another project from the same developer called gl-sprite-text[2] that can render them as bitmaps to textures. I've been considering converting my rendering over to using it. The process would at least be good practice in decoupling the code a bit more, as I think the tokenizer and keyboard system would be useful in other projects as well.
The keyboard code pages aren't yet complete for everyone in the world right now, but I think they go a long way towards making good keyboard interactions possible in JS once and for all. And by completely implementing my own end-around of the browser's own handling, sticking to more of the older primitives, I'm actually able to make the result more cross-browser compatible. I don't know if anyone noticed, but this runs perfectly fine in IE, and I didn't really do anything special to make it so. And if you just run the Canvas on its own, without the WebGL part, it actually runs on some relatively old browsers without any trouble.
This is super, super cool. Feedback: I keep losing the mouse pointer. The red dot seems to sometimes show up, but it's pretty hard to track. Chrome latest on Ubuntu 14.
Looking at this 'virtual' text editor the text looks pretty smooth, until you get really close. Would it be possible to import typefaces as paths, and antialias them?
How do native apps do this, while also using hardware acceleration? Also, native apps can access sub-pixels for antialiasing, which I don't think is possible in a WebGL context.
I've seen OpenGL GUIs before, but I wonder what limits them compared to native apps, particularly when it comes to text and sub-pixel antialiasing.
Actually, there is another open source project in the world that does that. It's called fontpath, I believe, and it reads OTF files directly. I'm considering switching to it, as the HTML5 Text API isn't hardware accelerated and a huge amount of the processing time is being spent in the fillText method.
This is so cool. Very interesting as an idea, but hard to explain without seeing it; I almost didn't click.
Side note: I tried to load this on Chrome for Android and it went very badly. I got redirected through two of those bright red "UNSAFE YOU SHALL NOT PASS" https pages and then finally got to a page that didn't do much at all. No idea why, worked fine in Chrome on OS X.
There are no overt technical reasons it won't run on Android. At one point, I did have it running in Chrome and Firefox on Android, but I had a lot of difficulty getting it to run without regularly crashing the browser and bombing out to the home screen. I don't know if that is an Android issue or a Galaxy Note 3 issue, but it was pretty reliably reproducible in both browsers with just a "load this page, now refresh the page".
I also somehow broke the touch-screen controls at the last minute. But I was really keen to get this project out by the end of the month, so I cut features to make my self-imposed deadline. I just shut them off to avoid procrastinating on the release by focusing on too many UI issues, or trying to avoid crashing people's browsers when I haven't the slightest clue whether there is anything I can do about it.
Most of my career has been working in consulting, mostly for the type of consultoware companies that make all the boring, MS-tech, CRUD projects. I've been freelancing for the last three years, but even that isn't ideal. I've always wanted to have my own projects, building the type of products I've always wanted.
Over the years, I've had a lot of personal trouble with procrastination and completing projects: 10 years of starting and stopping the same ideas, becoming dissatisfied with my work, and eventually growing bored of it and letting it rot on a forgotten hard drive somewhere. I've always known it was an avoidance tactic. By focusing only on the core technical issues, I'd never have to address the tertiary tech and management issues common to releasing any project. So that was another motivation behind this project: just get something together and get it done, out, and in front of people.
I couldn't be happier with the results. Yes, it's missing features. Yes, the server is kind of janky, since I'm not a sysadmin and only learned how to turn on SSL a few months ago (though I think it's actually my host's fault for not configuring some of my certificates properly). But I think it's pretty good for a month's worth of work.
Thanks for the very detailed response, I hope you didn't take my comment to be negative! This is an awesome project, it's just kind of a tradition on Hacker News to point out bugs (even small ones on great things) because this is a technical audience.
And then I hit Backspace while not inside the editor pane I'd moved the camera in front of, and was transferred back to HN via the browser's back feature. Not sure if that was meant as an 'enough played, back to reality' nudge from Mr. Firefox...
This is really cool, and it's the direction my project (with a large scope) will eventually go in. I already have a text editing component and highlighting working using OpenGL, but it needs updating to not use immediate mode so it can run in WebGL too.
Thanks. You can see it at https://github.com/shurcooL/Conception-go#screenshot, but it's less than 10% finished at this point. Everything you see is rendered in OpenGL. The text editing component has syntax and diff highlighting support.
It's written in Go, but I'm planning to have it running in the browser by compiling Go to JS and using OpenGL bindings with two backends: WebGL and OpenGL.
In terms of usability, it's nothing more than a complicated tech demo atm; I need to figure out its future direction.
I thought I recognized your user name. I had looked at Conception a little while ago. It's a neat project and I'm looking forward to seeing more of what you do with it.
Yeah, I have written everything so far in plain JavaScript. I've been considering something else, but all of the available options seem a tad... monolithic.
If I'm going to not-javascript, then I think the answer is to go to something that is completely type safe and static, instead. I'd really like to be able to write-once-and-run-desktop-and-browser. I used to do C++, and now there is Emscripten, but I'd really rather not get back into C++ again. Go looked interesting, and there seems to already be a fair amount of work towards transpiling to JS. Rust looks particularly interesting, especially with the lack of an implicit garbage collector, but from the little I've read, it seems Emscripten doesn't work that well with it.
> If I'm going to not-javascript, then I think the answer is to go to something that is completely type safe and static, instead. I'd really like to be able to write-once-and-run-desktop-and-browser. I used to do C++, and now there is Emscripten, but I'd really rather not get back into C++ again.
I agree completely about type safety, static typing, and not wanting to get back to the mess of C++ (header files, makefiles, ifdefs, complexity, etc.).
Which is why I think Go fits the bill so well.
> Go looked interesting, and there seems to already be a fair amount of work towards transpiling to JS.
There is indeed. I've already done quite a lot with it, but with smaller side projects for now. That's where I gained the confidence, and now I'm working towards applying it to my main projects.
I have been looking for a text editor implementation in WebGL or canvas, and this is cool. But still, this editor can't accept Asian languages, which require an input method editor (IME).
Hi, thanks. Yes, I know it's not nice to people who need such things, and I really would like to support them soon, but I was only able to get just so much done in a month.
Which keyboard are you using? It's impossible to accurately detect keyboard layout in JavaScript, but I do provide an option to select keyboards. Unfortunately, the only ones I've had time to implement so far are US QWERTY, UK Extended QWERTY, FR AZERTY, and DEU QWERTZ.
There is a small bug that I haven't fixed yet that requires you to look away from the editor to make the drop-down lists on the instructions page work.
As for other languages, I have a tool at https://primroseeditor.com/keyboard_test.html that can put together the code pages that I use for this. It supports dead keys for typing umlauts, but it can be a little fiddly to use. I've been using the Windows on-screen keyboard to make them, so I don't know how well it maps to reality.
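Roughly, a selectable code page can be sketched like this. The key point is that `KeyboardEvent.code` reports the physical key (e.g. "KeyQ"), not the character it produces under the user's layout, so the editor has to translate codes itself. The tables and the `translate` function here are purely illustrative, not Primrose's actual API or data:

```javascript
// Illustrative code pages mapping physical key codes to characters.
// On a French AZERTY layout, the key at position "KeyQ" produces "a".
const US_QWERTY = { KeyQ: "q", KeyW: "w", KeyA: "a" };
const FR_AZERTY = { KeyQ: "a", KeyW: "z", KeyA: "q" };

// Translate a physical key code through the selected code page,
// falling back to the browser-reported character for unmapped keys.
function translate(codePage, eventCode, fallback) {
  return codePage[eventCode] !== undefined ? codePage[eventCode] : fallback;
}

console.log(translate(FR_AZERTY, "KeyQ", "q")); // "a"
console.log(translate(FR_AZERTY, "KeyZ", "w")); // "w" (unmapped, fallback)
```

This is also why automatic detection fails: until the user types, the browser gives no reliable signal about which code page applies, hence the manual selector.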
Unless you mean using alt codes to enter Unicode characters directly; no, I don't have that feature yet. To the list!
Hi, sorry I missed you earlier. Would you be able to provide more details as to the nature of your difficulties? Does the JavaScript console show any errors? What are your browser version and OS? Thanks.
I haven't yet figured out how to get a Node.js process to start back up after the server has rebooted, and it looks like the traffic has killed it a couple of times.
No, that's not the issue. I actually am using forever. I'm not entirely sure what the problem was, but I think the entire VPS was falling over, so when it comes back there is nothing to kick off forever. It's fixed now, though, by letting Apache serve the static files. I'm not a sysadmin and I've never had anywhere near this much traffic before.
Ah, perhaps setting up an @reboot cron job to kick off the forever process would do the trick.
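For example, a crontab entry along these lines (edit with `crontab -e`; the binary and script paths here are placeholders for your install):

```shell
# Run once at boot: relaunch the Node server under forever.
@reboot /usr/local/bin/forever start /home/deploy/primrose/server.js
```

cron runs `@reboot` jobs once when the daemon starts, so the app comes back even after the whole VPS falls over and restarts.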
Also, setting up a CDN in front of your static files is actually much easier than it sounds at first. You can just set up a CloudFront distribution that uses your domain as its origin; that should take off a lot of the load.