Hacker News | spidaman's comments

This gets an upvote merely for citing Frank Zappa in any way


Does the presence of this post indicate that VRML is going to be relevant for ...anything... ever again?

I have an old t-shirt around somewhere from a VRML event in the mid-'90s (back when I was tinkering on making dumb little scenes with a cracked copy of AutoCAD)... yay, it's relevant again :)

Personally, I've felt for a long time that as video cards and GPUs have made rendering a buzzillion polygons per second tenable, operating system developers should rethink their attachment to the two-dimensional desktop metaphor that's been the interface for over three decades now. Whenever I suggest this, people tend to knee-jerk about how silly the Jurassic Park scene is with the SGI filesystem navigator (ya, the "It's a UNIX system! I know this" scene). Yep, that was the extent of our imagination working within the constraints thirty years ago, but I do believe we can and should do better. I have little confidence, though, in the capacity of Meta or Microsoft to drive that kind of innovation; the creativity and incentives within those organizations will thwart any breakthroughs.


I feel there are a lot of compelling UI opportunities for non-gaming uses without headsets, eye tracking, holographic hardware, etc. Back when VRML was a thing, navigating a 3D space with a mouse was painful. However, with the graphics rendering, touch displays and gestures that have commodity availability now, I feel like we can break out of the icons/desktop/menus paradigm for interfacing with applications and data. In fact, I question whether anything of great utility can come of the fancy new AR, holograms, etc. if we can't even think outside the old 2D box using the graphics rendering, touch displays and gestures that are already cheap and ubiquitous right now.

The dismissal of 3D as an interface for applications and data is emblematic of the absence of creativity cited in my original comment. Yes, we're accustomed to navigating folder hierarchies and invoking discrete functionality, but outside of that paradigm data can be organized and retrieved more effectively. IOW, when presented with steam and internal combustion capabilities, if all you still want is a faster horse, that's a lapse of the imagination.


Personally, I don't see VR/AR/holograms as an automatic improvement over 2D screens when it comes to UIs. Those technologies' primary contribution is making things convincingly 3D and pretty, but non-gaming UIs don't (primarily) exist to be 3D and pretty; they exist to present the actions you can take on a system, and to display the effects of those actions and the overall current state of the system. When you put it like that, it's much less clear why making things 3D is necessarily an improvement.

There are many exceptions. I remember seeing Iron Man and being blown away by the sheer ergonomics of Stark's hologram interfaces. That's because design and engineering of the kind that Stark does are inherently tactile: he grabs holographic models of gadgets in his hand and rotates them as if they were really there. That's insanely powerful. Any kind of design, exploratory work, or education where tactile, haptic and even thermo-pressure interfaces are superior to traditional interfaces would benefit vastly from AR and the associated technologies.

Another case is practical crafts where looking back and forth between the work and a screen is impossible or infeasible. Imagine building a circuit while wearing glasses or contact lenses showing a HUD where an annotated schematic of the circuit is displayed, or repairing a car by following a graphical step-by-step tutorial (the kind shown in games) overlaid on top of the car: relevant parts of the car glow to get your attention, and text appears at exactly the right time to remind you of each step. This is a whole new way of encoding knowledge. YouTube is already a small revolution in the instruction of practical crafts and cooking; imagine how much more procedural, muscle-memory knowledge we could encode with this.

The two examples above are just to point out that I'm immensely excited for cheap and ubiquitous VR and AR, and I have no shortage of fantasies about them. I'm also not saying that making boring UIs with VR and AR wouldn't make them more entertaining; maybe making tweets float around you really would make Twitter users more amused and engaged (a bad thing), although it adds no real functional value. All I'm saying is that last claim: most current UIs wouldn't benefit substantially from VR and AR except for entertainment value. They wouldn't make anything easier or more efficient for the user. What's Excel with 3D tables? A slightly more confusing Excel.

The real general revolution in UIs and interfaces is neural interfaces. Even the most basic and primitive neural interface, essentially just a vim-like system for composing a few basic mental gestures into re-bindable commands, would be a massive productivity boost. Every time you move the mouse or press a key, your thoughts start and end before your hands have done anything; imagine the raw speed if your thoughts alone were driving the computer.
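The "vim-like" composition idea can be sketched in ordinary software terms: a small set of primitive gestures, sequences of which are re-bindable to commands. Everything here is hypothetical, of course; the gesture names and the `GestureMap` class are invented purely to illustrate the composition-and-rebinding model.

```python
# A toy sketch of composing primitive "mental gestures" into
# re-bindable commands, vim-style. All names are hypothetical.
GESTURES = {"push", "pull", "flick", "hold"}


class GestureMap:
    def __init__(self):
        # maps a tuple of gestures to a command name
        self.bindings = {}

    def bind(self, sequence, command):
        for g in sequence:
            if g not in GESTURES:
                raise ValueError(f"unknown gesture: {g}")
        self.bindings[tuple(sequence)] = command

    def dispatch(self, sequence):
        # look up a completed gesture sequence; unbound sequences are no-ops
        return self.bindings.get(tuple(sequence), "<unbound>")


gm = GestureMap()
gm.bind(["push", "flick"], "open-file")
gm.bind(["hold", "pull"], "undo")
print(gm.dispatch(["push", "flick"]))  # open-file
print(gm.dispatch(["flick"]))          # <unbound>
```

The point of the vim analogy is exactly this indirection: a handful of primitives plus user-defined bindings yields a large command space without requiring the interface to recognize many distinct raw signals.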

Forget graphical tutorials; neural interfaces would conceivably allow us to download mental models and physical skills, Matrix-style. I don't think it would be easy and I'm not even sure it's possible, but boy oh boy, is that a fun thought. It would obsolete VR and AR entirely, because you could just reach into people's brains and plant images, audio, haptic sensations and emotions at will, bypassing the senses and the body completely, and possibly inventing new senses. (E.g. zap your brain with a certain pattern of electricity representing the earth's magnetic field or stock market dynamics; after a while it will fade into the unconscious and you will have a constant gut feeling for the system that the pattern represents.)

Low level access to neurons will be a gateway to wonders and horrors beyond our imagination.


I disagree that there is no practical benefit when you add the creative elements of 3D to the display of traditional 2D content. Reading a line of text is fundamentally the same in whatever format you consume it, but there are serious interactions with that line of text which only become possible in an animated free space. Consider learning to read a foreign language, where HMD eye tracking is used to infer difficulty with a certain word and triggers additional supporting materials. There are thousands of examples yet to be explored, and the impact on well-established existing 2D information systems will be dramatic. Implementing 3D layers will offer a double win: increased functional utility combined with a nicer, more human-fitting and artfully expressed interface.


You are making points about potential creative applications. BTW, I thoroughly disagree on your language example; people in computer-assisted learning make this mistake constantly. We don't need better visualization to learn foreign languages; it is really all about practice, which unfortunately for you has nothing to do with your scenario. "Triggering additional supporting materials" has nothing to do with the user actually practicing; at best you have an overly complex hyper-micro-optimization that will bombard the user with more unhelpful material...

The issue in the parent comment had to do with user interfaces, e.g. for an OS. I think the concept is doomed in the general sense: people live in houses, which are mostly reduced to a set of 2D interfaces. Having houses in your house is not helpful; it's literally just more confusing.

This is the general issue: if the assumption is that 3D is better than 2D, then we should aspire to do everything in 4D, 5D, etc. Apart from obvious physical limitations, there is a good reason we don't do this. Our computer systems already are n-dimensional; dimensions are useful for storing complexity. We crave simplicity, though, which is why 2D is so popular: we reduce complex n-dimensional models to two-dimensional ones.

As programmers, we even reduce it to a single, textual dimension; being able to follow a single thread is often all we can easily reason about. Many, many people prefer reading or listening to audio over watching moving pictures. TV shows can be nice to veg out to, but they are much harder and more complex to dig into and really engage with.

That's why there is no good use case for a 3D OS shell: for the majority of people it doesn't provide enough visualization value for the added complexity. To a systems engineer there could be some value in viewing OS components as parts of a car engine, perhaps; indeed, a lot of useful tooling seeks to visualize this type of thing as much as possible. But your average Joe just needs email and maybe pictures and video, and sticking them in a 3D environment just makes them more difficult to use.


You may have been interested in Meta glasses back in the day (1). I had high hopes for them, especially the hand tracking and being able to manipulate virtual objects. Alas, it wasn't to be...

1) https://www.youtube.com/watch?v=b7I7JuQXttw


Sounds like GP would be more into the OpenBCI project Galea in partnership with Valve and Tobii [1]

[1] https://www.roadtovr.com/valve-openbci-immersive-vr-games/


Fastly | Senior SRE - Edge Cloud | London, Madrid, Stockholm or EU Remote | https://www.fastly.com/

Fastly is building up the team that owns managing change on its worldwide bare-metal infrastructure. We are operationally minded software developers programming in Go and Ruby to orchestrate application software updates and configuration changes as fast as is safely possible, and to build automation around our infrastructure lifecycle. The SRE team partners with application developers and operations engineers around the world, and we need to grow the team in the UK or Europe. Fastly has offices in London, Madrid and Stockholm, but our team also welcomes applicants from throughout the UK, Sweden, Spain and the Netherlands who are experienced with remote work from a home office.

If you are looking to join a rapidly growing business with exciting technology and wonderful, diverse people, hit me up and apply here https://www.fastly.com/about/jobs/apply?gh_jid=1523951


Bitnami | San Francisco or West Coast REMOTE | Senior SRE | Full Time

Bitnami is on a mission to bring awesome software to everyone. Building and configuring software stacks can drag down an organization’s time to market with their applications, but we make it super easy for anyone to run software in the cloud.

I’m hiring Senior Site Reliability Engineers to join Bitnami who will help us build the next generation of cloud infrastructure and will have an impact on our products. We work with all of the major cloud service providers, with code written in Ruby, Go and JavaScript, as well as operational tooling that leverages AWS, Ansible, Rundeck, Packer, Git, Icinga2, CloudWatch, Monit, Vagrant, Docker, Kubernetes, Jenkins, ELK and other state-of-the-art enabling technologies. We’re focused on tools and event-driven infrastructure that bring automation and autonomic computing to systems that smoothly scale up and out for production use cases as well as scaling down for development.

Bitnami’s SRE team is distributed around the globe, with timezones chosen to optimize for hand-offs and a humane on-call rotation. We offer competitive compensation, flexible time off and other benefits. We have regular outings in Spain (travel provided by the company) and other fun activities together.

More information about the position is available here https://jobs.lever.co/bitnami/29d26810-26b2-4af2-a014-7f720d... and you can feel free to reach out to me with any questions. Principals only, please


I'm surprised at all of the responses calling for the CTO to be fired. It's very common to have founders who are very good at the very early stage but lack the experience to scale the technology, the team, the culture and the business.

Ask yourself these questions:

* Are the CEO co-founder and CTO in cahoots, engaged in a malicious equity grab? If so, you chose partners poorly; move on post-funding. If not, then there's probably something important to listen to here. Then ask:

* Are your technical and project-execution chops going to take the company to the next level of technical, organizational and business scale? If so, you chose a CTO poorly and your CEO co-founder is a fool; move on post-funding. If not, then you have another choice:

* Are there other ways you can help the technology, organization and business grow? If so, discuss that transition instead of an exit. Otherwise, be grateful for the lessons learned and move on post-funding.

In all of the "move on" cases, make sure your equity position is aligned with your contribution to where the company will be when it's ultimately profitable or liquid. If it's still very early stage, that proportion may be very small, but it is better to have a small bit of something successful than a large portion of a failed company.

Set aside ego, consult an attorney (as advised elsewhere), don't engage in scorched earth, and figure out whether these are people you want to continue working with, whether you can contribute to getting the company to the next level, and if so, in what role.


I use MySQL plenty and there's a lot I like about it. But I honestly don't understand any of the MySQL-related comments on this post. Even if Oracle didn't own MySQL, a migration from Oracle to MySQL is significantly more difficult than one from Oracle to PostgreSQL. The poor join performance of MySQL makes it doubtful that it could have ever been a serious discussion at Salesforce (to say nothing of the merits of MVCC and PostgreSQL's license).


  > to say nothing of the merits of mvcc
I want to know more about the merits of MVCC and why they don't apply to MySQL.


Because it doesn't do MVCC, it locks instead.


False. InnoDB does MVCC by default.
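The MVCC behavior in question is easy to demonstrate: a reader inside an open transaction keeps seeing a consistent snapshot even while a concurrent writer commits changes, with no locking between them. The sketch below uses SQLite in WAL mode as a stand-in (not MySQL; InnoDB implements this with undo logs under REPEATABLE READ, but the observable snapshot-read behavior is the same), so it can run self-contained.

```python
# Snapshot reads under concurrent writes: the reader's open transaction
# pins a snapshot; the writer commits without blocking or being blocked.
import os
import sqlite3
import tempfile

path = os.path.join(tempfile.mkdtemp(), "mvcc_demo.db")

writer = sqlite3.connect(path, isolation_level=None)  # autocommit mode
writer.execute("PRAGMA journal_mode=WAL")
writer.execute("CREATE TABLE accounts (id INTEGER PRIMARY KEY, balance INTEGER)")
writer.execute("INSERT INTO accounts VALUES (1, 100)")

reader = sqlite3.connect(path, isolation_level=None)
reader.execute("BEGIN")  # first SELECT below pins this transaction's snapshot
before = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]

# Writer commits a change while the reader's transaction is still open.
writer.execute("UPDATE accounts SET balance = 50 WHERE id = 1")

# Reader still sees its snapshot -- no locks were taken by either side.
during = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
reader.execute("COMMIT")

# Once the reader's transaction ends, the committed value is visible.
after = reader.execute("SELECT balance FROM accounts WHERE id = 1").fetchone()[0]
print(before, during, after)  # 100 100 50
```

Under a lock-based scheme the reader would either block the writer or see the new value mid-transaction; the "100 100 50" pattern is the signature of multi-version reads.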


Salesforce spends a huge amount of money on licensing their horizontally scaled Oracle databases. I'd venture to guess this signals Benioff exploring a strategic bet to tell Ellison to fuck off once and for all.


Actually hiring 50 people sounds more like a decisive plan than "exploring a strategic bet".


The posting says they're hiring only 5 now and 40-50 next year; I inferred they're still in an exploratory phase because if they were all-in, they'd be hiring more than 5 now. The posting doesn't say they're doing a migration, but having worked on Oracle-to-PostgreSQL migrations and sharded database infrastructures, it makes sense.


Of course nothing is ever final, but publicly announcing something like this is already sort of a "fuck you" to Oracle, so I'd be surprised if they weren't 100% sure of their plans to switch.


Though the plan might be to negotiate cheaper Oracle licenses...


Oracle won't negotiate with Salesforce -- there is too much personal history between the CEOs.


Or they're after more leverage in their dealing with Oracle.


Having worked at Salesforce, I can say they take _forever_ to move on anything. When I left the company two years ago, they were just starting to talk about PostgreSQL as an option to replace Oracle...


Nice. First thought evoked: "Scenes From A Night's Dream Lyrics" by Genesis


As a user of The WELL since the 1990s, I'm glad to hear that the community that owns its own words will now own its own destiny.


> why no one has made a Couch to 5k app yet

I've been using this; it's a little buggy, but it definitely helped me acquire my running habit: http://itunes.apple.com/us/app/ease-into-5k/id301233668?mt=8

