idanb's comments

Dream is a real-time collaboration and communication application that lets you meet up with other people in a virtual space. You can bring up content by way of an integrated Chromium browser that can be shared in the virtual space - you can even open up services like Skype or Zoom, which will spin up a virtual webcam that can be used to connect with people not in VR.

We chose a deliberately delightful aesthetic vs a more sterile corporate one - so the comparison to Nintendo is I think a good one, but we are definitely not a game experience.

I'd recommend checking out the video in the blog post (linked above) - it was shot in Dream, and shows my co-founder and me talking through some of the new additions in this version 1.0 release.

Hope that helps clarify a bit, and if you have other questions or any feedback would definitely love to hear it!


Early on we actually built out support for Leap Motion in Dream, and this was super cool because of the networking stack we built - we were able to send all 20 points per hand in real time at 90 FPS at low latencies. It was really an amazing experience, but there were a lot of issues we simply couldn't overcome - like wrist occlusion, where your hands would suddenly fly off into the distance, or your hands not doing what you intended due to incorrect data from the sensors. As a product-minded company, we had to make the hard decision to put this on hold - at the end of the day, users don't care whose fault a bad experience is, they just uninstall your app and never come back.
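To put rough numbers on what streaming that tracking data involves, here's a back-of-envelope sketch (the packet layout - uncompressed 32-bit float triples per point - is my assumption, not Dream's actual wire format):

```python
# Back-of-envelope data rate for streaming hand-tracking points,
# assuming each point is an uncompressed (x, y, z) of 32-bit floats.
HANDS = 2
POINTS_PER_HAND = 20
FLOATS_PER_POINT = 3          # x, y, z
BYTES_PER_FLOAT = 4
UPDATES_PER_SECOND = 90       # matched to the HMD refresh rate

bytes_per_update = HANDS * POINTS_PER_HAND * FLOATS_PER_POINT * BYTES_PER_FLOAT
bits_per_second = bytes_per_update * 8 * UPDATES_PER_SECOND

print(bytes_per_update)        # 480 bytes per update
print(bits_per_second / 1000)  # 345.6 kbit/s per participant
```

Even uncompressed, that's a few hundred kbit/s per participant - the hard part is less raw bandwidth than keeping the latency and jitter low at 90 Hz.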

Really excited for new HW and capabilities to become available commercially. We built out our keyboard to be effective without anything but what's currently available (6DOF HMD with 6DOF controllers), and we'll continue to expand support for commercially available capabilities. Maybe it's an unorthodox perspective, but we really only want to ship and represent capabilities that any user can attain easily - and not tease things that are soon to (but may not ever) come.


Thanks for the question!

It sounds like you have easy access to VR HMDs, so if you get a chance to plug into an Oculus rift any time soon I recommend you try it out - and would definitely love your hands on feedback.

We've done a fair bit of jiggering to make sure a majority of web based content is both legible and usable - and our UI has been built to try to be as intuitive as possible, and eliminate a lot of the bumbles that we too associate with many VR experiences.

It's a free download, and you don't have to create an account if you don't want to - once you download, you'll be presented with an account sign up / log in form where the keyboard can be used and messed with a bit. We also use Chromium for our entire login / account creation flow in VR, so you can get a taste of what that feels like as well. If you want to try something like Trello out, just create a throwaway account and never verify the email - then you can pull up a website like Trello or NYT and assess the usability and legibility.

I think that if you're coming from a place of comparing this to existing 2D based collab tools like Skype / Zoom etc you'll have a hard time seeing the benefit, but if instead you try to look at how those tools are insufficient compared to a real-life meeting you might see where we fit. Our goal is not to replace 2D based methods, but to allow for a level of presence previously only possible with in-person interactions. This shines in particular in situations where you're meeting with three or more people along with content at the core of the meeting.

Hope you get a chance to try it, and would love to hear what you think and how we can make it better!


Thanks for the reply. Incidentally I've also worked on a bunch of problems you guys must have had related to the UX in VR. For example I've also implemented a bunch of virtual keyboards ;-)

Are you using straight CEF, or have you improved the compositor to composite directly into a texture? IIRC CEF only provides the composited web page as a bitmap, so then you have to do repeated texture uploads, which is going to be a drag.

Does this support VIVE too or only Oculus?


Awesome to hear that! Would love to check out your work if you have a link / video or anything like that!

We actually forked CEF - and had to make a few changes to allow for integration in the way that we needed. We do use OSR mode, and update a texture in that way - although we need this buffer anyways, since we're sending video frames across the peer to peer mesh - so even if we did go straight GPU, we would still have to download the buffer from the texture.

It's a drag, but there are a number of techniques to improve performance. Resolution is one great approach: the resolution of the HMD makes a very high browser resolution kind of useless, so reducing it also reduces the pressure on the GPU. We can also limit frame rate based on the kind of content being shown, and we can leverage dirty rects to avoid re-uploading content that isn't changing. Since we're running multiple browser tabs, the dirty-rect technique isn't as useful for any particular page, but it helps when a user is doing multiple things, like watching a video on the shared screen while scrolling through Wikipedia or a news site like NYT.
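As an illustration of the dirty-rect idea, here's a minimal sketch - the rect layout (x, y, w, h) and the 4-bytes-per-pixel format are assumptions for illustration, not CEF's actual OnPaint API:

```python
# Instead of re-uploading the full browser bitmap each frame, upload
# only the bounding box of the regions reported as changed.

def bounding_box(dirty_rects):
    """Merge a list of (x, y, w, h) rects into one bounding rect."""
    x0 = min(r[0] for r in dirty_rects)
    y0 = min(r[1] for r in dirty_rects)
    x1 = max(r[0] + r[2] for r in dirty_rects)
    y1 = max(r[1] + r[3] for r in dirty_rects)
    return (x0, y0, x1 - x0, y1 - y0)

def upload_bytes(rect, bytes_per_pixel=4):
    """Bytes to transfer for one texture sub-region update."""
    _, _, w, h = rect
    return w * h * bytes_per_pixel

# A mostly static page: a blinking caret plus a small animated region.
dirty = [(100, 200, 8, 16), (300, 50, 120, 90)]
partial = upload_bytes(bounding_box(dirty))
full = upload_bytes((0, 0, 1280, 720))
print(partial, full)  # 212480 vs 3686400 bytes - a fraction of a full frame
```

A real implementation would upload each dirty rect separately (or a smarter partition) rather than one bounding box, but the bandwidth win is the same idea.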

Up until we consolidated the build for Oculus release, we supported OpenVR and still do in our code - just not the Oculus build. We've gotten a lot of interest in the Vive build in this initial release, so might look to reintroduce that. Before pushing to Oculus, Dream would just launch off the desktop and detect which HMD you had plugged in and then launch the appropriate platform. Shouldn't be a ton of work to bring it to Steam!


I think the application of VR for finance applications like what you describe is definitely a really exciting use case that will become more realistic as the headsets improve in terms of comfort and resolution - per your comments. We actually explored some other similar use cases, for example customer service - where a VR HMD could in effect replace the 2-3 monitor / desktop + headset set up that customer service reps drive to call centers to use every day.

However, these "full day" applications are definitely a few years out. We've seen Dream sessions of up to 90 minutes where people have little issue, with something like 45 minutes on average - effectively the duration of a meeting. That's where we're focusing our efforts, where productivity and collaboration intersect.

I think that from the perspective of investing in these technologies as a financial institution to change the way people access financial data all day, it's not here yet. However, to get there, we need to start building these platforms today. That has been our intention, and I think that even in the meanwhile, being able to have a remote meeting with 4-6 people from all over the world, and then bring in content like presentations or generic web pages, will provide a level of utility out of reach of even some of the $300K teleconferencing solutions out there.

Regardless of the hype the industry has received, these headsets are still in their infancy. One of the big steps is about to be taken by Oculus's upcoming Quest headset, due to launch in Spring 2019. The big step this HMD takes is detaching the umbilical cord to the computer while providing the first fully 6DOF standalone headset (both HMD and controllers). They even threw in a healthy resolution bump, which we're excited about. This could well be the 'BlackBerry' moment for VR.


Agreed, the name collision is unfortunate - we actually incorporated before Daydream was announced. If there are real issues / conflicts with the name, we're the smaller guys here (our team is just 4 people), so we'll obviously make the corrections that we need to make over time!

Let me know if my response to the Bigscreen question above is sufficient, or if you have other specific questions about anything. Would be happy to dig deeper into anything. Super excited to share the hard work of our team - we've basically been quietly coding for years now, so this is the first time we really get to talk about what we've been up to.


Resolution is a big deal for sure, but because we're using chromium instead of desktop sharing we are able to set the resolution to ensure fair visibility for most content sources. Generally, we see people trying out Dream and bringing up CNN or NYT and having little issue with reading articles. Sure, if the resolution of the HMD was better we could do a lot more - but we set the parameters to optimize for content viewing, and also added FXAA to help without hurting low-spec machines in terms of performance.
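Some rough arithmetic behind matching the browser resolution to what the HMD can actually display - the FOV figure below is approximate, and the 40-degree panel is a made-up example, not Dream's actual layout:

```python
# The HMD's angular resolution caps how many browser pixels are
# actually distinguishable; rendering the page above that is wasted.
HMD_HORIZONTAL_PIXELS = 1080   # per eye, Oculus Rift CV1
HMD_HORIZONTAL_FOV_DEG = 94    # approximate per-eye horizontal FOV

pixels_per_degree = HMD_HORIZONTAL_PIXELS / HMD_HORIZONTAL_FOV_DEG

# A virtual screen subtending 40 degrees of the user's view:
panel_fov_deg = 40
useful_panel_width = pixels_per_degree * panel_fov_deg

print(round(pixels_per_degree, 1))   # ~11.5 pixels per degree
print(round(useful_panel_width))     # ~460 useful pixels across the panel
```

This is also why lens distortion and sampling quality matter so much at current densities: there simply aren't many pixels per degree to spend.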

Dream is currently only available for Oculus Rift - and the video was actually shot with an in-engine camera that we developed, and captured on a mirror pipeline at 4K - so I think the blurriness in the video may be a streaming artifact? Here's a link to the Vimeo, which might let you watch it at the 1080p resolution we scaled it down to: https://vimeo.com/291432708/4c32095226

We're excited to get Dream onto other HMDs, especially the mobile standalone ones coming soon - really great that the Quest is going up to 1600x1440 as it will make use cases like ours work even better!


We're super excited to get this out after nearly 3 years of work by our team - we've put together some further thoughts here: https://medium.com/@idanbeck/announcing-dream-754c0f374da0

Dream is now available in early access on the Oculus store, so we'd really appreciate any feedback and thoughts people have. We truly believe that immersive technologies like virtual reality can make remote work and collaboration better than existing 2D form factors - especially as the new standalone VR headsets like Quest come to fruition in the coming year.

Dream has been built entirely from scratch, so we got to rethink a lot of the stack. We prioritized certain things, like networking and UI, and we're really proud of the outcome. Doing so also meant it took us a lot longer to bring a product to release, since there was a lot more to do - but it allowed us to integrate WebRTC at the native level as well as chromium (by way of CEF) so we can do things like bring up our keyboard when a user hits a text field.

Hope people like it, and want to say thanks to everyone that made it possible!


I'm a big believer in the potential of VR for remote collaboration, and as a fully remote VR development team we already make some use of it. Congrats on shipping, but from what I've seen so far it's not clear to me what this offers over Bigscreen or Oculus Dash desktop sharing. There's a bunch of functionality I'd like to see in this space that nobody has really nailed yet, and while this looks like an interesting project, the articles I've seen so far don't do a great job of explaining what new functionality it brings to the space.


Would be super interested in what kind of functionality you're specifically looking for, or is it more that existing functionality doesn't hit the mark? Our goal with this release is to get some great feedback so we can make Dream better, and more useful for users so please let us know what you'd like to see!

To respond to some of your other questions - one thing that we've noticed isn't super clear is that Dream is not doing any form of desktop sharing. We integrated chromium at the native layer (by way of a forked version of CEF) and the content views are all web browsers. This allows for a level of integration with the rest of our stack in a way that is difficult or impossible to achieve if you're doing desktop sharing (we actually built desktop sharing, but disabled it in the build for now until we can solve some inherent usability problems).

We're big fans of Bigscreen, but I think they're heavily shifting their focus toward entertainment and watching movies together in VR. Also, we had been working on Dream for 1.5+ years when Dash was announced, and were excited to see some similar ideas there! We're trying to make VR a viable solution for remote working and collaboration, and this has led to many hard decisions - especially as we've decided to build the entire stack. That obviously meant it took us a lot longer to get something out there, but as a result Dream is a lot more intuitive and seamless than you might expect.

For example, our keyboard was heavily inspired by an early Google VR experiment we saw, but after building out a version of it we quickly understood why it wasn't getting people to a viable text-entry solution. We built our own collision system and "interaction engine" to allow views and apps in Dream to respond at the event level of "touch start, touch end", similar to what you'd expect when building an iOS app - and underneath this, the interaction engine updates the collision / position of everything in real time. As a result, we've seen people hit 30-40 WPM on our keyboard thanks to the tactile cues we've included (audio/haptics) as well as a kind of activation region, which allows you to really time and feel out the key press. Definitely hard to describe or show in videos since it's all happening at 90 FPS - but hey, it's a free download, so give it a shot!
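The activation-region idea can be sketched as a tiny state machine with hysteresis - the depth thresholds and event names here are illustrative, not Dream's actual interaction engine:

```python
# A key fires touch-start once the fingertip penetrates past a press
# depth, and touch-end only after it retracts past a shallower release
# depth. The gap between the two thresholds keeps sensor jitter around
# a single depth from double-firing keys.

class KeyActivationRegion:
    PRESS_DEPTH = 0.010    # meters past the key surface to fire touch-start
    RELEASE_DEPTH = 0.004  # retract above this depth to fire touch-end

    def __init__(self):
        self.pressed = False

    def update(self, penetration_depth):
        """Feed per-frame fingertip depth; returns an event name or None."""
        if not self.pressed and penetration_depth >= self.PRESS_DEPTH:
            self.pressed = True
            return "touch_start"
        if self.pressed and penetration_depth < self.RELEASE_DEPTH:
            self.pressed = False
            return "touch_end"
        return None

key = KeyActivationRegion()
frames = [0.0, 0.006, 0.012, 0.011, 0.003, 0.0]  # one deliberate press
events = [e for d in frames if (e := key.update(d))]
print(events)  # ['touch_start', 'touch_end']
```

Pairing the touch-start event with audio/haptic feedback is what gives the "feel" of a keypress despite there being no physical key.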

Dream never asks you to revert to your monitor or take off your headset - this was a strict rule. It means that everything from logging in to inviting someone new to your team had to be possible in VR. To accomplish this, we created a kind of Chromium integration with Dream so that we could run web views that manipulate our engine directly. To us, asking the end user to remove their HMD for any reason is equivalent to asking them to restart their computer - it's really not acceptable.

Our goal is to demonstrate how immersive technologies like virtual reality can enable remote collaboration and communication use cases. Especially in terms of how VR, by comparison to existing 2D formats of video/voice, provides an improved layer of presence through nonverbal communication cues.


Ah, ok, yeah that wasn't clear to me from the articles I saw, it sounded like desktop sharing. I went to the BigScreen talk at Oculus Connect and yeah it's pretty clear that their focus at the moment is on people watching video content together in VR but we have used it with some success for code reviews (desktop sharing is key there, I can jump between Visual Studio and the Unity Editor).

Your keyboard sounds interesting and I'll have to check it out but to be honest it's not a big selling point for me as a touch typist: I don't have any trouble using my keyboard in VR. We do have some non touch typists on the team though and it's not always convenient to put your Touch controllers down to type so I can see it being useful.

My ideal VR collaboration app would support at least solid desktop sharing support, well implemented VR whiteboards (including annotation on the shared desktop) and 3D sketching like Quill / Tilt Brush. We use whiteboards and 3D sketching in Rec Room but they're quite primitive. The sketching doesn't have to match a dedicated sketching app but should be better than Rec Room.

It would also be useful to be able to easily import 3D assets for review, Dash support for GLTF is looking like a good implementation of that. Custom environments would also be useful for us so we could do collaborative design of environments for our own VR app.


You should give Hubs a try, it's a WebVR based collaboration and communication tool we're working on at Mozilla. Screen sharing is behind a feature flag (and not quite working atm) but you can bring in GLTF models, images, and do simple 3d drawing. Also, by being the browser, it avoids a lot of the installation and coordination steps of native apps, and works on phone and PC.

https://hubs.mozilla.com


I was actually at that session as well! It was a bit surprising for me to hear that they're taking such a strong stance with regards to shared movie watching, but generally it's always felt that Bigscreen has been tuned for gamers / entertainment type use cases.

Totally hear you with regards to being comfortable touch typing while in VR, but I think that this is a pretty big barrier for a lot of users that are not as comfortable in VR. In our early experiences demoing Dream to people, we noticed just how overwhelming going into VR is for a lot of people that have had either no exposure, or very little to it. We used to have computer keyboard pass-thru, and this could be something that we add back as we continue to iterate and make the experience better.

In terms of desktop sharing - we used to have this capability, and it's still in the build but disabled. We pulled it back due to some inherent usability issues that we're working on as well as performance limits on low-end machines.

Annotation (whiteboarding on shared content or a white screen) is next up on our road map, we just didn't have sufficient time to get it into the initial release - so excited to hear that's something you would be looking for. Similarly, 3D model import / review is something that we're about to tackle as well. One of the big things we're excited about exploring is actually using chromium to do this vs. forcing every client to download what could be a big file, or push performance limitations on a varied set of machines. Instead, we'd find a way to utilize WebRTC to stream the content in a way that provides a 3D review experience for all clients with no performance limit.

On environments, we agree as well - right now we have one environment, and have 2 others in the pipeline. In the future, it'd be great to allow for 360 stereo videos to be used as the environment or allow teams to customize their environments if they've got the in house capabilities to do so!

Thanks for the feedback, and hope you get a chance to try Dream out and give us some hands-on feedback if you get a minute as well!


I've always thought that conferences / education / seminars were the big sell here. Productivity didn't seem like a non-starter, simply because work is such a cultural thing with all of its own stresses, and getting people to adopt these new techniques is like pulling teeth. Students and conference-goers who want to save on the extremely big bucks that travel and classrooms and conference halls can cost will see this as a huge win.

Also, the isolated / focused environment of VR could be a big plus for learning as it blocks out so many distractions. I'd love to see a study done around that.


I'm also extremely excited about education applications for VR, especially those that benefit from real time communication. For example, learning a foreign language from a tutor that lives in the origin country of that language - and being able to interact with them naturally, including the various nonverbal cues that are crucial when learning a new language in the same location as someone that has it mastered.

At a slightly higher level, I think VR can unlock a lot of "centralization of expertise" use cases. Basically, there are resources that would naturally be distributed but end up centralized due to the way the expertise is consumed - things like call centers, or tutoring. If those people could instead operate from wherever they happen to be located while providing their expertise to customers wherever they may be, it could be super useful for both the providers of that expertise and its consumers.

Definitely excited to see what kind of things applications like Dream can enable!


Great work on this. I lead a remote team, and if the vision here were fully realized and available on the Oculus Go/Quest, I’d trade some travel budget to equip the entire team with it.

You asked what features it would need. My use cases are basically what we do when we bring the team to one location for face-to-face work. While there are use-cases for typical distributed work, there aren’t many things we can’t do with the tools available: Slack, Google Docs, email, git, etc.

What is missing that only VR can solve?

Generally:

Space. No matter how many monitors we place on our desk, we run out of real estate quickly. Collaborative planning and problem solving often requires space to visualize our thoughts and ideas.

Social presence. When human problems overshadow technical problems (i.e., all the time), the feeling of being fully present to one another is necessary.

Specifically:

A whiteboard. In VR, you could make something far better than the $10k devices out on the market today (I’m making this number up, since “request a demo” is the standard price)

Emotionally engaging avatars. Seeing a head turn isn’t enough. We need to be able to see another person’s unique facial expressions in order to connect. With goggles covering half the face, I don’t know this is possible, but ML research in 3D facial reconstruction from 2D photos has shown a lot of promise. Perhaps in-painting combined with reconstruction would do the trick.

Beyond these two items, I see a large space for reimagining UI in VR. Unlike a monitor and mouse, VR knows generally where you are looking, and in the case of Oculus, what you are saying. Voice commands that are contextual to eye gaze (tracking would be better) could be combined with VR’s version of a right-click context-menu as an HUD. It doesn’t need to be as crazy as Iron Man, but given a limited number of options, voice commands seem an ideal fit for VR.

A few years ago, I prototyped a HUD/Voice UI as the central feature in a VR browser, but I found resolution on my DK2 to be too limiting. Maybe the time is right to try again. I’d love to see someone try!


Thanks for the kind words! We are totally on board in terms of getting Dream onto the upcoming Quest, and standalone VR headsets in general. We've had a bit of success in getting businesses to install VR setups in a conference room, but when this stuff is a $400 standalone device that doesn't cannibalize your work computer or phone, I think it completely changes the viability of using VR for productivity / collab (especially on the go).

Thanks for the feedback on what kind of stuff you'd like to see! Deeply agree on the whiteboard thing, and it's one of the next features we want to tackle on our road map. In particular, we're excited about the ability to annotate either a white screen (normal whiteboard) or content that is being shared. The sensitivity of the controllers plays a big part here, but even for something like marking up a presentation or document this could go a long long way.

I agree that VR headsets have a ways to go in terms of detecting facial gestures and emotions. I've seen some really cool tech putting galvanic response sensors in the facemask, which could do everything from detecting overall eye direction (up, down, etc.) to emoji-level gesture detection. I think over time these features will become standard in most HMDs. In the meantime, we tried to walk the line between uncanny / creepy and useful / engaging. Perhaps it doesn't come through as well in the video, but when you're in VR, just the head motion / hand position goes a long, long way. Also, we built our entire stack to ensure that we send this data as quickly as we get it from the HW and keep our latency down - this provides a surprisingly engaging experience, and I'd love to hear what you think if you get to try it sometime!

We are keen to explore the addition of voice and contextual gaze based actions. We had a bit of this built before launch, but for the feature set we launched with didn't really use much of it. With regards to voice, we are planning to integrate with something like Google Voice Engine but wanted to make sure we had a good text entry method for things voice has a really hard time with like URLs, passwords and usernames - these were in the critical user flow, since they're required for logging in / creating accounts as well as selecting what kind of content to share. We also added Dropbox / Google Drive integrations to make bringing in content more fluid and intuitive, so overall you can kind of see where we've been prioritizing but we definitely have a long ways to go and more to build!


I gave it a quick try, some initial observations:

- You don't appear to be using SDK layers for your screens, they would probably help with text readability.

- UI isn't very discoverable and I couldn't find help.

- I couldn't find a way to zoom / scale a web view, this would help with text readability on a site like HN if I could set the zoom to say 150%.

- Related but different, I couldn't find a way to move / scale a panel.

- I couldn't find a way to rotate my view and I spawned at an awkward angle so had to stand at an angle to my cameras to face the large screen. I'd normally expect to find snap turn on the thumbstick in most apps.

- The keyboard works quite well, nice job. I feel like a Swype style keyboard could work even better for VR though.

- Keyboard behavior was a bit unintuitive on sign up, I tried to touch the next input field to select it and my keyboard disappeared, I didn't immediately notice the next / previous buttons on the keyboard itself.

- There's not a lot to do initially as a lone user. Some kind of onboarding / tutorial would be good and it would give a better initial impression if there was a bit more to do. I know the focus is on collaboration but I think you would benefit from a hook for individual users trying it out.


Hey! I'm Doug, one of the founders of Dream.

Thanks for taking the time to leave detailed feedback like this. We need as much of it as we can get.

A few of your points come down to discoverability / help. You are correct. We agree that before the app comes out of early access we have to drastically increase that. Once we understand that the core functionality is serving its intended purpose, we will turn our attention to onboarding, help and discoverability.

It's also fair to point out that pixel density in the headsets isn't really great yet. We are of the opinion that legibility will be a problem that solves itself in the near future. User frustration might force us to reconsider that opinion sooner rather than later.

Lastly, with regard to the position of UI, it is something we have worked on quite a bit and there is still plenty of room for it to get better. We have shied away from infinitely adjustable UI size and positioning and instead tried to make something that positions itself automatically. Right now, the UI positions itself based on where your head and hands are and how long we think your arms are. The implementation is pretty naive and we intend to improve it over time. This might be another area where we have to get more mechanical to decrease frustration.


Yeah, headset resolution is limited but that's why it's important to make the most of what you have. As I understand it the SDK quad and cylinder layers bypass the lens distortion pass and effectively analytically ray trace the optimal sample point for direct display. Carmack is always telling people to use them for anything text heavy. I haven't done a side by side comparison but this is probably why Dash text is a bit more legible than yours. Future headsets will have higher resolution but we're years away from high enough for this stuff not to matter.

As an experienced VR user my expectation at this point is that I can move panels around with my hands and ideally scale them too. Doing that well and in a way that doesn't trip up novice users isn't trivial though, Dash still has some work to do there.

I don't mean to be overly critical, my first VR app (for the DK2!) was written from scratch in C++ so I know how much work goes into developing something like this without an engine. I want this type of app to succeed so I can use it though so the feedback is intended to be constructive!


I don't take this as overly critical. We need and welcome feedback.

Looking at layers for text is something we will definitely do. We are happy with SDF text in our UI, but text inside of Chromium is miles from acceptable and will continue to be a focus.

As far as scalable and user positional UI, we currently hold the opinion that it doesn't offer enough value to compensate for the complication that it introduces, especially for novice users. At the same time, we realize that is a contrarian point of view and user frustration may force us to change it. We have to remain open to changing opinions like these.


Agree with Doug's response, but wanted to quickly say thanks for pointing us at the SDK layers thing - will dig into it!

We spent a fair bit of time building out a pretty unique node-based pipeline architecture, where the HMD is effectively just a sink node. We made an attempt to make text / content more legible by way of an FXAA implementation, which has limited performance impact compared to something like MSAA - but it definitely can get a lot better. Will dig into that stuff, and hopefully we can improve it further!
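For context on why FXAA is so much cheaper than MSAA: it only processes pixels whose local luma contrast is high. A simplified sketch of that edge-detection step (the constants follow values commonly seen in FXAA implementations; this is not Dream's actual shader):

```python
# FXAA first converts each pixel to a perceptual luma, then flags only
# pixels whose local contrast exceeds a threshold for directional blur.

def luma(r, g, b):
    # Rec. 601 luma weights, as used in common FXAA ports
    return 0.299 * r + 0.587 * g + 0.114 * b

def needs_aa(center, n, s, e, w,
             edge_threshold=0.125, edge_threshold_min=0.0312):
    """Flag a pixel for the FXAA blur when local luma contrast is high."""
    lumas = (center, n, s, e, w)
    contrast = max(lumas) - min(lumas)
    return contrast >= max(edge_threshold_min, max(lumas) * edge_threshold)

flat_region = needs_aa(0.50, 0.50, 0.50, 0.50, 0.51)     # smooth area: skipped
text_edge = needs_aa(luma(1, 1, 1), 1.0, 0.0, 1.0, 0.0)  # hard edge: processed
print(flat_region, text_edge)  # False True
```

Because most of a mostly-static web page fails this test, the expensive part of the pass runs on only a small fraction of pixels - which is what keeps it friendly to low-spec machines.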


So how much inspiration was rekindled when you guys watched Ready Player One after working on this?


We were definitely anticipating the movie. We saw it as a team. We had hoped it would do more for general recognition and understanding of VR, but I think that was not the effect. Obviously that was in no way the job of the filmmakers. They are there to make an entertaining movie, not advance the adoption of VR.

I will say though, the book itself was a big part of me personally deciding to get into this project after I left my last one.


I agree with you, but it also IS income inequality. The marginal income increase (or new income) is going to the top centile and decile, and is not at all proportional to the distribution of either income or wealth.

" The average income for the richest 1 percent of Americans, excluding capital gains, rose from $871,100 in 2009 to $968,000 from 2012-13, he wrote. The 99 percent, on the other hand, experienced a drop in average incomes from $44,000 to $43,900, Wolfers said. The calculation excludes government benefits in the form of Social Security, welfare, tax credits, food stamps and so on. "

From http://www.politifact.com/truth-o-meter/statements/2015/apr/...

However, the research in Piketty's book also shows this up through 2010, and I remember the figures he was drafting showing 50% of new income going to the top decile going through 2010 - so Bernie might be a bit aggressive in his numbers, but even 50% of new income going to 10% of the population is just bad. I remember in the book he outlined how only places like Colombia or otherwise Fascist regime countries are worse than the US.

I can dig up the numbers from the book - I read it on ebook so it's always a hassle waiting for the pages to refresh looking for info :/


The answer to that isn't more taxes and regulations. Government has completely failed us. The answer is for the people to get off their butts and do something about the problem.


Agreed - something fundamental really needs to change. I'm most worried that it'll come as a result of this election to some degree. I don't think people are supporting Trump because they like him, but I think that they've almost got no choice but someone that might (even if totally by accident) blow up the system. I think this will be totally nuts, but then again - if he loses I fear things may get violent.


> I don't think people are supporting Trump because they like him, but I think that they've almost got no choice but someone that might (even if totally by accident) blow up the system.

They're supporting Trump because they've been failed by the status quo, and Trump has, at the very least, recognised that, versus the establishment who just pretend everything is OK.

His 'fixes' for the economy aren't the typical progressive economic fixes, but rather populist ideas like strong-arming corporations into keeping American jobs or keeping money in the US which, while not necessarily drastically reducing inequality, will have the result of at least giving some of the working class a boost. Globalism absolutely has allowed many working class jobs to be outsourced to areas of cheaper labour.

Whatever you think Trump is, he's a result of the sentiment of his supporters. He's the least-ideological Republican candidate in recent history (which is why Ted Cruz campaigned on the idea that Trump isn't a 'conservative').

> if he loses I fear things may get violent.

Things are already violent. Look at the news: riots in multiple US cities, violent protests everywhere, vast swaths of unemployed people resorting to crime. These are all the result of failed economic policies.

You're right though - if the status quo continues, the violence that's occurring today likely will escalate.


I'm a huge fan of Piketty, and have been slogging through his terrific (but long) book. It's changed my perception of a lot of things.

I don't want to challenge capitalism - competition drives innovation, and the need to survive and to make a better life for oneself is at the core of that. However, we need to admit that not all people are driven by this, and in our world (at least in the United States) it's not about survival in terms of life and death and hasn't been for some time.

The people on our streets are not there because they deserve to be, and neither are many of the people in the boardrooms. Today's disparity between the haves and have-nots has less to do with capitalism than with the reality of growth, which Piketty outlines really well in his book and which has definitely changed the way I look at it. It's hard to see value in supporting the status quo when it's only going to return 1-2% year after year - especially after an excessively long period of unprecedented growth driven by the devastation and rebuilding of the 20th century. The result, in my opinion, is that a lot of the wrong things get valued.

I feel like we're at a pivotal point in human history. We're at income disparity levels second only to countries scourged by ISIS, and at wealth concentrations comparable to Europe before World War One. Marginal income increases have little to do with marginal productivity increases of either companies or employees. I feel like the fundamental definition of value needs to be re-evaluated.

I really think that the nature of capital/wealth has changed. When cost of living dominates anyone's ability to be a productive member of society, that's when we should reconsider either redistributing the wealth that's producing that situation (which hardly ever works) or how we quantify and define wealth in the first place. Shifting this balance toward what we as a society actually value is, I think, the best way to start to really change things.


All that you said is right--we have allowed a monster to grow. And killing that monster is going to require sacrifice now. It never comes any other way.


Yes - either we need to find a new "source of value" or some form of tear-down needs to take place. Historically, tear-downs have been at the core of human growth and progress, so I think that will happen before people are forced to change their minds.


I really like this idea. Ads are some of the most distracting elements on the web, but at the same time we all understand the need for them, since there's no better monetization strategy for much of the valuable content on the web - and content is not free to produce. When Hulu, for example, started offering a higher-priced subscription for ad-free shows, I was eager to use it since I value my time, and likewise I value my attention on the web.

I'd love to see this extend to things like online music and video as well - especially in music, a way for musicians to monetize passive plays would be terrific.


>we all understand the need for them due to no better monetization strategy for much of the valuable content on the web.

This just isn't true. There are plenty of other monetization schemes on the web that work for different types of content. (Subscriptions, donations, subsidies (both corporate and governmental), patrons, etc.)

>Would love to see this extend to things like online music / video as well

It pretty much already exists in the form of Netflix, Hulu, Spotify, Apple Music, etc.


Content creators get effectively zero income/revenue from any of the services you listed. Patreon is attempting to do some cool things there and I think is getting some great results, as are crowdfunding approaches. However, people I know who have syndicated content on any given video platform don't see much return unless it's a huge hit - so it's hard to get quality content, which is why a lot of those platforms are producing or funding exclusive content themselves. In terms of music, yeah - artists are making nothing, and the services aren't making anything either, but that's a whole different story.

There has been a failure, in my opinion, to successfully monetize valuable content on the net. I don't disagree with you that there is a plethora of monetization schemes that are effective on the web. However, given the cost of content creation, content is usually a loss leader for a separate business model or a separate business entirely. I think this leads to crappy content, since now pretty much every article has some kind of agenda and there's little journalism actually taking place. I actually really like Vice for this reason - I think they've found a niche where people are willing to pay for quality content. Same for The Information, which produces terrific journalism and great long-form pieces.

