Hacker News | xyzzy4's comments

It's difficult to say if there will be a financial crisis, but multiple current economic indicators are not predicting one any time soon (meaning in the next 6 months).

This isn't counted in those indicators, but S&P 500 earnings are also predicted to rise substantially by December 2018, according to S&P Global. Credit Suisse's Global Wealth Report predicts a 5% annual increase in wealth among all adults globally over the next 5 years.

Houses in major metropolitan areas have gone up about 6% annually on average, counting recessions and the popping of bubbles. Timing the market could help, but it's very difficult. I predict they will keep rising at a 4-7% rate for the next decade, but I could be wrong. A recession is possible, but I wouldn't delay buying a house right now. That said, definitely buy within your means and don't count on prices rising, because the future is very difficult to predict.


Why is gold worth $5 trillion market cap? If someone steals your gold, you can't even get it back easily.


In computing there are relatively few mitigations against a breach of physical security. Almost no one uses encrypted volumes, though that practice is growing rapidly. Even then, it says nothing of online, connected access to the medium while the volume is mounted.

If someone steals your gold, the logistical complexity of getting away with it is much more onerous. Again, this is a 3000-year-old problem that is almost entirely solved.

The risk of the medium of value is priced into the medium itself.


If you want to hear voices, just take 400-500mg of Benadryl. Enough to hallucinate, not enough to hospitalize you.


Offices are much less natural than afternoon naps.


Sets are just like hashes where the value is always "true" for each key.


While it's true that you could implement one that way, it's very nice to have the set operations implemented: intersection, union, subtraction, etc. And having a uniform set type makes API signatures more consistent.

Plus you may be able to optimize Set<T> to use less space than Map<T, bool>.


> While that's true you could implement one that way, it's very nice to have the set operations implemented

Unrelated, but does anyone know why the new JavaScript set implementation is so limited? Why didn't they bother doing this right?


I think I remember reading a claim that they pushed for a small API surface in order to make sure it got through. Now that it's part of the language, anyone else can work on getting those extra features in.

You can implement most basic functionality easily enough, MDN even has an example [0]. Although I agree that it should really be part of the language.

https://developer.mozilla.org/en-US/docs/Web/JavaScript/Refe...
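For concreteness, here is a minimal sketch along the lines of the MDN example: the missing operations built on top of the built-in Set. The function names are illustrative, not part of the standard Set API.

```javascript
// Minimal set operations over the built-in Set.
// These names are illustrative; they are not standard Set methods.
function union(a, b) {
  return new Set([...a, ...b]);          // duplicates collapse automatically
}

function intersection(a, b) {
  return new Set([...a].filter(x => b.has(x)));
}

function difference(a, b) {
  return new Set([...a].filter(x => !b.has(x)));
}

const odds = new Set([1, 3, 5]);
const small = new Set([1, 2, 3]);
console.log([...intersection(odds, small)]); // [1, 3]
```

Each operation runs in O(n) over one operand, with O(1) expected `has` lookups on the other, which is the main thing a plain array-based implementation loses.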


Because it is JavaScript, and there's some kind of unspoken rule about not doing things properly and instead releasing broken things.


This exactly. I hear it over and over: "just use a hashtable"... which totally misses the point.


Does it really matter what the 'value' is? A set is surely implementable as a supertype of a hash, where the value is totally arbitrary; it only matters that the entry exists.

With:

    {'A': true, 'B': false}
you seem to be suggesting `B` is 'not in the set'. What's `C`?


Not in the set. You're describing the same thing as the parent comment; you're just saying "arbitrary value" where they said "true".


They're not really the same thing then. In OJFord's implementation, there's a one-to-one correspondence between states: the element is a member of the set iff it exists as a key in the hash table.

In the true/false implementation, two distinct hash table states (false or key not set) map to one set state (not a member). The program must check whether the key is set, and then check its value.
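To make the distinction concrete, a small illustrative sketch: with the boolean-value encoding, membership needs both a presence check and a value check, while key-presence alone needs one.

```javascript
// Key-presence encoding: membership is simply "does the key exist?"
const presence = new Map([['A', true]]);
const inSet = presence.has('B');              // false: a single check

// Boolean-value encoding: 'B' is present but marked false; 'C' is absent.
// Two distinct table states must both map to "not a member",
// so a correct membership test needs two checks:
const flags = new Map([['A', true], ['B', false]]);
const isMember = k => flags.has(k) && flags.get(k) === true;
console.log(isMember('A'), isMember('B'), isMember('C')); // true false false
```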


If A.I. can't fold my laundry, I wouldn't trust it with my car.


Driving a car is currently a specialised job, as is folding laundry. The opportunity for autonomous vehicles is clear: you can start by replacing the cost of paying every person around the world who drives for a living, and turning income equivalent to theirs into profit (likely at somewhat less than 100% efficiency, but who cares, given the massive scale?).

The next bit would be converting private trips into automated rides; replacing car ownership. The unit profits will decrease as part of this effort, but the opportunity is again huge.

How will you monetise laundry folding automation? At best you can replace just some of the people at all the commercial laundries and turn that into profit, but many laundries may hire one person who does everything. For home use you could sell a machine, but how much would you need to charge for it for that to equal recurring revenue from autonomous vehicles and how much are people willing to pay for a laundry robot? (even robotic vacuums are still quite niche despite probably being more convenient and costing less than LaundryBot)


> replacing the cost of paying every person around the world who drives for a living with profit equivalent to their income

That's not how it works, because your competitors will undercut you. Then you will undercut them, and continue the race to the bottom until the natural price is reached.


The efficiency will certainly be a lot less than 100%, and it does seem likely that at least two companies might bring the tech to market within a couple of years of each other. I think whatever the starting price point is, there will probably be so much of a demand/supply gap that the initial price point will hold for a number of years.


Laundry bot would make no money, I think. People are happy to do this task on their own or via a maid.


Laundry bot would definitely make money, tons of it. Robo maid is like numero uno on things people have historically wished robots would do in the future


Before the invention of modern washing and drying machines, sure. But today many people in big cities don't even have a washer and dryer in their home, or may not have one in the building; this problem spans many different use cases of how people consume the service. I'd consider asking things like:

* Is the barrier higher and the value lower because they have laundry in their unit? Does the value increase marginally because they have it in the building? Or is it more lucrative because they don't have laundry at all?

* And does a solution scale easily across these different use cases?

* Providing continuity, even at a pure software scale, makes this a particularly pricey arena to enter, because a person is required to fulfill this service. To make it economical, we'd have to consider co-opting existing behaviors or economic patterns. Are there _multiple_ existing patterns that we can enhance?


Yep, I paid around $2000 for a nice Hitachi drum-type washer-dryer machine last year.

I would certainly, without any hesitation whatsoever, have been willing to pay $5000 or even a bit more for one that, instead of just washing and drying all our clothes, took a pile of dirty clothes as input and output a few stacks of washed, dried, folded, and sorted clothes.

Anybody with children would want this; those who also have sufficient money would buy it.


It would be simpler to invent wrinkle free clothes and convince your wife clothes don't need to be sorted.


People are downvoting you. But I think this is a really good point...


It isn't, though. The two tasks aren't very similar. It's a pithy put-down if you already don't like the idea of driving AI for other reasons, but nobody who's looking to objectively evaluate a technology for use in cars would say, "Hang on, let's first try to make it fold laundry."


> The two tasks aren't very similar.

Care to elaborate? Is folding laundry harder than driving? Don't both involve following visual cues and making continuous corrections based on them?

>It's a pithy put-down if you already don't like the idea of driving AI for other reasons

And your comment can be seen as a pithy attack on criticism of something you are overly enthusiastic about "for other reasons".

>nobody who's looking to objectively evaluate a technology for use in cars would say, "Hang on, let's first try to make it fold laundry."

That is not the point. The point is to evaluate how capable current AI/visual-processing technology is.

And why not test advances in AI with something much safer, like folding laundry, instead of putting it in a car that can actually kill people?

It is a good point.


AI isn't a magic black box that you just plug into stuff. There are some core mechanisms, but those are already well tried in many other applications. The hard part is developing a specific AI for driving.

Also, self-driving cars are not new; they've been in development for over thirty years and have logged millions of miles, many on public streets. The remaining problems will be in edge cases, and you definitely can't use your "folding AI" to test for those.


No it isn't. Modeling how a robot interacts with cloth is actually quite difficult dynamically. Much more difficult than how a car operates.



If I understand it right, it takes this robot ~ 90 minutes to fold 5 towels.


The car is a legitimate next step for AI: computers and sensors are already integrated and have been handling basic functions like parking. Computers are not integrated into the manual laundry task; this market has decided computers are not economical or profitable to integrate into the chain.


Laundry seems like more of a mechanics problem. I'm sure it's doable, maybe not particularly cheaply/compactly though


>Taking breaks to play games, chat around coffee, or taking a few minutes to walk around the office is a non-existent element of life. An 8 hour work day is exactly that: 8 hours of pure work.

I don't understand how some programmers can supposedly work for 8 hours straight. I often have to take breaks to think about how I want to best implement the next thing, or what exactly I should be working on.


Writing code is different from reading code; each suits a different work pattern. One is software architecture and the other is software archaeology.

In the described catfish role, you don't get to spend any meaningful amount of time thinking about how to best implement the next thing; sure, you do some of that, but almost all the time gets spent trying to understand how the heck that thing was implemented (and why) (and why??????), and that means that (a) you don't really get progress unless you're looking at or touching that artifact, and (b) you really want to avoid context switching; if you've spent three hours building up a vague picture in your mind about what something large and weird is supposed to mean, then you'll have to repeat half of that if you take a lunch break and talk about other matters. When you do figure that out, though, the fix or rewrite generally is quick, easy and routine.


This is maybe one of the greatest advantages I have gained in my years of experience at one company. When I started, I was inheriting the code of other people, with all the baggage of trying to understand it and keep the mental model intact when I needed to make changes. Over time, I've largely rewritten all of the products that are still current, as requirements and dependencies have changed. I'm reminded of the probably apocryphal story of Nikola Tesla and Henry Ford[1]

[1] http://www.snopes.com/business/genius/where.asp


This is a perfect description of what I was expressing. I like the archaeologist angle (knowing nothing of archaeology, of course).

So much of the work is browsing files and reading, which is different than creating. It's a different challenge: enjoyable in its own way.


Yesterday I had about 5 hours of straight work within the usual 8 hour window (different from the "you only get x < 8 hours of not necessarily continuous productive work per 8 hours" some people like to claim), not even getting up from my chair once. I had to psyche myself up for it after lunch though. The reason is I had to work on finding a solution to a deep and complicated bug, and I knew it would take a long time. (I had to work on it some more today to fix some issues with what I settled on last night.) But I was able to devote my entire focus to it and put aside code reviews and doc reviews and email checking and so forth until I had a potential fix.

Most days I don't have such long periods of continuous focus. The culprit is usually multitasking. Context switching carries a cost for me. Even if the context is justified (important meeting, hammock-time to figure something out, lunch break) there's a cost. I'm somewhat jealous of people who seem to have such a minimum cost and can go even more than 8 hours grinding away and working very productively indeed, since I can taste that focus on occasion, and it'd be cool to have it always. I probably wouldn't spend most of it on work-work though.


I found a super helpful skill to develop is the ability to context switch quickly. Don't be a slave to "the zone". Interruptions happen, and the person who has the discipline to require only 2-5 minutes to get back into the programming zone is more productive than the one who needs 30 minutes after his/her concentration gets broken. I've found the single biggest thing that helped me to work on context switching quickly is having a child--because kids don't give a shit if you're in the zone. They just barge in and interrupt you, and you have to deal with it because family.

I mean, so you're super-productive in continuous 5 hour batches?? Great, we all are! But no business on Earth is like that. Are you also super-productive when your day is punctuated by meetings, phone calls, and open-office distractions? That environment seems to be more common in this day and age.


I don't think most developers are wired to be able to do this, no matter how hard we try. You could apply this maxim to anything: "Just try really hard and practice a lot and you can run a 3-minute mile!"

Most productive developers I know attend meetings and then disappear in the building to a random nook to write code. Or they show up at noon and stay till 8 so they get a few hours without interruptions.


I'm not a fan of the sink-or-swim approach to gaining skills, because when people advocate for it they discount the probability of sinking. I know some people who had kids and tried working from home, and not one of them could build the skill to be as or more productive than they were before kids. And some of these people have kids that are now entering middle school; it's not as if they haven't had years of practice. But it's fine, we don't need to be highly productive, let alone super productive all the time like the catfish.

If you treat meetings, calls, socializing with your coworkers, etc., as part of "work" instead of distractions from "real work" (which you do best in "the zone"), then it's possible to be pretty productive like that too. My context switch time depends on what I'm switching to. I haven't figured out a good way to switch from deep coding -> meetings -> deep coding efficiently, but I can switch from code reviews -> meetings -> pinball -> meetings -> emails -> direction clarification... pretty well. It's improved with practice, sure.

But what I find more useful is to plan around what type of day I'm expecting to have; then it's not that hard to be productive in the context of that day or week. Most days I don't even try doing any serious coding before the morning standup, so it's a great time to do other needed things, and depending on what the rest of the day holds and what I need to prioritize right now, I may or may not have some good coding sessions intermixed with other things.

Days of pure zone are rare, which is why I cherish them (at least the positive kind; the bug-zone days are kind of bittersweet, because things shouldn't be this hard and probably wouldn't be in a non-legacy system), but I don't need that to be productive. I don't even need to do serious coding as much as I'd like; a lot of work is just gluing things together and revisiting existing logic flows to make them work slightly differently for some new feature. Overall my average day is productive enough that I'm happy enough (even if unhappy that I can't code in anger as much as I want), and the people in charge of judging my output seem happy enough.

I think about how careerism seems to work at a lot of places: most dev roles involve more than just programming, and even if programming is the single most valuable thing you could do right now, the incentives don't align with that. It's rather annoying personally. But that's probably why the Lonesome Bottom Feeder catfish is at the bottom, and not even a salaried employee but a contractor. They can make really good money, but not always, and not as much as those climbing the ladder. The incentives favor the type of "productive" that goes with broad multitasking, directing, and a few deep dives. Even a non-catfish, non-ambitious dev can do this and advance, though of course the non-bottom-feeder invasive catfishes will do better. Do it for enough years and your average depth across a broad spectrum is deep enough that it would take a while for someone fresh to reach your level with intense focus, catfish productivity or not.


The dirty secret is that most of the people that can grind away for more than 8 hours in a day are producing shit that needs to be re-done by somebody that isn't grinding themselves so hard.


Yeah, that's probably the typical invasive catfish at work. It's worse when those people get promoted after fixing a fire they created, but that happens with non-grinders too, probably more correlated with political persuasion skills to push through risky and sweeping things while forcing dependence (fire fuel) by other teams. Still there are programming gods like Carmack who embrace the grind and put out good shit. It'd be fun to work on a team full of those for a while though they might not enjoy me as much when I tire of the grind.


I guess when you're paid hourly it matters, but yeah, my work day is a bunch of 20-minute to hour-long blasts of coding, interspersed with wandering around the office or bouncing on the giant yoga ball they gave me instead of a chair. Dunno how someone could stay on the screen for more than a couple hours straight.


The only place I ever actually coded for 8 hours straight was a place in which I loved the work and I actually felt valued on a day to day basis. During crunchtime (which only happened once or twice every few months) my manager would wait around after-hours so we could leave together and I wouldn't be the only one left in the building while the cleanup crew went to work. In hindsight I now acknowledge this behavior as his way of showing support even if it meant him just catching up on news and Facebook or writing boring emails while waiting for me to be done coding for the day. He taught me a lot about managing people, and in turn I taught him a lot about being on the cutting edge of our industry. This was a very small company trying to modernize itself in the face of daunting competition.

This was the best position I ever held and was my first role as a lead. The CEO did not know much about technology but seemed to have an innate trust in me that I appreciated greatly. Granted, this gave me the ability to use whatever technology I wanted, so I was having a lot of fun with the day-to-day coding. I wouldn't even eat lunch some days just so I could finish up the task at hand before sundown. Sounds horrible, but it was (and still is) an incredibly exciting experience.

On the other side of town, however, where I worked for Fox Studios (yes, I'm calling them out!), I and the rest of the Contractor Army would be critiqued and yelled at by executive management on a weekly basis, like clockwork. Even though they gave me all the money and resources in the world, I hated every second I spent coding at that job. Instead, I spent a lot of time walking around the lot in hopes of meeting movie stars or something.

I always figured it was the domain you worked in that made it fun or boring, but I'm starting to realize it's a lot more about the environment and people you are around that can make or break your work life.


You can take a break by looking up the next ticket.


Oh hey it's my project manager, I didn't know you were on here.


Are you doing a personal project or are you just going down a list of features that you have to implement? For a CRUD app it's fairly easy to just trudge through the list since you aren't making creative solutions or passionate about what you're doing.


I mean yea, take breaks. But when I do contracting I'll take an exactly X minute break before resuming and still work for 8 hours.


8 straight? I've done it, if you don't count the lunch my wife put in front of me that I didn't remember eating. There is also some value to taking a break. I've walked away from some problems that made my head hurt, only to solve them while playing foosball.

Once I was playing foosball while the site was down. Someone walked in, quite unhappy to see the dev team playing when he thought we should have been fixing it. We explained that we couldn't do anything about it: ops had server issues that we could not help with. There is a time to work and a time to play. I find I'm best with 2-3 hours of work without distraction.


Thinking about how to solve a work problem is work.


I agree. I had some of my best ideas in cigarette breaks (or, alternatively, at night when I was just about to fall asleep).

Also, going to the bathroom can do wonders if your mind is stuck on some problem. Seriously.


Well there could potentially be bigger animals in the future.


80% decrease isn't that much on a log scale. 99.99% decrease would be more worrying.


Your code will only work if malloc() pads with 0s; otherwise the output string will be missing the null character at the end. So I would switch it to calloc(), which is guaranteed to zero the memory, or use memset() to set it all to 0s.


It copies the null from the input string. It maintains the string invariant without silently corrupting data.

Writing C code with well-defined semantics in the face of existing heap corruption is harder. The strn* functions don't do that either though.


The code copies the origin string up to and including the null terminator, which must exist as strlen was used to obtain the size.


It doesn't copy the null terminator because it's using memcpy. It's only copying the string without the null byte.


I think you are missing the +1 after the call to strlen. strlen reports the number of chars before the zero byte; adding one includes it.


I go back and forth between putting the +1 on the first line, or repeating it in the other lines. Putting it on the first line is harder to screw up, but harder to read.

For real programs you basically have to put it in the initial computation, or it will be forgotten somewhere later (maybe in a later commit).
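To make the discussion concrete, here is a sketch of the pattern the thread describes (the article's actual code isn't shown here, and `dup_string` is an illustrative name): strlen() excludes the terminator, so folding the +1 into the initial length computation makes both the allocation and the memcpy cover the '\0' in one place.

```c
#include <stdlib.h>
#include <string.h>

/* Illustrative sketch of the copy under discussion: strlen() excludes
 * the terminating '\0', so computing the +1 once up front makes both
 * the allocation and the memcpy cover it. No calloc()/memset() is
 * needed: the terminator is copied from the source string rather than
 * assumed to come from zeroed memory. */
char *dup_string(const char *src) {
    size_t len = strlen(src) + 1;   /* +1 once, in the initial computation */
    char *out = malloc(len);
    if (out == NULL)
        return NULL;
    memcpy(out, src, len);          /* copies the '\0' too */
    return out;
}
```

Keeping the +1 in `len` means later lines can't silently drop it, at the cost of `len` no longer literally meaning "string length".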


> Putting it on the first line is harder to screw up, but harder to read.

Mayhaps "len" should be renamed to something clearer?

