mx12's comments

Wow, this really brings me back. Found so many great songs/albums.

A fun story: a friend and I created a Chrome extension called Turntaste [1] that would suggest what to play, based on what had previously been played, and which songs to avoid. Basically, everyone who ran the extension helped scrape the play history and stats. I was hoping to apply some cool ML... well, cool for 2011... since I was doing my PhD in ML, but I ended up just using some simpler statistics-based approaches. My favorite room was something like “Trending Indie - No Mumford & Sons”.

It was also used to generate a few cool playlists, like a trending-indie Spotify playlist [2]. It's a bit frozen in time, since it stopped working when the site went down.

[1] https://turntablefmfans.wordpress.com/tag/turntaste/ [2] https://open.spotify.com/playlist/63h132rkg3oNhirWbJqGWZ?si=...
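
To give a flavor of what one of those "simpler statistics-based approaches" might look like (this is an illustrative sketch, not the extension's actual code, and the field names and numbers are invented), imagine ranking candidate songs by a smoothed ratio of the awesome/lame votes scraped from rooms:

    /* Illustrative only: score songs by a Laplace-smoothed ratio of
     * turntable.fm "awesome" votes to "lame" votes. */
    #include <stdio.h>

    typedef struct {
        const char *title;
        int awesomes;  /* upvotes observed in scraped rooms */
        int lames;     /* downvotes observed in scraped rooms */
    } song_t;

    /* Add-one smoothing keeps rarely played songs from getting extreme scores. */
    static double score(const song_t *s)
    {
        return (s->awesomes + 1.0) / (s->awesomes + s->lames + 2.0);
    }

    int main(void)
    {
        song_t songs[] = { { "Song A", 40, 5 }, { "Song B", 3, 0 } };
        for (int i = 0; i < 2; i++)
            printf("%s -> %.2f\n", songs[i].title, score(&songs[i]));
        return 0;
    }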


I have been doing the same thing. I've also augmented each section with video lectures from Khan Academy or other sources. For instance, their videos on the Jacobian were excellent for getting an intuitive understanding of it [1].

I also search for problems on the topic to help solidify my knowledge. You can almost always find a class that has posted problems, with answers, for a given section.

[1] https://www.khanacademy.org/math/multivariable-calculus/mult...
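
For anyone who hasn't watched them, the intuition those videos build is that the Jacobian is the best local linear approximation of a vector-valued function. In LaTeX notation, for f: R^2 -> R^2:

    J_f(x, y) =
      \begin{pmatrix}
        \partial f_1/\partial x & \partial f_1/\partial y \\
        \partial f_2/\partial x & \partial f_2/\partial y
      \end{pmatrix},
    \qquad
    f(p + h) \approx f(p) + J_f(p)\,h.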


It works via RSSI [1], and I believe you can calibrate it for your antenna/case design, not to mention any attenuation by the phone's antenna or your hand/body.

Quote:

"Phones or other smart devices can pick up the beacon’s signal and estimate the distance by measuring received signal strength (RSSI). The closer you are to the beacon, the stronger the signal. Remember that the beacon is not broadcasting continuously—it’s blinking instead. The more frequent the blinks, the more reliable the signal detection."

[1] http://developer.estimote.com/
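
To make the RSSI-to-distance step concrete, here's a minimal sketch in C using the standard log-distance path-loss model; the measured power at 1 m and the path-loss exponent are exactly the calibration constants discussed above, and the values below are made up:

    /* Log-distance path-loss model: d = 10^((P_1m - RSSI) / (10 * n)).
     * measured_power and n come from per-device calibration. */
    #include <math.h>
    #include <stdio.h>

    static double estimate_distance(double rssi, double measured_power, double n)
    {
        return pow(10.0, (measured_power - rssi) / (10.0 * n));
    }

    int main(void)
    {
        double rssi = -75.0;            /* observed signal strength, dBm */
        double measured_power = -59.0;  /* calibrated RSSI at 1 m, dBm */
        double n = 2.0;                 /* path-loss exponent (~2 in free space) */
        printf("estimated distance: %.1f m\n",
               estimate_distance(rssi, measured_power, n));
        return 0;
    }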


Speaking as someone who just left a startup to join Google, I was excited to get real stock options that, after vesting, could be exercised and sold.

It's a long road to go from options to real money, something I didn't realize before joining a startup. There are many points at which you could be locked in if the company is sold or other events happen.


That's awesome that you made it open source!

Maybe I'll try to implement one of my ideas for Tinder: A/B testing of profiles in different locations. Let's say I live in SF; I could set up a week's worth of testing with various different profile pictures and determine which one gets the most right swipes in different parts of the country. Then I could use that profile in my own location.


Sounds like a good idea for a service: figure out the optimal profiles for people with different combinations of pictures/text, and return the results. Tinder consulting.


IIRC the MNIST data has a mix of styles of 4s. I tried it with an open-top 4, and it worked correctly. I wonder if they just happened to sample a majority of 4s that have an open top.

Example: http://imgur.com/mRRz1L3


If you're interested in more camera hacking, here is a very interesting talk from last year's Black Hat about hacking security cameras.

Title: Black Hat USA 2013 - Exploiting Network Surveillance Cameras Like a Hollywood Hacker https://www.youtube.com/watch?v=LaI0xjeefpg


Here's a link to the full video:

http://www.dailymotion.com/video/xxxiij_moon-machines-2008-p...

Around the 37-minute mark they talk about the alarms on approach and how they dealt with them.


After I saw this guide a while ago, I got really interested in lock picking and ended up buying a kit. There's a decent subreddit, and it's a good resource for getting started. I purchased the kit they recommend, the PXS-14, and it works great. I remember picking my first lock in about 5 minutes and then spending another hour trying to do it again. It takes a while to get the feel right and become consistent.

Subreddit:

http://www.reddit.com/r/lockpicking

Plus their getting-started guide:

http://www.reddit.com/r/lockpicking/comments/bzq80/where_do_...

PXS-14 Kit:

http://www.lockpickshop.com/PXS-14.html


Peterson sells some good kits for beginners, and they are a bit more comfortable to use than the SouthOrd ones mentioned above:

http://www.thinkpeterson.com/picksets.html#LESS%20EXPENSIVE%...

Of course, it's down to personal preference; they are both great brands.


I don't think that Apple will completely get away from x86 for a long time. Attempting to emulate x86 on an ARM would be terribly slow.

I do, however, think that they will eventually include both ARM and x86 processors in the MacBook Air. That way, backwards compatibility is preserved and low-power apps can run on the ARM. The current MacBook Pros have dynamic switching of GPUs; there's no reason they couldn't use the ARM as a coprocessor or even run the full OS on it.

Here are a few technical points:

* LLVM - You can compile once for both architectures and then the JIT takes over, compiling for the specific architecture (take a look at LLVM's use in the OpenGL pipeline in OS X).

* Full-screen apps - When a low-power app is full screen, the x86 could sleep and the ARM could take over running the app.

* App Nap - If all x86 apps are asleep, switch over to running on the ARM processor exclusively.

* Saving state - It's possible to save the state of apps in OS X; a similar mechanism could be used to seamlessly migrate between processors.

This is pure speculation, but it is feasible. There would be many technical challenges that Apple would have to solve, but they are capable. The advantage Apple has is absolute control over both platforms.


> I don't think that Apple will completely get away from x86 for a long time. Attempting to emulate the x86 on an Arm would be terribly slow.

It's not like you're emulating an entire operating system: the operating system (and many libraries) are native, but the application code is emulated. It's faster than you think, and Apple has already done it twice: once in 1994, and once in 2005 (exercise for the reader: try extrapolating).

Apple's applications would be 100% native long before the ARM version shipped. Some intensive tasks (text rendering, audio/video/image encoding and decoding, HTML layout, JavaScript) would also be native in third-party apps, since you just have to write the right glue into the emulator. This would be a lot easier than the 2005 switch from PowerPC to x86, which involved emulating a system with 4x as many GPRs and the opposite endianness: ARM has 2x the GPRs of x86-64.
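
A hand-wavy sketch of what that "glue" could look like: before decoding guest instructions at some program counter, the emulator checks a table of known hot entry points and, on a hit, runs a native implementation instead. All the names and addresses here are invented for illustration:

    /* Illustrative shim table for an x86-on-ARM emulator. */
    #include <stddef.h>
    #include <stdint.h>
    #include <stdio.h>

    typedef void (*native_fn)(void *args);

    static void native_memcpy_shim(void *args) { (void)args; /* unpack args, call memcpy */ }

    typedef struct {
        uint64_t guest_addr;   /* address of the routine in the x86 binary */
        native_fn host_impl;   /* native ARM implementation */
    } shim_t;

    static const shim_t shims[] = {
        { 0x100003f00u, native_memcpy_shim },
    };

    /* Called by the emulator before it starts decoding at pc. */
    static native_fn lookup_shim(uint64_t pc)
    {
        for (size_t i = 0; i < sizeof shims / sizeof shims[0]; i++)
            if (shims[i].guest_addr == pc)
                return shims[i].host_impl;
        return NULL;  /* no shim: fall back to emulating the x86 code */
    }

    int main(void)
    {
        printf("0x100003f00 runs %s\n",
               lookup_shim(0x100003f00u) ? "natively" : "emulated");
        return 0;
    }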

Sure, a bunch of apps will see reduced performance. Some will break. But remember: Apple has only been on x86 for ten years. We had the same problems during the PowerPC->x86 transition: you had to wait about two years to get a version of Photoshop that ran on x86 + OS X.

I'm willing to bet that Apple has been testing OS X on ARM for years now.


>but the application code is emulated. It's faster than you think, and Apple has already done it twice: once in 1994, and once in 2005 (exercise for the reader: try extrapolating).

Well, in the PowerPC->x86 transition, x86 was the faster chip; the emulation cost was discounted by some amount. If you go from x86 to ARM, ARM is the slower chip, so there's never going to be an _improvement_ in performance compared to x86. I don't see why you're equating the two.


That is an interesting idea.

At one point I had a machine which allowed movement of data between apps running in Windows 3.1 on a DX4 and RISC OS on a StrongARM. I'm wondering how hard it would be with OS X (a far stricter OS than Win 3.1 or RISC OS) to allow a sort of process-level separation, where the OS and any ARM-compatible processes run on an ARM, but you keep an x86 around and spin it up for certain processes.


This sounds like it is ripe for a semi-custom chip from AMD, where they sell an ARM/x86 SoC.


  I do however, think that they will eventually include both arm and x86
  processors in the Macbook Air. That way, backwards compatibility is
  preserved and low power apps can run on the arm. In the current Macbook
  Pros they have dynamic switching of GPUs, there's no reason they couldn't
  use the arm as a coprocessor or even run the full OS.
Battery life of the MacBook Air is already outstanding. I'd want something more than just better battery life for all that complexity, but eventually it will all be just "taken care of" by the toolchain, so why not?

I think the low-hanging fruit would be running iOS apps natively on the MacBook Air.


No reason to JIT -- They already support fat binaries.


I'm skeptical about moving existing apps seamlessly between x86 and ARM processors, because you'd need guarantees about process memory layout that I don't think any current compiler makes. Imagine the memory image of a process running on the ARM chip. It has some instructions and some data:

    |        Data          |        ARM instructions        |
You could certainly remove the ARM instructions and replace them with x86 instructions. However, the ARM instructions will have hard-coded certain offsets into the data buffer, like where to look for global variables; you would have to be sure that the x86 instructions had exactly the same offsets. Another issue: if the data buffer contains any function pointers, then the x86 and ARM functions had better start at exactly the same offsets. And if any alignment requirements differ between x86 and ARM (I don't know if they do), then the data had better be aligned to the less permissive standard on both chips.
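
The function-pointer case is easy to demonstrate. In the C sketch below, a code address lives in plain data; after a cross-ISA migration, the replacement x86 function would have to sit at exactly the address the ARM build stored there, or the call through the pointer goes into the weeds:

    #include <stdio.h>

    typedef struct {
        const char *name;
        int (*op)(int, int);   /* a code address embedded in data */
    } handler_t;

    static int add(int a, int b) { return a + b; }

    int main(void)
    {
        handler_t h = { "add", add };  /* &add is an ISA-specific address */
        printf("%s(2, 3) = %d\n", h.name, h.op(2, 3));
        return 0;
    }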

None of these problems are impossible to solve. They could be solved easily by adding a layer of indirection, at the cost of some speed, and then Apple could go back and do the real difficult-but-fast implementation later.

However, why would it? When its ARM cores are essentially desktop-class, there's no need to have an x86 chip other than compatibility with legacy code. Looking at Apple's history, it seems pretty clear that it likes to have full control of its own destiny, and designing its own chips is a logical part of that, so having its own architecture could be considered a strategic move too.

So given the difficulty of implementing it well, and assuming that Apple eventually wants to have exclusively Apple-designed ARM chips in all of its products, if I were in their shoes, I wouldn't bother to make switching work. I might have a product with both kinds of chips, but I would just have the x86 chip turn on for x86 apps, and off when there were no x86 apps running, and know that eventually those apps would go away. (And because I'm Apple, I have no problem pushing vendors to switch to ARM faster than they want to, so this won't be a long transition.)

However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program. That gives Apple several neat capabilities:

1) They can optimize code for the specific microarchitecture of your computer. Maybe not a huge deal, but nice.

2) They can iterate on their microarchitecture without having to care about the ISA, because the ISA is an implementation detail. This is the technically correct thing that everyone should have done years ago (yes, I'm annoyed).

3) They can keep more secrets about their chips. It's obnoxious, but Apple would probably care about that.

So, there's my transition plan for Apple to move to its own chips. It probably has many holes, but the biggest one is still the question of what Apple gains from this. Intel still has the best fabs, and as long as that's true, there will be some advantage in sticking with them. Whether the advantage is big enough, I don't know. (And when it ends in a few years, then who knows?)


Old-enough programmers will remember the DEC VAX-to-Alpha binary translators. When DEC produced the Alpha, you could take existing VAX binaries, run them through a tool, and have a shiny new Alpha binary ready to go.¹

Given such a tool, which existed in 1992, it seems simple enough to do the recompile once on the first launch and cache it. Executable code is a vanishingly small bit of the disk use of an OS X machine.

Going forward, Apple has long experience with fat binaries for architecture changes: 68k→PPC, PPC→IA32, IA32→x86-64. I don't think x86-64→ARMv8 is anything more than a small bump in the road.

As far as shipping LLVM IR and letting the machines do the last step, that should make software developers uncomfortable. Recall that one of the reasons OpenBSD needs so much money² for their build farm is that they keep a lot of architectures going, because bugs show up in the different backends. I know I want to have tested the exact stream of opcodes my customer is going to get.

¹ I think there was also a MIPS to Alpha tool for people coming from that side.

² In the sense that some people think $20k/yr for electricity is a lot.


Yep, and Apple has already done dynamic binary translation once before, during the PPC to x86 switch.


And 68k to PPC.


  Going forward, Apple has a long experience with fat binaries for architecture 
  changes. 68k→PPC, PPC→IA32, IA32→x86-64. I don't think x86-64→ARM8 is anything 
  more than a small bump in the road.
Using the lipo [0] tool provided as part of the Apple developer tools, it's pretty easy for any developer to create an x86/ARM fat binary. Many iOS developers have used this technique to create libraries that work on both the iOS simulator and an iOS device.

[0]: http://ss64.com/osx/lipo.html
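
As a concrete (if, for an ARM Mac, still hypothetical) version of that workflow: the clang/lipo invocations in the comment below are the standard ones, and a trivial program can report at run time which slice the loader picked:

    /* Build each slice, then glue them together:
     *
     *   clang -arch x86_64 -o hello_x86_64 hello.c
     *   clang -arch arm64  -o hello_arm64  hello.c    (iOS toolchain today)
     *   lipo -create hello_x86_64 hello_arm64 -output hello_fat
     *   lipo -info hello_fat
     */
    #include <stdio.h>

    int main(void)
    {
    #if defined(__x86_64__)
        puts("running the x86_64 slice");
    #elif defined(__arm64__) || defined(__aarch64__)
        puts("running the arm64 slice");
    #else
        puts("running some other slice");
    #endif
        return 0;
    }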


> As far as shipping LLVM and letting the machines do the last step, that should make software developers uncomfortable.

Why? This is how Windows Phone 8 works (MDIL) and Android will as well (ART).

In WP8's case, MDIL binaries are ARM/x86 binaries with symbolic names left in the executable. The symbolic names are resolved into memory addresses at installation time by a simplified on-device linker.

Android's ART, already made the default on the public development tree, compiles dex to machine code at installation time on the device.


This was also the entire premise of the Transmeta CPUs.


> However, an even cooler move would be to make LLVM IR the official binary representation of OS X...

It's worth revisiting http://lists.cs.uiuc.edu/pipermail/llvmdev/2011-October/0437... ("LLVM IR is a compiler IR"), from a core LLVM developer, explaining why LLVM IR is unsuitable for this task.


Ah, thanks for posting the email. You're right. :-)


> However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program.

I've wondered the same thing. In this respect the IR is analogous to Java bytecode or .NET's CIL: they share conceptual similarities, allowing multiple languages to target the same runtime.

This could open up iOS to being more easily targeted by languages that aren't Objective-C. As long as it compiles down to LLVM IR, the "binary" becomes language-agnostic. (Actually, for all I know, things like RubyMotion do this today. I haven't delved into it to find out.)


> I'm skeptical about moving existing apps seamlessly between x86 and ARM processors, because you'd need guarantees about process memory layout that I don't think any current compiler makes. Imagine the memory image of a process running on the ARM chip. It has some instructions and some data:

There was a paper at ASPLOS 2012 where they did something like this, but for ARM+MIPS [1]. Each program would have identical ARM and MIPS code (which took some effort), with identical data layout.

1 - http://cseweb.ucsd.edu/users/tullsen/asplos2012.pdf


> However, an even cooler move would be to make LLVM IR the official binary representation of OS X

The IR isn't architecture-portable right now. I.e., you can't use it as a live intermediate language, because the code it produces makes assumptions about the target architecture before the final binary translation.

It would be fantastic if Apple fixed LLVM so the IR were portable. It would be amazing for general-purpose software if you could ship LLVM IR and have your end users compile it, or have web services do it for target devices on demand.
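
A concrete example of the non-portability: target details such as type widths are folded in while the IR is generated. Compiling the C below on an LP64 target bakes an 8-byte long (i64) into the IR, while an ILP32 target bakes in 4 bytes (i32), so the same IR can't serve both:

    #include <stdio.h>

    int main(void)
    {
        /* sizeof(long) is resolved at IR-generation time, not run time. */
        printf("sizeof(long) = %zu\n", sizeof(long));
        return 0;
    }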


Google's Portable Native Client essentially does something similar: https://developers.google.com/native-client/dev/


>However, an even cooler move would be to make LLVM IR the official binary representation of OS X, and compile it as part of the install step of a new program.

So a user installing Firefox or Chrome or some other complex application would need to wait tens of minutes before they can use their application? It's more likely they'll just reuse the existing dual-architecture approach, but instead of PPC/x86 it'll be x86/ARM…


OpenStep supported four processor architectures before it got ported to PowerPC, and used to support "quad fat binaries" that would run seamlessly on all four architectures.


> I don't think that Apple will completely get away from x86 for a long time.

The switch from PPC to x86 was less painful than many thought. I don't see any particular reason why the "fat binary" approach (which is technically gross, but quite practical) would not work for switching from x86 to ARM.

Developers will hate it, but the whole app store farce shows what Apple thinks of developers.


Nah, LLVM is too slow for JIT. However, the final compilation step could still be done at install time or link time.


Plus Apple's executable file format already supports bundling multiple binaries for each target arch. https://en.wikipedia.org/wiki/Universal_binary


I don't think they would necessarily build in both processors. They'll just assume that developers will update their apps very quickly.

The whole Xcode toolchain already supports x86_64 and arm64. It's trivial to build "fat" binaries with clang that run on both platforms.

