If you are interested in possible uses for this kind of technology, my R&D work until recently has been to find use cases for WiFi Aware and bring them to life.
Interestingly, the article at the bottom links to a USENIX 2019 (held Aug 14-16) paper with the title "A Billion Open Interfaces for Eve and Mallory: MitM, DoS, and Tracking Attacks on iOS and macOS Through Apple Wireless Direct Link".
Abstract:
"Apple Wireless Direct Link (AWDL) is a key protocol in Apple's ecosystem used by over one billion iOS and macOS devices for device-to-device communications. AWDL is a proprietary extension of the IEEE 802.11 (Wi-Fi) standard and integrates with Bluetooth Low Energy (BLE) for providing services such as Apple AirDrop. We conduct the first security and privacy analysis of AWDL and its integration with BLE. We uncover several security and privacy vulnerabilities ranging from design flaws to implementation bugs leading to a man-in-the-middle (MitM) attack enabling stealthy modification of files transmitted via AirDrop, denial-of-service (DoS) attacks preventing communication, privacy leaks that enable user identification and long-term tracking undermining MAC address randomization, and DoS attacks enabling targeted or simultaneous crashing of all neighboring devices. The flaws span across AirDrop's BLE discovery mechanism, AWDL synchronization, UI design, and Wi-Fi driver implementation. Our analysis is based on a combination of reverse engineering of protocols and code supported by analyzing patents. We provide proof-of-concept implementations and demonstrate that the attacks can be mounted using a low-cost ($20) micro:bit device and an off-the-shelf Wi-Fi card. We propose practical and effective countermeasures. While Apple was able to issue a fix for a DoS attack vulnerability after our responsible disclosure, the other security and privacy vulnerabilities require the redesign of some of their services." [1]
I got nothing to add regarding OpenDrop other than that I love interoperability, and that I love it when FOSS enables this.
I can't help but think that these seemingly numerous security flaws are a product of proprietary software development. There is the old "many eyes" idea for software bugs, but this applies even at the standards level. Did Apple not send out an RFC? Isn't this type of architectural-level screw-up exactly what you want to avoid with an RFC?
I'm glad Apple are taking the privacy issue to heart, but for every inch we've won in privacy, we've lost an inch in openness and interoperability. Apple is perhaps one of the worst offenders when it comes to vendor lock-in.
I use almost only Apple products out of sheer laziness (and honestly inertia.) At least their war with Qualcomm and NVIDIA creates some competition in their respective markets...
The "many eyes" hypothesis is routinely debunked when severe security bugs are found in things like the Linux kernel that have been there for years. The same is true for standards that end up being fundamentally broken at later dates. In the end, software and hardware are so complicated that we cannot currently build secure systems.
It's a tradeoff. The point is not MANY eyes, it's ANY eyes. Proprietary software has NO public eyes on it, zero, and the vendor must (1) report to you promptly when there's a new vulnerability, (2) produce a fix for it. Most vendors do neither until forced. How many undisclosed vulns does your vendor have? You'll never know.
Of course FLOSS has bugs, it's software, and ALL software has bugs. In the FLOSS case you know what everyone else knows, AND you can fix them, hire someone to do it, or choose not to use the software, all with that knowledge.
> Proprietary software has NO public eyes on it, zero
If that were true, security flaws would never be found in proprietary software by outsiders. And they are, so it's not true. Eyes have less visibility into the codebase, but people are looking and do find flaws.
> How many undisclosed vulns does your vendor have? You'll never know.
How many undisclosed vulns does RedHat, Canonical, or Mozilla have in their FLOSS software? You'll never know.
> Of course FLOSS has bugs, it's software, and ALL software has bugs.
Then "many eyes makes bugs shallow" is at least partly debunked and your "ANY eyes make bugs shallow" is debunked completely - otherwise the original developers would see every bug, in FLOSS and proprietary software.
> In the FLOSS case you know what everyone else knows
There may be bugs which nobody knows about. The claim "many eyes make bugs shallow" suggests that open source software has more eyes on the code, and that having more eyes on the code is all it takes to reduce bugs. OpenSSL turned out to have very few eyes on the code, and it wouldn't be too surprising if codebases with many eyes on them had the developers focused on the bits they were developing and not looking for security flaws.
Complexity counteracts the many eyes principle. That doesn't invalidate it. A large codebase that is difficult for one person to read will bury a bug for the same reason being open source reveals it.
What you're falling for is selection bias: bugs in open source software are more often publicised than when a private team discovers something and patches it without telling anyone. It's the same as an open source bug being fixed quietly. Like the so-called VLC vulnerability that turned out to be the fault of the tester's out-of-date system library, which had already been fixed upstream.
>The “many eyes” hypothesis is routinely debunked when severe security bugs are found in things like the Linux kernel that have been there for years. //
Surely to debunk the theory you have to show that proprietary software of the same vintage has fewer bugs?
AFAIK the many eyes hypothesis is that, as time progresses, fewer exploitable bugs will exist in software that has its source open for inspection than in comparable closed source software(?).
When an ages old exploit/bug gets patched that is the many eyes principle working; a piece of software can't get more secure (or otherwise improve) without patching old code, surely.
https://en.m.wikipedia.org/wiki/Linus's_Law This is the term notable hacker Eric S. Raymond (esr) popularized in his seminal work, The Cathedral and the Bazaar. Read it if you care at all about software engineering processes.
If you allow anyone to send you files over the air without authenticating you or them, there is no way to prevent the files from being modified in transit.
There's also this 2018 paper by the same authors: "One Billion Apples’ Secret Sauce: Recipe for the Apple Wireless Direct Link Ad hoc Protocol" [1]. Not sure which paper came first.
FOSS may have downsides, for instance it's free, but doesn't have iTunes built in.
I consider both of these advantages, but some users don't. I consider the benefits of open source to far exceed the perceived benefits of Apple's ecosystem.
Maybe, as developers, it's important that we teach our Apple peers about the problems they perpetuate by paying for walled gardens.
What about your Apple peer developers, who have made a deliberate and conscious choice to use Apple devices in their day-to-day life, and stable open source solutions in production / deployment environments?
Some of us used to live in the FOSS garden, and decided to trade extra money in exchange for getting a lot of time back and a shiny finish. Everyone's values vary.
I don't think your premise that you are saving time is correct.
Apple advertises to you the way luxury car makers advertise to car buyers, to make you believe you are getting a high-quality product. But it's going to be really hard to prove that things are actually quicker.
I think the Apple experience is frictionless for their target market.
Live in their ecosystem, and everything 'just works'. Watch, Phone, Tablet, Laptop.
With Linux/Windows you don't quite have that level of integration ...
You expect more from luxury items than with a $300 windows/linux laptop, a $100 fitbit, or a $200 android tablet - good luck with getting a similar level of interoperability between them...
I know you didn't explicitly say "you only use Apple products because you've been suckered in by advertising," but it's hard not to take it as the subtext, and it's something that's long rubbed me the wrong way.
They are not using AirDrop to bypass the GFW for themselves, they are using it to spread information to citizens that are otherwise normally subject to GFW and therefore only have government-approved or government-generated information available to them.
Mainland news portrays HK protesters as violent, rebellious youngsters who cause trouble and injury to everyone else with greedy demands for something better than what the mainland has. Such a story leads to minimal sympathy and curiosity.
Even knowing that this information is filtered, few will end up truly questioning it, and thus even when they leave the GFW, they will not know that there is conflicting information to find.
That's a very loose definition of the word "evade", and the headline is definitely misleading. The average person was probably expecting to find some sort of censorship evasion while currently being censored.
Censorship evasion is no longer trivial. Deep Packet Inspection (DPI) exists and makes it difficult to use even VPNs. Shadowsocks works but has limited UDP support out of the box iirc.
There are massive regional variations, so what I say is likely not universal, but my non-techie acquaintances still successfully use VPNs, in their case to use services like Facebook and WhatsApp.
I do not know what provider they use, but their knowledge of VPNs goes no further than knowing that "VPN = facebook access", so it would seem that "commoners" still manage, at least in some bigger regions.
And in the UK it's used to send dickpics to strangers on the train... OpenDrop requires you to set the security level to Everyone instead of Contacts. If you forget to switch back to the default, you get an instant reminder once you get onto public transport.
I wonder if it's more reliable than Apple's own implementation for macOS. It used to be rock solid – and between iOS devices it still is – but between Macs I regularly have to switch both to "Search for an older Mac" to make them see each other, with no explanation why.
Interesting, my own experience has been the opposite. From what I remember v1 was constantly broken, initial v2 was not much better but now everything is very smooth all the time.
I had massive issues with AirDrop 2 when it first came out. After many lengthy chats with Apple support it managed to resolve itself. All I can offer as a working solution is signing out of and back into iCloud on each affected device. This of course comes with the resync penalty.
I’ve found it very flaky on iOS - regularly try to transfer between my phone and my wife’s sitting next to each other and it will randomly not see the other phone or take minutes to find it :(
I've never had issues with iOS - discovery is near immediate every time. The only time I've had issues is when sharing with random people that have it set to contacts only and we haven't shared a contact, or if someone turned wifi off manually.
On the Mac though, it is a different story. Just yesterday my wife was trying to send me a PDF and I had to send it to my phone from her Mac, then dump it into iCloud Drive. I rebooted my Mac and then it worked again - but these are stock 2018/2019 Apple laptops, they shouldn't have these issues.
It is 2019, and it is quite surprising - and disappointing - that we STILL haven't universally solved the means to easily, securely, and (yes, I'll use this term again) universally share files. I wish we could share files in a peer-to-peer fashion securely without hindrance of mobile platform, nor blockage of network MiTM, etc. </sigh>
Have you taken a look at magic-wormhole[1]? I've started using it recently and it's insanely easy to use.
It does have a centralised signalling server for key exchange between peers, but it does attempt to do peer-to-peer data transfer (only falling back to a TURN-style relay if both clients are behind NATs and aren't on the same local network). An explanation of the cryptography and design was given at PyCon 2016[2]. It also has built-in optional Tor support (though I'm not sure if it attempts to use an onion service for data transfer).
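For the curious, the core trick behind the short wormhole codes is a password-authenticated key exchange (PAKE): both sides turn the same low-entropy code into a strong shared key, and an attacker in the middle only ever gets one online guess. Below is a minimal sketch of that idea in Python using the standalone spake2 library (by the same author); it is illustrative only, not magic-wormhole's actual internal code path, and the code string is made up.

    # Illustrative PAKE sketch with the 'spake2' library (pip install spake2).
    # In magic-wormhole the "password" is the human-readable wormhole code and
    # the two messages travel via the rendezvous server; here both sides run
    # in one process just to show the key agreement.
    from spake2 import SPAKE2_A, SPAKE2_B

    code = b"7-crossover-clockwork"   # a wormhole-style code (made up)

    alice = SPAKE2_A(code)
    bob = SPAKE2_B(code)

    msg_a = alice.start()             # sent to Bob (may be observed in transit)
    msg_b = bob.start()               # sent to Alice

    key_a = alice.finish(msg_b)
    key_b = bob.finish(msg_a)
    assert key_a == key_b             # both sides now share a strong session key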
Magic wormhole is good, and I've used it before, but it's never as fast as it could be. Something like piping to netcat is always faster. I know it's possible (albeit with a lot of work) to do this over HTTP, but google drive is probably the only site that can mostly saturate a connection. Can any one link more info on how exactly they achieve this? The only trick I know of is a better TCP congestion control algorithm.
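For reference, the "pipe it to netcat" baseline is roughly this much Python (stdlib only; the host and port below are placeholders): no rendezvous, no key exchange, no relay, no encryption, which is exactly why a raw socket tends to win on throughput.

    # Bare TCP file transfer, netcat-style. No security whatsoever.
    import socket

    def send_file(path, host="192.168.1.20", port=9000):
        with open(path, "rb") as f, socket.create_connection((host, port)) as s:
            s.sendfile(f)                      # hand the whole file to the kernel

    def recv_file(path, port=9000):
        with socket.create_server(("", port)) as srv:
            conn, _ = srv.accept()
            with conn, open(path, "wb") as f:
                while chunk := conn.recv(1 << 16):
                    f.write(chunk)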
I did not know magic wormhole existed. I made a simple nodejs implementation of a very similar app. It used WebRTC so maybe it could be faster. Let me know if you want to try it out!
This is one of many reasons why I have a terminal on my phone, I can just apt install magic-wormhole and use it like on any other system.
(The technical reader will note that a terminal does not give you apt, but mentioning that I have Debian running on the phone is more confusing, as it sounds like I replaced Android (which I did not) or maybe that it costs a lot of battery (the tools are idle when not in use, unlike many apps unfortunately...).)
Sure, I can use wormhole on iSH on my iPhone/iPad if there are no alternatives. But there are alternatives, so I’ll definitely stay the hell away from it given the terrible ergonomics.
Maybe someone should write a magic-wormhole mobile client. It wouldn't need a complicated UI at all, and you could (try) to use Kivy[1] to avoid having to rewrite all of the Python bits. I might even try to do it as a weekend project, actually (though I suck at mobile development -- anyone else would probably be a better choice ;]).
I used AirDrop on an airplane at cruising altitude over the ocean, with no onboard wifi available. It worked just as perfectly as it did at home.
The pilot had taken a video of the Falcon 9 second stage separating from the first on its first launch that docked with the ISS. He then walked through the cabin sharing with anyone that was interested.
That is correct, AirDrop works without Internet. It's unfortunate that Google and Apple have created these respective systems with no cross-compatibility. There's really no technological reason for it, but an obvious business reason.
And if it was purely peer-to-peer people would complain that it doesn't work in their particular use case. Personally, it's rather nice sometimes to 'fire and forget' a file transfer with a tool like this. All I need to worry about is if I successfully uploaded the file. I don't need to worry about their internet connection, keeping some daemon running on my side, keeping power on my side, remembering to delete that file that I only kept around so somebody could download it from my computer, etc.
The point is, options are good. Most of us subscribe to the Unix philosophy of "Write programs that do one thing and do it well", so why are so many advocating for a panacea solution that does everything for every use case?
I just wish they'd hurry up and make the android app useable. I signed up for the beta and was pretty disappointed as my uploads were cancelled if my screen locked or, horribly enough, rotated ...
Android is Linux, you can always just install it the normal way and use it from a terminal. Maybe that's what you mean by unusable (a terminal isn't very user-friendly, but it solves the file transfer / app availability problem).
If I remember correctly, you can configure (I don't know what the default is) to power save or even turn off WiFi on Android when you turn off the screen. Are you sure that's not interfering with this beta app?
Opera Unite did it (and that's about the fifth time this year I've mentioned Opera Unite in response to a comment bemoaning a lack in currently embedded internet systems/apps).
Personally I've been using Nitroshare for years in my home network and it works like a charm (between Windows, Linux, and macOS). Android is also supported, but not iOS (I saw someone created an iOS app, but I doubt it works still).
Bluetooth file sharing works fine on just about every phone I've used. Bluetooth just isn't built for high-bandwidth applications like this, meaning that while it's fine for a photograph, you will have issues with larger transfers. That's not a flaw in the implementation; Bluetooth just isn't built for high transfer speeds. It is built to transfer enough data and operate at low power, not to be just another wireless NIC.
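A back-of-envelope comparison makes the point. The throughput figures below are rough assumptions rather than measurements (classic Bluetooth EDR manages on the order of 2 Mbit/s in practice, while a Wi-Fi based link can reach hundreds of Mbit/s):

    # Rough transfer-time comparison; rates are assumed, not measured.
    def transfer_seconds(size_mb, mbit_per_s):
        return size_mb * 8 / mbit_per_s

    for name, size_mb in [("5 MB photo", 5), ("500 MB video", 500)]:
        bt = transfer_seconds(size_mb, 2.0)       # ~2 Mbit/s practical Bluetooth
        wifi = transfer_seconds(size_mb, 200.0)   # ~200 Mbit/s assumed Wi-Fi link
        print(f"{name}: ~{bt:.0f} s over Bluetooth vs ~{wifi:.1f} s over Wi-Fi")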
https://files.google.com/ is a standard and it works on the vast majority of phones sold this year, by every manufacturer except Apple. Checks all 3 of peer-to-peer, secure, no MiTM network blockage. It's only 11MB.
Many popular file manager apps on android have peer-to-peer xfers as well, via WiFi direct, etc.
EDIT - As people are pointing out this isn't universal because it doesn't work on Apple devices or desktops / laptops, but it's as close as I can think of currently.
- "is a standard"? What does this mean? Google Files is not a standard of any kind.
- others have mentioned that this is very far from universal. Whatever about Apple devices, you're also discounting all non-mobile devices. Transferring files easily from computer to phone is probably the most common use-case. Yes, you can use Google Drive for that, but that's neither p2p nor seamless.
- the above commenter mentioned wanting a "secure" solution. That's a bit of a subjective term, but I would guess, at least on HN, the typical one would be e2e encrypted, and private. Google Files is neither of these things.
- the p2p function is severely limited in that it requires both a Google account, and location services to be enabled. Neither of which are necessary to initiate a p2p file transfer.
(as another commenter has mentioned), if we're just looking for "the closest thing", I would suggest https://send.firefox.com/ It isn't p2p, but it is secure and universal, so it ticks a lot more boxes overall.
Almost. People need to install the app to use its transfer functionality, and it does not work with computers.
We're still far from the universal, no-install solution that should've existed for so many years now.
Well, no install basically means every OS vendor needs to ship a proper and compatible implementation (hah, yes, that'll surely happen), or you need something web-based, which means it needs an internet connection for bootstrapping.
It doesn't matter to me how many Android users there are globally when we're talking about short range file transfers. I'm not bluetoothing files to somebody in India.
The US's market share split is more like 55% Android and 45% iOS, so anything that doesn't support both platforms is going to fail half the time here.
The situation isn't being helped by another proprietary solution that won't be cross-platform. (I'm an iOS dev, and I don't see integration of this being possible without using private APIs, and it would also fall under the 'Duplicate existing service' issue.)
I'm guessing someone will take a $10 ESP32 chip, put this code on it, and just drop it in some hidden location, where it sends images to any open AirDrop receiver that passes by.
ESP32 theoretically can provide all of the hardware requirements (WLAN monitor mode, BLE) but there is one missing part:
> Triggering macOS/iOS receivers via Bluetooth Low Energy. Apple devices start their AWDL interface and AirDrop server only after receiving a custom advertisement via Bluetooth LE (see USENIX paper for details). This means, that Apple AirDrop receivers may not be discovered even if they are discoverable by everyone.
If someone reverse engineers the BLE advertisement, then yes, they could build such hardware.
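You can already watch for these advertisements from a laptop; actually sending the trigger frame is the part that still needs reverse engineering. Here is a hedged sketch using the third-party bleak library that just listens for Apple manufacturer-specific BLE advertisements and flags the message type that public research (and the USENIX paper) associates with AirDrop. It only observes; it does not wake anyone's AWDL interface, and the constants should be treated as assumptions.

    # Passive BLE scan for AirDrop-style Apple advertisements (pip install bleak).
    import asyncio
    from bleak import BleakScanner

    APPLE_COMPANY_ID = 0x004C   # Apple's Bluetooth SIG company identifier
    AIRDROP_MSG_TYPE = 0x05     # Continuity message type reported as AirDrop

    def on_adv(device, adv):
        payload = adv.manufacturer_data.get(APPLE_COMPANY_ID)
        if payload and payload[0] == AIRDROP_MSG_TYPE:
            print(f"AirDrop-style advertisement from {device.address}: {payload.hex()}")

    async def main():
        async with BleakScanner(on_adv):
            await asyncio.sleep(30)   # listen for 30 seconds

    asyncio.run(main())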
Could this technology be used to create a "shadow" internet/network/messaging service where devices connect and communicate directly with each other? That way governments couldn't just block internet access or services during demonstrations.
XMPP over Zeroconf (https://xmpp.org/extensions/xep-0174.html) has already existed for many years and at least previously has been trivial to enable on Ubuntu (I haven't tried it recently).
It works with ad-hoc wifi with no other infrastructure requirement or Internet connection, just the same as AirDrop. What more do you require to meet your definition of "disparate peer to peer wireless mesh"?
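For a flavour of how little XEP-0174 needs from the network: presence is just a DNS-SD record announced on the link. A rough sketch with the python-zeroconf library follows; the name, address, and TXT values are illustrative, and a real client should publish the full set of keys from XEP-0174.

    # Advertise a link-local XMPP (XEP-0174) presence record via mDNS/DNS-SD.
    # Requires 'zeroconf' (pip install zeroconf).
    import socket
    from zeroconf import Zeroconf, ServiceInfo

    info = ServiceInfo(
        "_presence._tcp.local.",
        "alice@laptop._presence._tcp.local.",
        addresses=[socket.inet_aton("192.168.44.10")],   # your ad-hoc/link-local address
        port=5298,                                       # conventional serverless XMPP port
        properties={"txtvers": "1", "1st": "Alice", "status": "avail"},
    )

    zc = Zeroconf()
    zc.register_service(info)   # peers on the same ad-hoc network can now see us
    try:
        input("Advertising presence; press Enter to stop...")
    finally:
        zc.unregister_service(info)
        zc.close()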
I thought we were talking about enabling technologies, rather than specific clients, and that it's a given that a suitable and widely deployed client doesn't currently exist. I consider automatic configuration of an ad-hoc network to be a relatively trivial client implementation detail :)
> I consider automatic configuration of an ad-hoc network to be a relatively trivial client implementation detail
Your statement may be tongue-in-cheek, but this sort of attitude is why we don't have this deployed on devices en-masse. Non-technical users want something that just works with one tap, not something that requires them to manually set up an ad-hoc WiFi network every time they want to use it.
Sorry, I don't think I was clear. I'm saying that the technology exists and is capable and established. I had intended to acknowledge that client implementations and UX is exactly what we're missing. I'm not apologizing for that; I accept that this is the current state of affairs and that it is the sole reason we don't have this widely today.
My original point was simply that the technology exists, and pointing out that there isn't a decent client doesn't negate my point.
It is unfortunate that our ecosystem is arranged such that a decent client hasn't happened.
A government - or even just a sufficiently motivated individual - can trivially jam all phone, wifi, bluetooth etc communication.
Or they could just 'listen in' to record the signatures of all detected phones, so they can work out and identify who is in the crowd etc.
Military systems use frequency hopping etc and would be a more difficult target. But doubtless motivated governments could hamper their use in crowds too. Here's a fun comment I just found when I googled that: https://www.quora.com/Is-noise-jamming-an-AESA-radar-possibl...
Absolutely, but I can ask a bit more nuanced question.
In any society, there is state-"friendly" communication and state-"unfriendly" communication. There are also state-"essential" communications, like state propaganda [1]. Is it possible to build communication networks that allow for secure and anonymous state-unfriendly communication, such that trivially jamming it also jams state-friendly and state-essential communications?
As you might realize, a yes answer to this question is sufficient for protestors' communication needs. It is not the harder question of completely unblockable communications - which, as we know, is impossible. Of course, in the real world, the answer to my question might not be a "yes/no" but a resource tradeoff.
[1] which every country in the world, democratic or dictatorial, engages in. If you don't agree, please read Manufacturing Consent.
I don't believe so. I think the state would be comfortable denying phone and wifi usage to protesters congregating in public places, and the state has plenty of 'state-essential' communication mediums like loudhailers, billboards, riot police beating their truncheons against their shields etc, to fall back on.
That AirDrop does work in the Hong Kong protests really just says that the state wasn't really prepared, but they probably will be next time.
Although, really, China is so far ahead in the practical application of face recognition and making protesters disappear or arranging counter-riots etc that I guess airdrop is not really where they think the fight goes?
> I think the state would be comfortable denying phone and wifi usage to protesters congregating in public places
I don't entirely agree. It's a resource tradeoff scenario where the state has to expend political-capital (because they are also blocking people living nearby) to prevent the protestors from communicating. In fact, the protestors in Hong Kong have been protesting in different neighborhoods so non-protestors can see first hand the brutality of the state.
I agree with your larger point though that communication jamming is probably a minor point right now in the Chinese state's gameplan for dealing with these protests.
For what it’s worth, I have been able to get the Yggdrasil Network (https://yggdrasil-network.github.io) to peer over AWDL, allowing nearby Macs to mesh without even being connected to the same Wi-Fi network, or any network at all.
It's not perfect - there are trade-offs, like how the wireless performance is reduced somewhat when AWDL is active due to channel hopping, and how AWDL expects a single node to play the role of clock sync source. It's also not very well tested yet.
However, it works and in theory it allows an infrastructure-free IPv6 mesh network to be built ad-hoc.
They found code for something like that in the new Google Play Services bundles ( https://www.androidpolice.com/2019/06/30/google-fast-sharing... ). They theorize in the article that it uses the same technologies as AirDrop, so they might be compatible - though I doubt it.
I use https://dro.pm a lot to share files or links. Since the links are super short (I just made https://dro.pm/a --- note that it'll expire in 12 hours), you can just give the link to someone over the phone, put them in a presentation, or just share files independent of any operating system.
I even use it between my laptop and my phone fairly frequently, since the top suggestion on my phone keyboard in the browser is dro.pm and I just have to add a slash (long-press "m") and a letter. It's quicker to use dro.pm than to open a chat with myself or send myself an email or something.
Of course this is just protected by https, and although it is source-available and the links are really gone after expiry (or when you edited it, the old contents are irretrievable), magic-wormhole is superior in that you don't have a trusted third party. For the cases where the other party doesn't have magic-wormhole installed, this might be helpful. This also alleviates the requirement of WebRTC for both parties to be online simultaneously.
If I understand it correctly, the rendezvous server is not a trusted third party, just a message relay. They wouldn't be able to read or modify your file transfer's contents.
Knowing Apple, such an effort would likely be destroyed by an army of lawyers the moment you bring out an app that provides such features.
The Apple ecosystem is very closed and Apple will fight tooth and nail to keep it that way. Removing vendor lock-in would, after all, allow users to try switching to another brand without an enormous amount of hassle.
Apple's open-source mDNS/DNS-SD implementation mDNSResponder underlies Android's NSD API, so I could one day see seamless interoperability over Wi-Fi Aware being a thing. I wouldn't bet on it happening, but given Windows (which ships with an mDNS/DNS-SD stack) and Android will be doing it before too long, I wouldn't be surprised to see Apple join the party.
I hope we'll one day see proper support for mDNS in Android. Devs shouldn't have to go out of their way to use the special NSD API to provide mDNS functionality in their apps. It should just work at the system level, which, to be clear, it does not. It's been years and years in this state, so I can't be as positive.
Incidentally, Wi-Fi Aware is based heavily on AWDL but the Wi-Fi Alliance made a number of breaking protocol changes in the Wi-Fi Aware standard. This isn’t exactly Apple’s fault - it could have been interoperable.
> The Apple ecosystem is very closed and Apple will fight tooth and nail to keep it that way.
In general you are right, Apple is very adamant about keeping full control over their ecosystem (and locking users in). They even sponsor a C compiler project so that they can avoid gcc. There are exceptions though, like AirPrint which is a marketing name for open standard technologies: https://wiki.debian.org/AirPrint
The switch to llvm/clang was about more than just avoiding gcc… it was also about filling holes in gcc’s functionality. Swift for example is heavily rooted in llvm/clang because at least at the time, llvm/clang was capable of a lot of things that gcc simply wasn’t, and the arcane nature of gcc’s innards made it unnecessarily difficult to add missing features to. Of course, GPLv3 poses issues too, but that’s only a single factor among several in Apple’s decision to deeply invest into llvm/clang.
Apple wouldn't allow a solution that's integrated within the OS, but if you want to share data between devices, AirDroid [1] is one such free service. It is available on multiple platforms, including Android, iOS, macOS, Windows and web. Use web.airdroid.com on devices/computers where you don't want to install the application.
Exactly. They are well aware of the fact that Android users are left out in the cold. They are only able to do this because of their market position. If they really cared all they would have to do is create an iMessage API and the community would build it for them, but they would rather try and drive iPhone sales by building a walled garden.
I mean, wink wink, it's exactly what they wanted to do. Apple wanted a closed garden, they built it, and they won't allow anybody in from outside, and for now it's working, so, for them, there's no problem.
Steve Jobs famously, and impulsively, claimed they were going to release FaceTime as open source - so impulsive that he never even checked with their lawyers first, because it never happened due to legal/licensing issues, not because they didn't want to.
Do you think that's true? It sounds exactly like a publicity stunt to ensure a big launch in the face of having no answer to "is it interoperable?".
If you say "yeah, sure it is, we're even releasing it as open source", then you prevent the product from falling at the first hurdle because people want an interoperable solution. Once you have adoption, the network effect carries you through.
Sounds like most probably a standard corporate lie by Jobs; do you have evidence to the contrary?
I guess you mean something that is first party on both platforms? Otherwise SMS-like communication is obviously a solved issue (Signal/WhatsApp/Messenger et al).
> Does Android have something similar to iMessage?
Kind of-ish. RCS support is rolling out slowly. Of course there is a big difference in that RCS is not end-to-end encrypted. Also, there is nothing stopping Apple from implementing RCS as well, but I doubt they will.
If Apple does adopt RCS, it's likely only as a fallback for iMessage, much like SMS today. And rather than Apple abandoning control over their messaging platform, I'd sooner bet on Apple releasing iMessage and Facetime for Android with a small subscription fee. Probably in the range of US$1-$2/month.
If there's one thing that Apple hates more than lost profits, it's handing over control of anything to carriers. Cheap-subscription iMessage/Facetime for Android would strangle RCS/Duo in the cradle and put a thumb in the eye of the carriers, while bringing in some steady and significant regular revenue. Let's not forget that Apple is already offering a subscription service on Android, Apple Music. They're not averse to the concept.
This looks interesting. I'm a bit confused though. It seems like all the pricing plans are structured in a way that you're either only paying for iOS devices or for every kind of device except iOS ones. Does this mean transfers between iOS and non-iOS devices aren't supported? Or do I have to buy two plans in order to do it?
edit: Okay, on a closer look, it seems like you do need two licenses, because the iOS license can only be purchased through the iOS app. Just wish they'd provide a brief trial so I could test it.
edit2: Nevermind, apparently, if you install the apps, you can use them in a limited fashion before buying. It doesn't say anywhere what the limits are, but it says there are limits.
One of the limits is that you can't choose where your downloads go. Also, there are now ads on the unpaid version, which I think weren't there before.
I grew to dislike the app after running into the limits/ads.
It does work, but it feels cheap and unpolished. There's no way I'd ever buy one of the pricing plans. I just don't feel like I would be getting my money's worth, especially when I could just go through a more involved solution.
Ah, okay. I'm considering buying it because it fills a certain need (transferring files between my iOS and Android/PC devices easily and quickly). 10€/year for something that works well, takes a minimum of tapping buttons, and just doesn't get in my way seems worth it. But then I'm somebody who'll pay 20€ for an app if it means I "own" it and it solves whatever problem I have.
Maybe because it's not free: $5 a year for 4 devices, but only of the same type; you cannot install it on your Mac and your iPhone with only one licence.
So if you want to transfer between Mac and iPhone, it's at least $10 a year.
And the other big problem - that is $5 a year just for your own devices. I have plenty of ways to move files from one machine to another, but if I have a file that I want to give to a family member sitting next to me, or a co-worker across the table, the chances of him having the same service are zero unless I sell them on it. With AirDrop I just look at what hardware they have and immediately know if it is compatible.
Let's talk for a minute about why all the Apple things aren't open. There is zero about iMessage or AirDrop that should be proprietary. The only reason I know of is vendor lock-in, and that stinks for users. It would be way more helpful to way more people if these features were ubiquitous, open, and standards-based like SMTP or IMAP is for email. We wouldn't accept an Apple-only iMail, so why do we accept iChat and iPhotoShare?
What does Apple sell? A commodity computing device running undifferentiated software? Or the experience of a holistic tool?
When someone distributes a Messenger by itself, you don't ask why it isn't open. You don't ask why the hardware device isn't open. Why should a vendor unbundle the two just to make you happy?
What if it doesn't really unbundle, what if the capabilities combine to offer most buyers something they value more than pieces parts?
I totally get that argument for things that aren't commodities. Chat and file share should be interoperable services in 2019, like email is and has been for ages (though not always, as any GroupWise or BBS user will tell you). The vendors can bundle and have closed source implementations, but by definition they should work with any provider. But I feel the same way about "you can only iMessage other iPhones" as I would if I could only email other iPhones (or, as a better analogy, if emails to non-iPhones were reduced in functionality). And yes, they should make me happy - I'm the customer.
I agree that they should be open, but there are some advantages to the current model besides vendor lock-in.
Requiring a device certificate to communicate puts a pretty high bar on the integrity of the system. Being closed also allows unrestricted innovation at a faster pace.
It’s very difficult to add features once an open standard is out. Just look at the evolution of the major web browsers for example.
> Being closed also allows unrestricted innovation at a faster pace.
Theoretically, but has there been any actual new development in AirDrop, though? From the outside it seems like it's been the same since it was released, but my only Apple device is my iPad Pro so I don't really have a use for it.
Yes, but it's been out for 8 years; surely if being closed allowed quicker development, we would have seen something visible from the outside in that time.
I haven't looked at the OP yet, but as convenient as AirDrop is, I find its reliance on both BT and WiFi confusing. One needs both devices to be connected to the same network to be able to drop stuff.
A few days back my home router broke down and I was unable to send URLs from my iPhone to my Mac just because there was no common network.
You just have to be within bluetooth distance; AirDrop doesn't require the same wifi network. It does require wifi to be enabled, because AirDrop creates its own wifi network side channel for the actual transferring of files.
Some (more and more) people simply have it disabled, because 4G is faster and more reliable than wifi. And if you don't have any Bluetooth devices, it will save some battery life.
This is very interesting to me. Does anyone know if there are material differences between Apple's and Android's implementations (in the battery savings sense)?
If you use the controls that slide down from the top of the screen (I'm at a loss as to what Apple calls it) and "disable" Wifi it simply disconnects from the associated AP and won't reassociate. It will turn itself back on at 7 AM tomorrow morning.
"Disabling" Bluetooth is similar, it will disconnect from paired devices, but BTLE is still available.
You have to use airplane mode, or go in to the Settings app to fully disable it. If that isn't discouragement, I don't know what is.
I have mine set to Contacts Only, and regularly use it to move photos between iPhone and iPad when on flights (and indeed other locations) without wifi or cellular.
It's sad there is a certificate involved, so there can't be a 100% compatible open alternative. I use Linux with KDE and KDE Connect, which offers "send file" functionality from Android, and it's enough for my use case.
I wouldn’t consider that legitimate at all. Legitimate airdrop is its exact intended use case: p2p transfer between people who are both consenting into it.
Ball's in Android's court to deliver some kind of p2p connectivity that works beyond Android-to-Android. Can't happen soon enough. Stop playing with yourself & start doing real computing, Android.
What a dumb reply. WebRTC works OK on the web for p2p. But how do people in the same area transfer files with each other? There are huge rendezvous problems that WebRTC comes nowhere near addressing.
God, I wish people would stop using Python for these sorts of things.
It is an okay language, but after tracking down why it doesn't build, and contemplating messing around in my system by either installing older versions of libraries or fiddling with symlinks, I stopped and asked myself "really? I want to spend my time fixing this?" and just deleted the entire clone of the git repo.
Python is a nice language and all, but it is not a language suitable for writing applications that you distribute. (I wish the Python core developers would devote some time to making Python less horrible for distributing applications, but after around 30 years, I don't think so.)
Except to add that Python has issues but is clearly winning hearts and minds for a reason. It’s wonderful that an open source language has become so powerful, versatile, and widely adopted.
I’d take this era of computing over the days of ActiveX and Flash being the ubiquitous approach to releasing software.
I don't use Python, but I've never had any issues with any distributed Python app. I have, however, had endless problems with C and C++ apps.
The C way of referring to header files and libraries on the host system invariably leads to situations where the app wants to use a specific version that your system doesn't have. And we're not necessarily talking about system libs, either. Apparently authors thought the only way to mitigate the problem was to invent Automake/Autoconf in order to sniff what your system is capable of. (The saner solution for non-system libs would be to "vendor" your dependencies inside the app's source tree.)
Python has that pretty much solved with PIP. (Dependencies can still be a problem if a package uses the C way to link to things like Readline or OpenSSL or whatever.)
I deal with biology researchers trying to install various analysis programs. Python has caused me some pain recently. It's not easy: some packages use Python 2.7, some use Python 3. Many use different environments (pyenv/conda...).
I ended up with a separate environment for each package..
To be fair java based software install isn't much better.
R is oddly a standout, in ease of installing packages. (Except the one time it didn't work, but in this case it wasn't much worse than anything else).
In general software distribution could be made much better.
I honestly haven't run into this issue in a long time. `pip install --user` is one of your friends. Just using the official python:3 docker container is another. If you really want, you can even go back to virtualenvs.
npm was also really bad about nothing building or working a few years back. It's improved, and there are alternatives like yarn. Rust/Cargo has this issue as well (whenever I attempted to pick up some Rust; every example I found would break -- constant language changes were an issue; not sure if that's still the case).
Package management is a big problem in general, but we have solutions like the ones I've mentioned. This is a bad argument for not using Python. I honestly think this type of application is fine in Python (you might need a privileged container if you go the docker route; I wasn't sure how low-level the Wi-Fi stuff it needs is).
What language do you recommend for this type of application and why?
Primarily any language that can produce executable binaries. Preferably statically linked binaries so that you can ship a unit that will not depend on the state of the system you try to run things on.
(With today's disk and memory sizes, dynamically linked binaries aren't really as relevant anymore, since the often trivial cost in size more than makes up for the nontrivial cost of having to fiddle around to make things actually work.)
Wasn't the original argument more about allowing the system to provide patches to shared libraries, therefore moving the burden of patching to sys admins instead of the developer of the package?
It's possible to produce a self-contained executable with Python; there are multiple solutions for this, it's just not part of the core language.
I agree that the situation isn't perfect in the python world but it's actively being worked on and I think PyOxidizer looks like one of the most interesting recent developments in this space: https://pyoxidizer.readthedocs.io/en/latest/
Some other alternatives (depending on your use case, e.g. target platform) are PyInstaller, py2exe, py2app, cx_Freeze, Shiv, PEX (basically tooling for native .pyz), XAR, Nuitka (compiles Python into a native binary), pynsist (creates Windows installers), and PEP 441 style .pyz (executable python archive, can easily vendor in dependencies; see the sketch below). Then there are tools like fpm if you want to create packages for deb, rpm, FreeBSD, macOS .pkg, pacman, tar archives, etc.
I've used some of these in enterprise settings building rich GUI-applications being distributed to end users who have no idea of what Python is and to whom underlying technology choices are invisible.
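Since PEP 441 style .pyz archives came up in the list above, here is a small sketch of that route using only the stdlib zipapp module. The myapp/ layout and cli:main entry point are hypothetical; dependencies would be vendored into the directory first (e.g. with pip's --target option).

    # Build a single-file, runnable .pyz from a directory (stdlib only).
    # Assumes ./myapp contains your code, its vendored dependencies, and a
    # module cli.py exposing main().
    import zipapp

    zipapp.create_archive(
        "myapp",                              # directory whose contents become the archive
        target="myapp.pyz",                   # run with: python myapp.pyz
        interpreter="/usr/bin/env python3",   # shebang, so it can also be chmod +x'd
        main="cli:main",                      # entry point inside the archive
        compressed=True,                      # available since Python 3.7
    )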
> What language do you recommend for this type of application and why?
This is the #1 reason why .NET Core gets so much of my mindshare these days. If I write something and put it out there, it will Just Work (TM) 99.99999% of the time. As soon as the user runs `dotnet build` or `dotnet run`, it'll automatically go grab the exact right versions of packages from NuGet and set everything up locally. The only time anything tricky ever happens is if some third-party library doesn't ship a native library dep for each platform; that's rare, but does occasionally happen.
You're right, and the other half of this is the hoops you jump through as a developer to target as many Python releases as possible. It's insanity.
I built all of Cronitor in Python, still love it, but when we started building server agents it was not a hard decision to leave Python behind. (In this case, for Go)
I do agree that's an issue, although I wouldn't put it as drastic as you did.
I was working on a Flask project many years ago and it didn't seem straightforward to "vendor" dependencies with the `pip install` paradigm. Guess I was wrong.
There's a reason why virtualenv and similar tools exist. Another interesting point: Gentoo Linux (which uses Python in the Portage package manager) doesn't support installing modules system-wide by default, presumably to avoid breaking the system package manager.
People use Anaconda to avoid this and get a pretty seamless experience. It lets you use multiple Python configs on the same system and manages that for you.
I learned this when taking a computational finance class, specifically to figure out why that community and the quant/AI/ML community gravitated around Python.
Python's advantages here date from about 12 years ago, when the syntax was friendly but other similarly syntax-friendly languages had a lot of overhead and other compromises. But these differentiators are mostly non-existent today, and Python is the only one with basically two different programming languages (py 2.x, py 3.x) operating under a single "Python" brand.
I agree, it really is too bad there isn't a better way of distributing Python. I've not had much of an issue with it for console-based, pure-Linux stuff, mind you; in my experience it tends to fall apart when you add GUI elements and/or go outside of Linux.
Some examples:
* 1-tap file transfers: https://darker.ink/static/media/uploads/08_awarebeam_1.mp4
* Sharing presentations, images and drawings: https://darker.ink/static/media/uploads/05_meshpresenter_1.m...
* Playing Quake 3 (OpenArena): https://darker.ink/static/media/uploads/02_openarena_1.mp4
If you want to know more details, this talk is a good starting point:
https://fosdem.org/2019/schedule/event/device_to_device_netw...
https://darker.ink/blog/mobile-design-with-device-to-device-...