If they can get the manufacturing process for them worked out. I really do wish memristors turned out to be a success, HP hyped them so massively in the 2011-2015 era. What happened was that the material they had was susceptible to rusting, so what seemed like a good initial yield would become unusable some months down the road.
I google for memristors sometimes, and all the activity regarding them is still confined to the lab unfortunately.
> The PRs are really good but there's no way to talk to the actual developer working behind those PRs.
I'd really like an avenue into the US market as a remote worker, but this job market has been treating me unfairly. It's a pity, as I am both a highly skilled programmer and have nearly a decade of experience. I'd consider this service if it could serve to showcase my skills, but if I am not going to get any personal credit for doing the work, there doesn't seem to be much point to it.
If you are a highly skilled programmer, why not just start contributing to open source repos yourself? You don't need to go through GitStart to get there.
Many companies, including my own and the commercial open source companies mentioned in this post, consider open source contributions a major factor in hiring remote talent.
If you're comfortable sharing, could you elaborate on how a transition from unpaid contributions to some kind of paid work arrangement typically happens?
In your experience, is there a more or less deterministic path to the stage where the question of getting paid comes up? Who usually initiates it?
The result of this discussion may very well be that the company isn't interested in this kind of relationship for any number of good reasons. But what's important, I think, is for a contributor to be able to have the right expectations coming in. As in:
- Should I join on a purely for-fun basis and see where it goes from there, keeping in mind that things will most likely stay that way going forward?
- Or if everyone is happy with the quality of code, communication, etc across a number of pull requests, then it's definitely OK and expected to bring up the question of payment/employment.
Your public contributions are a showcase of your alleged skills, in most cases.
There is no deterministic way to transition from unpaid to paid. It's just one signal among many that a recruiter or company looking for services would look into.
I don’t think you’re being treated any more unfairly than anyone else trying to break into the US market. It’s simply a case that a decade of experience and being a skilled programmer are not enough to stand out from the crowd.
Depending on your location there may be legal restrictions preventing US companies from hiring you - even through a B2B contract.
Most companies hiring remotely don't care whether your 1-person company is US-based or not.
The ones that do care to hire only within vetted countries for $reasons usually will not make exceptions. A notable example is GitHub, which has a list of countries they hire from (even though they're owned by MSFT and could hire on the Moon if they wanted).
Having a company is mostly for tax purposes. It makes everything easier. I think the hiring company doesn't care if the contract is done with a business or an individual. Both are usually limited liability and offer no advantage in case of contract breaches.
If you are highly skilled already, it should be very viable to just work with OSS projects directly. For example in Python, the Django & DRF projects are always looking for contributors (though Django can take a long time to land substantial features).
In my experience as a hiring manager it was quite rare to see lots of OSS work in candidates’ GitHub accounts, but I’d absolutely prioritize those that had good work in OSS. (Also worth emphasizing that technical design, collaboration, and documentation are important and underrated, and can also be showcased in an OSS project. If you can demonstrate good communications in an async OSS environment, that would probably reflect well on your ability to contribute as a remote employee.)
All that said I’m not sure that OSS is the best resume builder. For big companies you need to drill LeetCode and system design. Perhaps for startups it is not the worst use of your time.
> For big companies you need to drill LeetCode and system design. Perhaps for startups it is not the worst use of your time.
Exactly. Any hiring manager with a brain and not bound by clueless corporate processes would use OSS contributions as a decent signal of proficiency and social skills.
That means nothing in a big corp though. The hiring panel will never accept a candidate that fails the Leetc0d3 test because that means other panels could do the same and then it all falls apart for them. Status quo and all.
To be fair, it’s a hard optimization problem. If you are trying to remove bias from your hiring process then it is difficult to objectively score things like OSS contributions. (I do agree it’s something most bigcorps could do better.)
As a small company you don’t need to try to remove bias with objective metrics (indeed, “culture fit” and “thinks like me” can be good heuristics for building a small tight-knit and high-performing team) but when you hit the company size where you must introduce multiple layers of management, then fully trusting each line manager’s subjective judgements can lead to very disparate quality and other political/organizational issues.
We currently attribute commits back to every single dev involved in a PR (including reviewers) as co-authors. We also actively work with our customers to allow devs to mention their contributions in their CV publicly. And you can always reach out to them directly if they have an open position (especially mentioning your experience working with them through GitStart)
What would be an ideal way to attribute the hard work back to the devs in our case?
'Autism', when applied as a slur to everything and everyone, has grown to mean 'talent.' It is very rare to see it used to refer to the actual medical condition these days.
Regexps aren't even Turing complete as far as I know; if whatever they have in their paper works for arbitrary programs, it would be shocking. I'll give it a read.
*Edit*: The algorithm in the paper is a DP-like algorithm for building regexes. It uses a matrix with all the candidate strings to be checked on one axis and all the candidate regex programs on the other; the entries of the matrix are booleans saying whether the string matches the program. The algorithm builds the matrix iteratively.
I haven't worked out how the regex evaluation itself is done, probably directly, but obviously this algorithm is only for checking whether a particular regex program matches an output rather than for general-purpose synthesis.
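If I'm reading it right, the core data structure is just that boolean match matrix. A toy sketch of the idea in Python (the candidate regexes and strings here are placeholders I made up, not the ones from the paper):

```python
import re

# Rows: candidate regex programs. Columns: strings to check.
# Each cell records whether the candidate fully matches the string.
candidates = [r"\d+", r"[a-z]+", r"\d+[a-z]+"]
strings = ["123", "abc", "12ab"]

# matrix[i][j] is True iff candidates[i] matches strings[j]
matrix = [[re.fullmatch(p, s) is not None for s in strings] for p in candidates]

for pattern, row in zip(candidates, matrix):
    print(pattern, row)
```

The interesting part of the paper would be how this table gets built up and reused across candidates rather than recomputed from scratch, which the sketch above doesn't capture.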
We'll have to wait for AI chips to really scale genetic programming, GPUs won't cut it.
There is no magic in GP. It is just another form of searching the space of programs, i.e. program synthesis. The search mechanism is a local, stochastic search, known to be especially inefficient (for example you may hit the same program multiple times). What's good about GP is how simple it is, so it's a good starting point.
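To make that concrete, here's a toy version of such a loop in Python (purely illustrative, my own toy and not from any real GP system): it evolves a small arithmetic expression tree towards x*x + x by mutation and selection. Note that nothing stops the mutation step from regenerating programs the search has already visited, which is exactly the kind of inefficiency I mean.

```python
import random

OPS = ["+", "*"]
TERMS = ["x", "1"]

def random_tree(depth=3):
    # A program is either a terminal ("x" or "1") or (op, left, right).
    if depth == 0 or random.random() < 0.3:
        return random.choice(TERMS)
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if tree == "1":
        return 1
    op, left, right = tree
    a, b = evaluate(left, x), evaluate(right, x)
    return a + b if op == "+" else a * b

def fitness(tree):
    # Lower is better: squared error against the target x*x + x.
    return sum((evaluate(tree, x) - (x * x + x)) ** 2 for x in range(-5, 6))

def mutate(tree):
    # Replace a random subtree with a fresh random one.
    if isinstance(tree, str) or random.random() < 0.3:
        return random_tree(2)
    op, left, right = tree
    if random.random() < 0.5:
        return (op, mutate(left), right)
    return (op, left, mutate(right))

population = [random_tree() for _ in range(200)]
for generation in range(50):
    population.sort(key=fitness)
    survivors = population[:50]
    # Stochastic local search: mutate survivors to refill the population.
    population = survivors + [mutate(random.choice(survivors)) for _ in range(150)]

best = min(population, key=fitness)
print("best program:", best, "error:", fitness(best))
```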
My story is that even though I've always been talented at programming, to the point of winning the national programming championship of Croatia back in 2002, I didn't start doing it seriously until 2015. I had a dream of pursuing the Singularity, and I worked hard every day, weekends included, to get closer to it. I wanted to get better at ML, so I worked on ML libraries in F#, which eventually grew into work on my own programming language, Spiral. It took years of full-time work, but I implemented my own PL, and a GPU-based ML library from scratch. You can check out the language on the VS Code marketplace. This experience made me really good at implementing ML papers and algorithms. It also made me a master of functional programming, and a strong generalist. Most of what I know isn't on the resume; I am even familiar with dependently typed programming in languages like Agda, and theorem provers like Coq.
Unfortunately, I've stumbled. What I really wanted to do using these skills is create something like a poker bot, and crush the online gambling dens, but no matter how much effort I put into ML, I could never overcome the state of the art in a significant way. This is a huge problem since I am interested in RL, but RL only works on toy problems. I hate that I know everything that is wrong with current ML techniques, but have no idea how to fix it.
It was obvious to me from the start that the way ML is currently done is broken, but I thought the research community would be able to overcome it and give me some tools I could use to make a real-world impact, instead of the world we have in the current timeline. I also thought there would be a ton of AI chip startups coming out with novel hardware, in which Spiral could have found its niche, but so far they've been a huge dud, and NVidia reigns supreme.
I am really looking for work so I can finance my own ML research; thus far I haven't needed it, but I've come to the conclusion that if I want to make a real breakthrough, I should be implementing genetic programming systems on AI hardware (not GPUs), and that approach would be highly intensive in computational resources which I cannot afford. In my work on Spiral, I've pretty much reinvented the field of partial evaluation, and if a job involved making advanced software like interpreters for hardware with poor software support, I'd probably be peerless at that, even compared to anybody else in the world.
But as for web dev jobs which I am seeking currently, it feels like I am mid range in terms of skill. In the past few months I've gotten familiar with React, Fable, and now am pivoting to Blazor, and by the end of the year, I should be good at that.
Mojo is a language that thinks it will impress the Python programmers with its ability to implement matrix multiplies directly in it. I don't think it will be that easy, but it might replace Cython.
It is a pity Concurrent ML didn't take off. F# has a great library called Hopac that implements it, but it is 50 times less popular than its closest competitor Rx.
Also seconding that other post. Actor models and async concurrency are only useful if you need to send messages between machines, but otherwise you want to use synchronous concurrency as it is easier to deal with.
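To illustrate what I mean: in Concurrent ML (and Hopac) the basic primitive is a rendezvous channel, where a send blocks until someone receives. Here's a rough toy version of that in Python using threads (my own sketch for illustration, not Hopac's actual API):

```python
import threading

class SyncChannel:
    """Minimal CSP-style rendezvous channel: send() blocks until a
    receiver takes the value, recv() blocks until a sender shows up."""

    def __init__(self):
        self._send_lock = threading.Lock()    # serialize senders
        self._ready = threading.Semaphore(0)  # a value is waiting
        self._taken = threading.Semaphore(0)  # the value was consumed
        self._value = None

    def send(self, value):
        with self._send_lock:
            self._value = value
            self._ready.release()   # wake one receiver
            self._taken.acquire()   # block until it has taken the value

    def recv(self):
        self._ready.acquire()       # wait for a sender
        value = self._value
        self._taken.release()       # let the sender continue
        return value

if __name__ == "__main__":
    ch = SyncChannel()
    t = threading.Thread(target=lambda: print("received:", ch.recv()))
    t.start()
    ch.send(42)   # rendezvous: returns only once the other thread has received
    t.join()
```

Because both sides block at the exchange, there are no unbounded mailboxes to reason about, which is a big part of why the synchronous style is easier to deal with.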
Another thing that could happen is a fundamental algorithmic advance to replace backprop. The brain doesn't use it, and backprop doesn't have answers for how to do long term credit assignment and continuous learning. It works poorly with RL as well.
Better algorithms would require writing new ML libraries which would weaken the stranglehold GPUs have on ML. New niches opening would inevitably be bad for Nvidia, but good for upstarts.