
I don’t think they were anywhere near the same difficulty as some of the upper divs. Classes like CS 170 (Efficient Algorithms), CS 189 (ML), and CS 182 (Deep Neural Networks) were all significantly more difficult than any 61-series class.


Of course a computer scientist would call out my imprecise language. :)

What I meant was that its difficulty was in line with expectations for a lower division course, given the difficulty of the upper div courses in the same major.


For full context, this is someone with a wife and two kids[1] who had a couple of semesters left and just needed a temporary solution, not a 20-year-old kid. Different than the title might suggest.

[1] https://www.berkeleyside.org/2023/05/15/uc-berkeley-la-plane...


Doesn't seem so. The article talks about a person getting a sociology degree, while the Reddit post talks about someone working on an MEng degree. Given the poster's extensive posts on Flyertalk and his comments saying that there are two of them[1], I think there actually might be two different folks doing this.

edit: Found an older post where the poster says they're doing an MEng Civil Engineering program[2]

[1] https://old.reddit.com/user/greateranglia [2] https://old.reddit.com/r/berkeley/comments/urcxua/is_it_poss...


Correct, but you don't even need to search the post history. The difference in the major is already noticeable in the original post. These are two different people. Just imagine, these two could have been commuting together and syncing up.


IMO one of the ways most schools are going to end up detecting plagiarism is a custom word processor (or something similar) that tracks all of the edits made to a document. Basically, have students type the essay in a program that records every keystroke, so it can tell whether someone is pasting in whole essays or actually typing and revising the essay until it is submitted. Essays that are simply turned in as a finished file are probably going to be a thing of the past.
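To make the idea concrete, here's a minimal sketch of what the recording side could look like (the class and event names are all made up; it's just to show the shape of the data such an editor would keep):

  # Hypothetical sketch: an editor buffer that timestamps every edit event,
  # so a grader can later replay how the document was actually composed.
  import json
  import time

  class RecordingBuffer:
      def __init__(self):
          self.events = []

      def keystroke(self, char):
          self.events.append({"t": time.time(), "kind": "key", "char": char})

      def paste(self, text):
          # Whole-essay pastes are exactly what this scheme is meant to surface.
          self.events.append({"t": time.time(), "kind": "paste", "chars": len(text)})

      def save(self, path):
          with open(path, "w") as f:
              json.dump(self.events, f)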


Maybe, but I doubt it. Spyware-based systems are doomed to failure as other commenters note. There's nothing you can do to prove the text came from a human. Faking inputs is extremely easy. People will sell a $20 USB dongle that does appropriate keyboard/mouse things. Worst case, people can simply type in the AI generated essay by hand and/or crib from it directly.

Schools are going to have to look at why take home work is prescribed, and if it should be part of a grading system at all. My hunch is that it probably shouldn't be, and even though it's a big change it's probably something they can navigate.

I predict more in-person learning interactions.


It's a cat-and-mouse game for sure. At the first level, any dongle that simply types the AI response through a fake HID device will be easy to detect. No real essay writer just types an entire document in one go, with no edits. They move paragraphs around, expand some, delete others, etc.
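Even a crude timing check would catch the naive version: human typing has messy inter-keystroke timing, while a dumb playback device is metronomic. A sketch (thresholds invented for illustration, not tuned against anything real):

  # Flag sessions whose keystroke timing is suspiciously uniform.
  from statistics import pstdev

  def looks_replayed(timestamps, min_jitter=0.03):
      # timestamps: seconds at which each keystroke landed
      gaps = [b - a for a, b in zip(timestamps, timestamps[1:])]
      # Humans show tens of milliseconds of jitter between keys;
      # a naive HID replay device is far more regular.
      return len(gaps) > 20 and pstdev(gaps) < min_jitter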

So this dongle will have to convincingly start with a worse version that's too short (or too long!). It'll have to pipe the GPT output through another process to mangle it, then "un" mangle it like a human would as they revise and update.

If trained on the user's own previous writings, it can convincingly align the AI's response with the voice and tone of the cheater.

Then the spyware will have to do a cryptographic verification of the keyboard ("Students are required to purchase a TI-498 keyboard. $150 at the bookstore") to prevent the dongles. There will be a black market in mod chips for the TI-498 that allow external input into the traces on the keyboard backplane. TI will release a better model that is full of epoxy and a 5G connection that reports tampering...

... Yeah, I also predict more in-person learning :)


Sure, but all of the above regarding making input look human is trivially easy -- because, again, AI.

More stringent hardware-based input systems are likely non-starters due to ADA requirements. For example, disabled students have their own input systems, and a college will have to allow them reasonable accommodations. Then there are the technical challenges. Some authoritarian-minded schools might try this route, but I hope saner heads will prevail and they'll be able to re-evaluate why take-home work exists in the first place, and whether it's actually a problem for students to use AI to augment their education. Perhaps it isn't!


> whether it's actually a problem for students to use AI to augment their education.

To augment? No, but the problem is we can't tell the difference between a student who is augmenting their education with AI, and a student who is replacing their education with AI. Hence things like in-person proctored exams, where we can say and enforce rules like "you're allowed to use ChatGPT for research, but not to write your answers for you".


I'd build a structure/robot that I'd attach to my keyboard, and it would press the keys.

I started to write how it would be possible to control for that, but it got too Orwellian/horrible and I stopped.


> I predict more in-person learning interactions.

Which would be a huge benefit for the overall quality of education. A lot of students can write a passable essay in a word processor with spell check and tutors... but those same students sometimes have absolutely no idea what they've written. Group assignments have taught me this many times over.


My wife started teaching a class at the local university and got a bunch of positives from the anti-plagiarism software the university uses. She ran a bunch of the papers by me, and man, analyzing the results is an art in itself. People unconsciously remember and write down phrases and short sentences they've read all the time, so a little highlighting here and there just has to be accepted. Then there are the papers where almost the entire thing is highlighted. It's the ones in between that are tricky as hell: a lot could go either way, and it's a judgment call for the teacher whether to send one to the administration for review. I expect AI will just make it more difficult, or handwriting is going to be the new hot subject taught to new levels in elementary school...


To me it seems like academic papers force people to back up every statement with a quote and agree with assigned readings. This style of writing leads to unoriginal results.


Isn't that how non-fiction is supposed to work? It's about finding interesting evidence that adds up to something, not making stuff up.

Though, ideally by finding interesting evidence in books that aren't in the assigned reading.


It is, but I think that it would lead to a lot of false positives for automated plagiarism detection.


Yet another arms race. Use this keylogging training dataset to generate a simulated realtime response on the USB port.


LLMs are useful for a variety of things. What you're describing would only be useful for students cheating on assignments. I doubt that it will attract the many millions of dollars spent on training GPT-4.

But more importantly, LLMs are always available over the Internet. If students need to use a physical device to cheat, that's already a big step forward, since it increases the chance of detection — a key factor in deterring misbehavior.


When I was in college we had a number of group projects, and I thought the whole time that it would make a ton of sense for the professor to set up a class repo (I'm an old person, so it would have been a CVS repo at the time) and be able to see exactly what each person had contributed to the project. Even for single-person projects it would have made it so much easier to detect cheaters. I also think it might light a fire under some of the less shameless slackers.

I hope schools do this now. Not only for detecting cheaters, but to get the kids used to working in a more real-world environment.
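(git shortlog -sn already prints per-author commit counts; here's the same idea in a few lines of Python, assuming it's run inside the class repo, in case you want to build on it:)

  # Sketch: commits per author in the current repo (assumes git is on PATH).
  import subprocess
  from collections import Counter

  authors = subprocess.run(
      ["git", "log", "--pretty=%an"],
      capture_output=True, text=True, check=True,
  ).stdout.splitlines()

  for author, commits in Counter(authors).most_common():
      print(f"{commits:4d}  {author}")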


I think you overestimate the competence of the majority of professors. They can’t require version control if they don’t understand what it is or how to use it.


Back in the 90s I could kind of see this angle, but today it's so easy to set up a GitLab that there is no excuse.


The problem isn't the accessibility of Git. I agree that it's easy enough to set up a GitHub account today.

I've been somewhat of a Git evangelist. I've tried and failed countless times to convince people of the utility of version control. Perhaps I'm just a poor teacher, but in my experience, the features that make version control useful are too esoteric for most people to grasp.

This may come off as arrogant and jaded, but I would speculate that at least 50% of the population is incapable of learning Git without extensive coaching. That's not to say it couldn’t be useful for most people; it's just that they can’t envision Git’s utility for themselves.

Utilizing version control to combat AI-generated papers would require students and teachers to have a deep enough understanding of Git to break their work up into small commits and branches. I don’t see that happening outside the CS departments of Big Ten schools.


> I've tried and failed countless times to convince people of the utility of version control.

Are you conflating version control (the topic) with git (a specific implementation)?

In my experience, it's really easy to clue people in to the value of version control.

Git specifically, though, is genuinely difficult to learn and understand.


> I've tried and failed countless times to convince people of the utility of version control.

Don't pitch it as version control. Pitch it as "homework submission process" that has the side benefit of being a backup if their laptop crashes. Students are used to horrible homework submission processes (looking at you Blackboard) and quickly adapt to seeing version control systems as a pretty nice alternative.

And, for about 25% of your class, the lightbulb will go on and they'll start using version control even in their other courses.
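The instructor side can stay simple too. A sketch of a deadline check, assuming one clone per student under submissions/ (the paths and date are made up):

  # Report who has pushed a commit before the deadline.
  import pathlib
  import subprocess
  from datetime import datetime, timezone

  DEADLINE = datetime(2024, 5, 1, tzinfo=timezone.utc)  # hypothetical

  for repo in pathlib.Path("submissions").iterdir():
      if not repo.is_dir():
          continue
      out = subprocess.run(
          ["git", "-C", str(repo), "log", "-1", "--format=%cI"],
          capture_output=True, text=True,
      ).stdout.strip()
      ok = bool(out) and datetime.fromisoformat(out) <= DEADLINE
      print(f"{repo.name}: {'ok' if ok else 'late or missing'}")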

> at least 50% of the population is incapable of learning Git without extensive coaching

Mercurial can be taught to mere mortals just fine. Same with Subversion. Same with CVS. I've done that for all three. People tell me that lots of artists use Perforce quite readily.

Git is the only dumbass version control system that revels in being obtuse.


What % of the grade should be based on LOC and what % based on story points?


Typically I'd expect the group project to be graded on its own, with all students getting the same grade from the project. However, when the history shows that some of the participants committed zero lines of code, or suddenly dropped in enormous blocks, they should be asked about it. At the very least they should be encouraged to use branches and make frequent commits, like in the real world.


As a parent whose student has worked with multiple essay-entry editors/forms: they're almost all terrible, with most students having to fall back to writing the essay outside the system or risk losing their work repeatedly. And this was with a simple editor - not more complex connections to even more sophisticated systems.

The budget available for educational technology is not sufficient to maintain the operation of the software, let alone to pay technical staff qualified to assess and select reliable systems.


Yeah, then you get a bunch of intelligent, non-cheating students who are just annoyed with the text editor and use the cheating tools to insert an essay they wrote in a proper word processor - further poisoning the dataset.


“cheating tools” = cut and paste


But then you can have ChatGPT write your essay on a phone/tablet and you just slowly re-write it.

I think schools will need to change the way they go about testing student understanding of topics. Personally I'm excited for what this might look like and it is a great opportunity for hackers to really innovate the educational field.


Or they could move to a more British style, with in-person essays, proctored by human observers (not that there aren't old-fashioned ways to cheat on those too, but they're well-known).


Yes, when I had to take university entrance exams in Brazil, all parts of the exam were in person, including writing the essay, with a mandatory topic disclosed only when the exam started. Preventing AI cheating might become more difficult for educational projects that are long-form, like writing a dissertation or big coding challenges. Although, for coding, one thing that I have consistently seen work is to just ask students to do a walk-through of the code. People who just copy someone else's work are generally lazy and don't really study what they copied, so it becomes easy to see who put in the work.


Writing a dissertation is a completely different kind of thing from graded homework assignments. A dissertation isn't graded, or even if it is, no one cares about the grade.

The work of writing a good dissertation is done before the writing; the writing is just wrapping up. If you can write a good dissertation with AI, so much the better.

Meanwhile, the work behind a bad dissertation is never done at all, and the dissertation is a more-or-less-undetectably plagiarized document read by at most two people (and perhaps no one, including the writer). This process is a waste of time, and accelerating it with AI will change nothing other than saving a few hours for people who wanted a degree without doing any research (and definitely would have gotten it without AI).


This seems so backwards to me.

If it's so easy to just copy and paste an essay from an AI generator that is of such high quality that it cannot be detected, then why are we still making students learn such an obviously obsolete skill? Why penalize students for using technology?

Surely, there are still things that are difficult to do even with the help of AI. Teach your students to use these tools, and then raise the bar. For example, ask your art students to make complex compositions or animations that can't be handled by Midjourney without significant effort.


The reason it's done is to teach students how to think. By writing down their thoughts they are forced to think about a topic. It's the same reason small children are still taught arithmetic although we have calculators.

That's the theory, anyway. In practice students learn that "really thinking for themselves" in essays is usually not rewarded while paraphrasing some reading assignments with some sprinkled quotations works much better and is less work than thinking about topics they don't care about.

Maybe the AI stuff will lead to practice better approximating the theoretical goal.


> If it's so easy to just copy and paste an essay from an AI generator that is of such high quality that it cannot be detected, then why are we still making students learn such an obviously obsolete skill? Why penalize students for using technology?

That's like asking, why do we have students do PE (physical education) when professional athletes exist? Clearly, having students play basketball is obsolete, because the NBA exists. Essay-writing is PE for thinking.


The difference is that GPT can convince the teacher that the student is a competent essay-writer, but can’t convince the PE teacher that the student is an NBA player.


>why are we still making students learn such an obviously obsolete skill?

Just because a machine can generate an essay of questionable quality with a fair chance of containing hallucinations making it unusable for many fields of human endeavor doesn't mean that writing is no longer a useful pedagogical tool. Learning to write is a part of learning to think.


I had a similar idea for determining that a piece of writing is genuine: make students use a word processor that keeps a full, timestamped audit trail of all changes. The software would then use a trained AI to look for patterns that deviate from normal composition activity. This could catch a lot of the current fraud. Until someone creates AI bots to get around it...


I don't know if this is the way things should go, but it seems like a decent prediction of how they probably will. In fact, many law school exams are already administered using "blue book" software that functions as a rudimentary word processor and locks down the computer's other functions for the duration of the exam. Perhaps other disciplines use this software too.

In the exam context, this software probably already solves the AI problem. Locking down the computer would not, of course, be a solution for other kinds of assignments, but I'll bet it won't be long until schools are using software like you described that just does a lot of snooping instead of locking down the computer.

Unfortunately, the existing software is very clunky and not very reliable. And it doesn't seem like anybody has a strong incentive to improve it. (The schools license the software, and the schools understandably don't care all that much whether the software is nice to use.)


Open ChatGPT on your phone, ask it to write your essay, then retype its response manually.


You might even learn and retain the material better that way (assuming the gist of it was correct, that is).


The next cat-and-mouse iteration will be ChatGPT integrated with WebDriver to slowly type the essay, with a prompt that says "make occasional mistakes", etc.


Wouldn't it still be easier to type out the entire AI-generated assignment than to come up with one yourself and then type that out?


Obviously, typing from start to finish with few edits would also be a "failed" result in such a program. Someone actually writing an essay should be creating structure, taking notes, rearranging paragraphs, etc.

Then again, you have a good point: often you blat out an essay and then edit it, and the same goes for typing in an AI-generated template.


> Obviously typing from start to finish with few edits is also a "failed" result in such a program.

I hear ya, but I wonder if there are people who have the proper mental organization to write a well-organized coherent essay in one shot.

I'm not one of those people, but I assume they exist. And I assume they would be unfairly penalized with such a system.

That being said, I think we are going to end up in a world where we are all communicating with each other via ChatGPT (or whatever succeeds it).

ChatGPT will be our "Lingua Franca" as well as our "Mens communis" (I got that by asking ChatGPT). Strange times ...

* edits for clarity: ha! </irony>


Proctoring software of this sort is already in use by large test-taking agencies such as PSI and Pearson VUE. Microsoft also has its Take a Test app.


Instead of spyware, just issue mechanical typewriters.


Or we will move on to teaching higher conceptual skills which are actually relevant to a post-AI society.


You would need to ensure that Chrome extensions and keyboards with macros were disabled somehow


This is a fantastic idea


This is a horrible idea.


Maybe you connect to the school's chat AI and it probes you for knowledge. The same AI watches you write the essay-type bits and helps you out if you get something wrong. The teacher will get a report on how well you did and how present you were.


  Location: San Francisco 
  Remote: Not full time (at least hybrid, no full time remote)
  Willing to relocate: No
  Technologies:  Java, Python, Bash, SQL, AWS, Postgres
  Website: https://aripickar.github.io
  Résumé/CV: https://aripickar.github.io/Ari_Resume_Winter_2022.pdf
  Email: ari.pickar@gmail.com

Hi HN! I am a fullstack/backend engineer with 3 years of experience looking for opportunities to work in person in San Francisco (preferred) or the greater Bay Area. I have previously worked in both the Python (Django) and Java (Spring) ecosystems at AWS and Noom. Currently looking for full-time opportunities. I'm open to anything from first engineer to larger companies.



Yeah, probably the best way to prevent this from happening would be to have the FDIC guarantee up to $40 million for businesses, or something on that level. Going from having >95% of your cash uninsured to even ~30% uninsured (for a huge company) would be a massive change.
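Toy numbers to make that concrete (a $60m treasury is just an example):

  # How a higher insurance cap changes a company's uninsured exposure.
  cash = 60_000_000
  for cap in (250_000, 40_000_000):
      uninsured = max(cash - cap, 0) / cash
      print(f"cap ${cap:>12,}: {uninsured:.1%} of cash uninsured")

which prints roughly 99.6% uninsured under today's $250k cap versus 33.3% under a $40m cap.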


That sentence seems enough like some bastardized version of Shakespeare[0] and Lisp that I bet you could get it to compile in some dialects.

[0]https://en.wikipedia.org/wiki/Shakespeare_Programming_Langua...


It's less that it's illegal/impossible and more that it's not in the interests of the company in the long run. Say you do that, then what? If you screw over the seed investors, they are probably going to tell the Series A (and B, C, etc.) investors that you screwed them over, and it's going to be somewhere between much harder and impossible to raise the next rounds. Plus, who would want to invest in a company whose founder already screwed over the last investors? The only way it could work is if you are able to grow the company indefinitely without raising more money (see Toptal).

Basically, you are exchanging all the goodwill and ability to raise in the future for a small percentage of equity. Not a great trade, if you ask me.


Not quite. The way it works is that the $375k is invested now, but at terms that are determined in the next equity round. If the next round values the company at $10 million, then the $375k would be 3.75% of the company.
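In other words (toy math, ignoring the separate $125k/7% part and any dilution):

  # Uncapped SAFE: ownership is set by the next priced round's valuation.
  investment = 375_000
  next_round_valuation = 10_000_000
  print(f"{investment / next_round_valuation:.2%}")  # 3.75%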


Woah, okay, didn't totally understand that until you put some numbers on it.

That's... almost unbelievably founder-favored, yeah? Neat.


Not unbelievably, just happens to be a win-win. The founder likely wants the capital now and YC wants more ownership.


Yes.

The company gets the money now.

The more they grow, the less YC gets for the 375k. But the more they grow, the higher the value of 7% is going to be. And also, the more they grow, the more they are likely to grow in the future. So the 375k share is also more likely to keep growing.

So in a nutshell: the 375k is incentive for the company to grow, which is also in the interests of YC, since they have 7% (+ x%) and in general getting startups to grow is the whole point of YC.


win (founder) - win (YC) - lose (VC), to be correct.


This is also not true. Their uncapped MFN note assumes the terms of the lowest-capped safe (or other investment) issued after their investment. So if the founder accepts a $3.75m-capped safe soon after YC’s investment, then later raises an equity round at a $10m valuation, YC gets 10% more, not 3.75% more, at that time. There may be dilution from the equity round, but that’s a different matter.


You are incorrect.

As per https://www.ycombinator.com/deal “The $125k safe and the MFN safe will each convert into preferred shares when your company raises money by selling preferred shares in a priced equity round, which we refer to below as the “Safe Conversion Financing” (this will typically be your “Series A” or “Series Seed” financing, whichever happens first).”

Edit: Sorry, I am absolutely wrong here. I completely misunderstood what nirmel was saying.


They are correct. The MFN safe converts at the best terms. So if there is a SAFE with a post-money $3.75m cap, then even if the next round is priced at $100m, YC gets 10% at that 100m valuation. It converts at an equivalent ownership compared to the cap. That's why caps exist.
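Toy math for the two cases (ignoring pro-rata and dilution):

  mfn_investment = 375_000
  capped_safe_cap = 3_750_000           # a later SAFE with a $3.75m post-money cap
  priced_round_valuation = 100_000_000

  # MFN adopts the best terms around, so with the capped SAFE in play
  # YC's note converts at the cap, not at the priced round:
  print(f"{mfn_investment / capped_safe_cap:.1%}")         # 10.0%
  # Without any capped SAFE, it would convert at the round itself:
  print(f"{mfn_investment / priced_round_valuation:.3%}")  # 0.375%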


The MFN applies to other SAFEs too. YC will get the "best" price during the priced conversion. If you took other money at a lower SAFE, that would peg the "best" price in the conversion -- and thus, that's what YC's $375k would get.


Thank you for the correction. They mention this at the footnote of the article: “1 The $375,000 is on an uncapped safe with ‘Most Favored Nation’ (MFN) terms. MFN means that this safe will take on the terms of the lowest cap safe (or other most favorable terms) that is issued between the start of the batch and the next equity round. Simply put, we’re giving the company money now but at terms you’ll negotiate with future investors.”


What's not true? It seems correct to me.

You're just providing an alternate scenario that isn't as favorable. And since the initial $125k implicitly has a $2m valuation attached to it, if you raise again at $3.75m, then that's probably not ideal.

So a sensible approach would be to view this as providing an implicit minimum value to target for your next round, i.e., >$5m (7.5%).


Is there a timeline for when you will be rolling out the availability to ride in one of your vehicles to the general public?

