Ask HN: What automation tools have you used to replace mundane activities?
408 points by beatthatflight on Feb 17, 2020 | 229 comments
For my travel site side hustle (beatthatflight.com.au/), I find and publish cheap flights for Australians. Sounds simple, but even if, say, someone gave me the flights each day, there's posting them, sometimes to deals sites, as well as editing them and posting them on my site, out to social media, and then to specific mailing lists on Mailchimp, depending on the source city of the deal or the type of post.

A LOT of that becomes tedious. Mailchimp can auto-post to FB/Twitter/Instagram etc., but even converting the blog post into Mailchimp emails gets tiring, as you need to choose a feature photo, the specific mailing list, etc.

I'm a Selenium guy, so I've built some scripts to minimise some of it, but each time I feel the rush of finding a deal, the tedium ahead of publishing it still pains me.

Ideally, my goal would be to submit [to][from][when] to a script and have the whole process be automatic. I can see that Selenium could do it, but my goodness, it'd be slow and potentially flaky doing it all via the web UI of those sites.

I'd love suggestions, examples of automation you've done, and the tools you used.




I have a Slack workspace with several bots. Some listen for specific keywords and/or phrases around the web and post to the relevant channels. Other bots remind me if I'm slacking off (maybe stayed a little too long on YouTube), remind me of appointments, remind me to drink water, alert me if a stock price falls, and so on. I also have filtered RSS feeds for business and finance news, etc. All of these bots have custom identities that make them much more engaging; Axelrod, for example, is my bot for finance-related news.

Others tell me if my server is getting swamped, or forward important emails that often require immediate attention: a cert needs renewing, a deploy failed, there's a bug in prod, or something similar. Chat widgets from sites in production also go directly to Slack so I can reply quickly if needed. (Gmail, Sentry, Freshchat, Ploi, etc.: integrate them all and filter out what's critical.)

Everything generally operates on a 15-minute interval or so, so it's not noisy, and I've pretty much tuned it to only receive the specific stuff I want to see. Having a central, assistant-like hub to go to is very useful and saves a lot of time.
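
For anyone wanting to build something similar: posting as a custom-identity bot is a single HTTP call to a Slack incoming webhook. A stdlib-only Python sketch (the webhook URL is a placeholder, and the bot name mirrors the Axelrod example above):

```python
import json
import urllib.request

def build_payload(text, username="Axelrod", icon_emoji=":chart_with_upwards_trend:"):
    """Build the JSON body for a Slack incoming webhook.

    The custom username/icon is what gives each bot its own identity.
    """
    return {"text": text, "username": username, "icon_emoji": icon_emoji}

def post_to_slack(webhook_url, payload):
    """POST the payload to a Slack incoming-webhook URL."""
    req = urllib.request.Request(
        webhook_url,
        data=json.dumps(payload).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)

if __name__ == "__main__":
    # Placeholder URL: create a real one under "Incoming Webhooks" in your workspace.
    post_to_slack(
        "https://hooks.slack.com/services/T000/B000/XXXX",
        build_payload("AAPL dropped below your alert threshold"),
    )
```

From there, each bot is just a scheduled script (cron every 15 minutes matches the interval described above) that decides whether anything is worth posting.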


+1 for having a personal Slack workspace. I think this is an under-appreciated way to host side projects and learn how to develop with Slack. There are so many services that already include Slack integration that you can wire everything up into your personal notification space.


Oh cool, I didn't know others were doing this. I've recently started going down this path with some zero-effort IFTTT triggers, but I'm not happy with how infrequently they seem to trigger. My next step is to build my own set of listeners/scrapers.


I posted the same thing on a similar thread yesterday, but if you're looking to build some scrapers and listeners and make it easy for yourself, I'd highly recommend Huginn (https://github.com/huginn/huginn). I set it up a few months back and I've been using it to scrape all sorts of info and alert me about things, e.g.:

* A change in GBP -> NOK exchange rate, to indicate if it's a good time to buy ready for my next trip to Norway.

* Alerts on flight prices to help me purchase flights when they're at their cheapest.

* Daily digest email with content I'm interested in from HN and some web comics.

* A summary of new content coming into my RSS reader.

It works really well for this sort of stuff and I found it fairly easy to get up and running.
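
For a flavour of what a Huginn agent looks like: agents are configured as small JSON blobs in the UI. A rough sketch of a Website Agent that watches a deals page for changes (the URL, CSS selectors, and extracted field names are placeholders; check the exact option names against the Huginn wiki):

```json
{
  "expected_update_period_in_days": "2",
  "url": "https://example.com/deals",
  "type": "html",
  "mode": "on_change",
  "extract": {
    "title": { "css": ".deal-title", "value": "string(.)" },
    "price": { "css": ".deal-price", "value": "string(.)" }
  }
}
```

Events emitted by this agent can then be chained into an Email Digest Agent or a Slack notification, which covers the digest and alert use cases listed above.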


Thanks for the link! Huginn looks cool and useful (a good combination!). Impressive work by 179 contributors.

I have had numerous simpler projects automating my digital life, but I made the (probable) mistake of always starting from scratch and writing everything on my own.


I was in a similar boat but refused to build my own system. Huginn was a perfect solution for me and I really enjoy using it. You can write your own agents if you need to but I haven't had to so far.

Since I set it up a few months ago I've been using it a bunch; its event-driven system is great for when you want to trigger off multiple streams halfway through a flow. You can also go back through past events if you want, which is really powerful.


I found this last night and I'm already looking at replacing/extending my existing setup with it. Thanks for the share!


Nice! It didn't take me too long to set up with docker-compose on a cheap Digital Ocean instance. I'd suggest reading through all the options and toggles provided by environment variables, though. There were quite a few I didn't find for a few weeks that proved to be important or useful, and they're not explained all that well in the docs. Aside from that, I've not had any teething issues, really.


I found https://www.scrapingbee.com/ the other day. Check out their blog as well; it's filled with useful stuff.


How do you know if you're slacking too long on YouTube? Do you use some Chrome extension?



Did you build these bots or get them from somewhere?


Some are custom-built from APIs, but I daresay you can find just about anything to hit a webhook or integrate into Slack or elsewhere nowadays. Some examples are https://littlebirdie.io/ (which I use to filter Hacker News), Zapier for RSS and filtering feeds, Google Calendar, etc. Nothing complicated: just know what you need from a specific service, then figure out a way to get it to yourself in a recurring, distraction-free manner, be it an existing add-on or something you can roll yourself.


Wait, isn't having a bot send you notifications all day the opposite of "distraction free"?


Not OP, but I think the idea is that if a notification makes it through the system, it has been filtered to be worthy of the distraction. So there's no cognitive cost to the user of asking "is this notification worth reading?", since that has already been determined.


You're right. On average I probably get about a dozen, spaced out during my working hours, 10am to 4pm. On a busy day I might get a couple dozen, but the intervals make these easily manageable, and I always have priority: if I see something from Gilfoyle in alerts, that's probably more pressing than news that a company I'm following just released unaudited financials. The setup might sound odd, but in reality, as said above, anything that makes it through has some stake in something I'm invested in and want to keep an eye on.


What does this cost?


Costs me nothing. Some channels eventually reach the end of their usefulness, like a topic I was interested in that amassed some 500+ mentions on Twitter, or business news from 8 months ago about someone getting acquired. I can export the messages and delete the channel to avoid history archival. The only real cost is the once-or-twice-a-year maintenance and improving bits and pieces every now and then.


I think the better question is how long did it take you to set this up initially?


Right. I started with RSS initially; it wasn't done in one go but over a period of time. If I had to give a definitive answer, though, I haven't spent more than a couple of hours total on this, and I've been using it for close to 2 years now. I did a recap last night: I have a total of 17 bots; some post daily, others I see once a month. I'm also seeing tools shared here, such as Huginn, that could make this a lot easier.


There’s no subscription charges or hosting costs?


None whatsoever, other than my server, which is unrelated (the same service could run locally). On it I have a Twitter bot that listens for "save this" from my main account and will unroll a thread and format it nicely for reading later. See https://threadsja.com/ThreadsJa/1214555011289690112 for an example. It used to be personal, but I wiped it, got a domain, and other people are slowly starting to use it.

So just about every service I have connected to it is on a free tier, except Ploi (which also has a free tier), but I need that for actual work and it's too good not to pay for.


would love to talk to you more about how you set this up


You can see the discussions under my main comment. Pretty much just make a Slack workspace and start finding integrations for the services you use. Zapier, for one, makes it super easy to filter what gets through to you, but I'm also seeing good things about https://github.com/huginn/huginn now.


I use YNAB (https://youneedabudget.com/), which requires inputting every one of your transactions by hand or importing them from your bank. The downside of the import is that it takes a few days for transactions to show up, and you need to give YNAB your bank username/password, which I'm not comfortable with.

I wrote https://github.com/buzzlawless/ynab-live-import to import credit card transactions instantly, with no need to give up my bank credentials.

The whole stack runs on Amazon Web Services. Simple Email Service receives a notification email from the bank that I've made a purchase, saves it to S3, and triggers a particular lambda function tailored to whichever bank the notification came from. The lambda function retrieves the email from S3, parses it for transaction data (account, payee, amount, date), and writes that data to a DynamoDB table. The table has a stream enabled, which triggers another lambda function when the table is updated. The function reads the transaction data from the stream and posts the transaction to YNAB using their API.
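
The bank-specific part is just the email parsing. As a hypothetical illustration (this is not the actual ynab-live-import code; the notification wording and field names are invented), a Lambda parsing step might look like:

```python
import re

# Hypothetical notification format: "You made a $12.34 transaction with STARBUCKS"
PATTERN = re.compile(
    r"\$(?P<amount>[\d,]+\.\d{2}) transaction with (?P<payee>.+)"
)

def parse_notification(body):
    """Extract transaction data from a bank notification email body.

    YNAB's API expects amounts in milliunits (1/1000 of a currency unit),
    negative for outflows, so $12.34 spent becomes -12340.
    """
    m = PATTERN.search(body)
    if not m:
        return None
    amount = float(m.group("amount").replace(",", ""))
    return {
        "amount": -int(round(amount * 1000)),
        "payee_name": m.group("payee").strip(),
    }
```

Each bank gets its own pattern, which is why the stack routes each notification email to a lambda tailored to the bank it came from.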

I've mentioned this on HN once or twice before and got some positive interest with people even submitting pull requests, which is awesome :) Going to find the time soon to review those and maybe add more features


> you need to give YNAB your bank username/password

I am constantly amazed by the people that are willing to do this for various value-added finance sites; people that I would normally consider sane. Unless your bank provides (and you used) a read-only account authentication, if something goes wrong and your money disappears from the bank, the bank is going to tell you to take a hike. I can't imagine taking this kind of risk with my money.


Sorry, but there’s no bank in the world that would tell you to simply “take a hike” if your password was breached and your money was stolen.


They will if you literally give your credentials to a third party when they explicitly say not to give out your details anywhere; they're only for use between you and your bank.


Giving away your password voluntarily and knowingly is not a breach.


For me, the username/password gives you read-only access and any kind of transaction requires 2FA.


That sounds a lot more complicated than just downloading bank statements and uploading them to YNAB, and it adds a lot more points of failure.


Downloading and uploading bank statements is a manual process, unless you're willing to give the service your bank ID/PW, right? It seems to me that if you set up your bank accounts to send notification emails for any balance change, you can automatically reflect the balance without needing to deal with sensitive credentials.


It does, but it also keeps YNAB more in sync with your bank account, since transactions will typically take a couple days to show up in the statement.


> transactions will typically take a couple days to show up in the statement

Wait, in what century do you live that your online banking transaction overview doesn't have up-to-date data? Also, your bank allows you to log in with just a username/password?


A banking "statement" usually refers to a PDF you can download; the electronic version of the thing they'd previously send you by mail.

The online banking CRUD interface might have up-to-date data, but it isn't an API, and being behind some weird proprietary single-sign-on setup makes it pretty hard to scrape, too.

> Also, your bank allows you to log in with just a username/password ?

Yes, this is common in the US (and here in Canada, too.) We sometimes get asked for "security questions", but support for 2FA is extremely rare, and I don't think there's any bank in North America that requires 2FA to login to your online banking. (The "government or bank issues you a smart card that can be used for session encryption; bank issues you an adapter to plug it into your computer" thing doesn't happen here.)


> The online banking CRUD interface might have up-to-date data, but it isn't an API, and being behind some weird proprietary single-sign-on setup makes it pretty hard to scrape, too.

That's why they also provide an API. They are required by law to do so (https://en.wikipedia.org/wiki/Payment_Services_Directive).

> I don't think there's any bank in North America that requires 2FA to login to your online banking.

This is required by the same law.


Bank of America only added 2FA as an opt-in option starting in 2017. It is most definitely not required, law or not.

I'm pretty certain it's the same way with every other major US bank.


@Aaargh20318 could've mentioned that he linked to the EU laws about banking.


> > I don't think there's any bank in North America that requires 2FA to login to your online banking.

> This is required by the same law.

FYI: no bank has to follow EU laws in North America.


Regarding statement freshness, derefr already answered. Regarding username/password, in Brazil at least, yes (sort of): you log in with an account number and a password, but it's a kind of "read-only password". You can see statements and such with it, but if you want to actually move money, a different password is required.

Trivia: this is used by a Mint-like company in Brazil called GuiaBolso - you give them just the "read-only password".


Sure, but a database? Isn’t that part of what YNAB provides?


One of the key features of YNAB is using it to actually inform you before you make purchases. By having it near instant (manual, or by this method), your decisions are fully informed when you make them.


Yes, but a DB of "transaction notifications" your bank sent you could still be useful so you can reconcile/debug if needed, much like the actual bank statement.

I do agree that the DB could be removed from the system, adding the transaction directly from the SES-triggered event, and that would work for most cases.


Yeah, it could probably do without it, but I initially made this for a hackathon, so it's by no means perfect.


It probably started as a side project that would specifically let them experiment with those technologies.


It's an amazing job you've done here. I've been tinkering with exactly the same idea (and the same implementation) but haven't had the time to go ahead. Your work will instantly jumpstart my own use case (Visa and Beancount), and I thank you for that. Kudos!


Thank you! Appreciate the kind words


> you need to give YNAB your bank username/password

Why don't they have oauth for this?


What do you do about cash transactions?


Not OP, but there's not much to do about them. The best way is to enter them manually in the app at the time of purchase, or write them down in a notebook and enter them in a batch every couple of days or once a week.

If your cash spending is sufficiently small that you don't care to track each cash transaction individually, you can just track cash as a category instead of an account: simply record an outflow transaction in your bank account when you withdraw.


I don't use ynab anymore, but when I did, I usually just didn't include cash in it. I almost never use cash, and so it made sense to just deduct the money when it became cash.


I used to work at an architecture firm that specialized in design and manufacturing. One of our more popular products was facades made from aluminum panels with "graphic perforation", a pattern of holes made to form an image at a distance. As you can imagine, for larger projects the number of unique pieces to be cut was into the hundreds. This represented a lot of time for our CAM programmers because the (quite mature and industry standard) CAM software we used was incapable of determining the proper cut orientation without manual input. This meant clicking holes a bunch of times per panel to switch which side of the curve was cut. No Bueno.

One of our primary CAD suites was Rhino which is very nice and has a great python API. So I wrote a full-fledged 2.5D CAM processor for that. This allowed us to batch process hundreds of these parts with a single click.


Playwright is a new alternative to Selenium, and besides being cross-browser, is focused on performance, developer ergonomics and reliability (death to flakiness!). Might be worth checking out? https://github.com/microsoft/playwright.

Disclaimer: I work with the Playwright team at Microsoft.


Hi thanks for mentioning your project. Are there any plans to use it for automated UI testing for desktop apps? Selenium supports it (more or less) with WinAppDriver, which is another project from Microsoft.

https://github.com/microsoft/WinAppDriver


Desktop support is definitely on our long-term roadmap! We’re simply starting web, since it’s an area we think we can make big improvements to, before moving on to the broader developer landscape.

As we build out Playwright, we hope to learn a lot about how to make automation more reliable, ergonomic, and enjoyable for developers. Ideally, we could then apply those principles/learnings to desktop, mobile, etc. automation too. Stay tuned!


Might check out Sikuli


Looks promising! Although most of my problems (but not all) were related to the environment rather than Selenium itself.

> setTimeout-free automation

How? You still need an upper limit for operations.


Each page operation (e.g. click) automatically waits for the specified element to become visible before being performed. That way, your automation code is declarative about what you want to do and to which element(s), and you can allow the Playwright framework to handle any "waits" on your behalf (without having to rely on time as a pseudo event).

To make the act of selecting elements more resilient to change, Playwright also supports a collection of “selector engines” that allow you to choose the best strategy for selecting elements (via CSS selector, xpath, text content). Additionally, you can author entirely custom selector engines, and compose them together with the built-in types. This is still somewhat experimental, but we think this can enable another level of robustness to test automation: https://github.com/microsoft/playwright/blob/master/docs/sel....

Finally, in addition to waiting for element visibility, Playwright allows you to wait for specific page events to occur (e.g. a network request being made). This enables your code to leverage deterministic "signals", not timeouts (which are the source of a lot of flakiness!). Overall, we want it to be possible to author automation that is entirely event-driven and includes zero arbitrary timeouts.
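
The "signals, not timeouts" idea isn't specific to Playwright. As a toy illustration of the principle in plain Python (not Playwright's implementation): instead of sleeping for a guessed duration, the waiter blocks on an event that fires exactly when the thing it needs has happened:

```python
import asyncio

async def page_load(ready: asyncio.Event):
    """Simulates a page whose load time varies from run to run."""
    await asyncio.sleep(0.05)  # could be 5ms or 5s; the waiter doesn't care
    ready.set()                # the "load finished" signal

async def click_when_visible(ready: asyncio.Event):
    """Block on the signal rather than sleeping a guessed number of seconds."""
    await ready.wait()
    return "clicked"

async def main():
    ready = asyncio.Event()
    _, result = await asyncio.gather(page_load(ready), click_when_visible(ready))
    return result
```

A fixed `sleep(2)` in place of `ready.wait()` would be both slower (when the load is fast) and flaky (when it's slow), which is exactly the failure mode being described.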


What about smart selectors? The first run uses the defined selector but also saves other attributes. If on the next run the element can't be found by the defined selector, those saved attributes are used to locate it.


Keen to check it out. I often wonder whether the flakiness of WebDriver/Selenium brings more pain than value; I see so many sites disabling tests based on it because they sometimes work and sometimes don't, and so many hours get plowed into it.

Edit: Missed the Microsoft ref. Not so keen.


Out of curiosity: why does the fact that Playwright is a Microsoft-sponsored OSS project discourage your interest?

The team is working very hard to be open/transparent/available (https://github.com/microsoft/playwright), and we’re very excited to build a better automation stack with the help of the community. I only ask since I’d love to hear any feedback for how we could improve the way we run/position the project. Thanks!


Check out https://www.cypress.io. Been super happy with it!


Can you attach it to an existing browser window/session that is already open? I.e., I navigate and authenticate to a site, run code, and it continues from there with what's on screen?


Playwright allows you to connect to existing browser instances, but it currently requires you to get the “debug URL” of the browser instance first: https://github.com/microsoft/playwright/blob/v0.11.1/docs/ap....

I’d love to hear a bit more about the scenario you have in mind, since that might help inform some improvements we can make. Thanks!


The issue I'm having with Selenium (well, Watir with Ruby) is booting up some automated scraping logic on a page that may be in my current session, once I've done the whole authentication dance manually (so I don't have to focus on automating that piece initially).


Ah, this looks great. I'm super happy I'm learning about this project. Do you guys have any plans to add extensions à la puppeteer-extra?


As part of our goal to enable better developer ergonomics, we’re very keen to explore “better”/higher-level APIs and additional extensibility points. For example, we allow you to define custom “selector engines” (https://github.com/microsoft/playwright/blob/master/docs/sel...), which is just one example of the kinds of customizations we want to enable.

Out of curiosity: are there any specific extensions from puppeteer-extra that you’ve used and/or would be interested to see?


Looks great. The ones I use the most are probably: puppeteer-extra-plugin-block-resources, puppeteer-extra-plugin-anonymize-ua, puppeteer-extra-plugin-adblocker, and puppeteer-extra-plugin-stealth (this one the most heavily).

Cheers!


Thanks for the heads up! Selenium test flakiness is definitely a big pain for us. I'm also a big fan of the Puppeteer API. We will check it out.


Great! Let us know if you end up having any questions/comments/feedback: https://github.com/microsoft/playwright.


Just tried the examples on our site and they work really well! Amazing project!

PS: I remember interacting with you from the early days of code-push I think :)


Hey! I definitely remember you :) It’s always satisfying to run into the same folks across different projects, over the course of multiple years.

I’m glad to hear the examples worked out of the box! We’ve been iterating on the docs and API quite a bit (based on feedback), so don’t hesitate to let us know if/how they can be improved: https://github.com/microsoft/playwright.


Any plans for a visual editor like Kantu?


Low/no-code authoring is definitely something we’re interested to explore. Once we’ve solidified the core Playwright library, we plan to build (and collaborate with the community on) tools that can simplify automation tasks even further, on top of a modern, reliable and performant core.

Are you currently using Kantu? If so, I’d love to hear more about your specific use cases, and which capabilities you’d be interested to see. Thanks!


Love the project, just installed it and trying it out now, got some nice results right away.


Awesome! Please don’t hesitate to let us know how we can improve. We want to make web automation not only more reliable, but also more enjoyable. So we’re keen to hear how we can continue to push that goal forward :)

https://github.com/microsoft/playwright


Plugging another alternative to the horrible Selenium, made by two friends of mine! https://uilicious.com/


Just here to let you know the Selenium developers are human and read HN, too.


I don't like the tool; I didn't say anything about the developers. Not the finest choice of words though, I agree.


Doesn't seem open source?


It’s not.


> my goodness, it'd be slow and potentially flaky all via the web UI for those sites

Generally, I've had better luck using undocumented APIs for this kind of stuff. I was a heavy user of Selenium to automate many a task (I work in wholesale construction goods; tons of automation needed everywhere).

But then I discovered that using internal APIs (which surprisingly don't change much) is far easier than trying to exception-manage changing UIs.

Open up the dev tools in your browser and watch the GETs/POSTs as you complete your daily tedium. I use Python (usually just requests and BeautifulSoup is enough) to mimic the calls, and I have yet to find a use case where I wasn't able to automate something online.
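
In practice that workflow is: copy a request from the Network tab, replay it, and pull fields out of the JSON. A stdlib-only sketch (the endpoint, headers, and field names are all placeholders for whatever you find in dev tools):

```python
import json
import urllib.request

def fetch_json(url, headers=None):
    """Replay an internal API call spotted in the browser's Network tab."""
    req = urllib.request.Request(url, headers=headers or {})
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def cheapest_fare(payload):
    """Pick the lowest-priced entry out of a hypothetical fares response."""
    return min(payload["fares"], key=lambda f: f["price"])

if __name__ == "__main__":
    # Placeholder endpoint; copy the real URL (plus any auth headers/cookies)
    # from dev tools while doing the task by hand.
    data = fetch_json("https://example.com/api/fares?from=SYD&to=MEL")
    print(cheapest_fare(data))
```

The auth headers and cookies you copy along with the URL are usually what keeps these calls working without a browser in the loop.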

I'm currently developing an SDK for our 18th-century ERP system, which has no API. I've also automated getting shipping-container tracking data from various shipping line and railroad company websites, many with complicated login processes.

Happy to chat, email in profile.


Yup. I did this with a local Android app that tells you the GPS location of a public bus. I sniffed the HTTP packets, found the requests I needed, and made a Python script that pinged me on Slack when my bus was 5 minutes from my stop. Useful when I worked at night.


I'm familiar with the web dev tools, but I have no clue about mobile. Can you point me in the right direction on how to do the same?

Were you using an Android emulator on desktop, or is there some way to easily do this on Android?


I use mitmproxy or Charles Proxy with an Android emulator running an older version of Android, so it allows a man-in-the-middle without being finicky about certificates.


Nice, that's the next step up.

Man-in-the-middling Android apps to discover their APIs (sometimes a different API backend).

I did the same with a shipping line which turned out to use a SOAP backend.


(Facepalm)

I read your post.

Paused.

Opened up debugger on the site I'm currently selenium-ising.

Reloaded.

Boom, there it is. The request, and a beautiful JSON response.

So... so... so much simpler.

Thank you, my evening is sorted ;)


Haha.. love it, I'm so glad it helped.

I had the same experience a couple years ago after spending months selenium-ing our ERP software.


Finally got to it last evening. Twenty minutes of playing with JSON libraries, and I've got a ~10-line script that runs in under a second and produces output that would have taken probably a minute to get in Selenium, and would occasionally have been flaky too.

/starts looking at other APIs to use!

Thanks!


Awesome! enjoy the rabbit hole :D


Good point; using the APIs directly is also the cornerstone of stable test automation.


Ha ha, yep. I do this kind of thing all the time.


I have 141 custom scripts in my $HOME/bin. Most of them are in bash, but there are a couple of node.js scripts as well.

- I mostly automate my bookkeeping with a set of recurring & dependent taskwarrior tasks and scripts as annotations that I run with taskopen[1]. That's creating a bunch of folders, turning some emails in mutt into PDFs, gathering PDFs from emails, fetching bills with selenium, moving files from $DOWNLOADS into the appropriate bookkeeping folder, putting a date on some files, turning the whole thing into a zip file, and sending it to the bookkeeper with mailx.

- I automated sending my daily summary email to my clients with mailx (so I can send it directly from vim)

- I automated turning screen recordings into thumbnails+mp4 link (since GitHub only supports gifs)

- I automated making before/after screen recordings for when I do noticeable performance improvements (page load/animations)

- I automated booting/killing my development servers

- I automated making PRs with `hub pr` (finding the origin/upstream, putting labels, etc.)

- I bound switching to the logs of specific development servers to a key combo

- I turned my client's time tracking (tempo) into a CLI because I got tired of using the UI to say I worked X hours on that ticket and 7.5 - X on the other. Now I only do `tempo log $ticket1 2h $ticket2 3h $supportTicket rest`

[1]: https://github.com/ValiValpas/taskopen
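
The `rest` trick in that tempo CLI (log explicit hours per ticket, then dump the remainder of the day on a catch-all ticket) is easy to sketch. A hypothetical stand-in for just the splitting logic, assuming a 7.5-hour day; the real tool lives in the commenter's dotfiles:

```python
def split_day(args, day_hours=7.5):
    """Turn ["TICK-1", "2h", "TICK-2", "3h", "SUP-9", "rest"] into
    {"TICK-1": 2.0, "TICK-2": 3.0, "SUP-9": 2.5}.

    At most one ticket may claim "rest"; it gets whatever is left of the day.
    """
    hours = {}
    rest_ticket = None
    # Walk the arguments as (ticket, hours-spec) pairs.
    for ticket, spec in zip(args[::2], args[1::2]):
        if spec == "rest":
            rest_ticket = ticket
        else:
            hours[ticket] = float(spec.rstrip("h"))
    if rest_ticket is not None:
        hours[rest_ticket] = day_hours - sum(hours.values())
    return hours
```

From there it's one API call per ticket to log the time, which is the part that actually replaces the UI clicking.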


How do you like using Node for CLIs? I started playing around with it a couple of years ago, and at this point I don't even consider Bash scripts if they involve the slightest bit of logic. Part of me feels that by going all-in, I'm going to lose touch with my ability to write plain Bash scripts, which are likely to work literally forever. Meanwhile, I expect my Node scripts to have a relatively short lifespan, particularly since I'm leaning on npm. It's been so nice, though, that I've decided it's a worthy tradeoff.


I've been doing pretty much the same thing. If the logic gets crazy, or if I want lots of subcommands, I write it with node & commander. There's only so much you can do with `jq` when it comes to processing JSON.


Would you be willing to share the code for Tempo automation? I am in market for the same thing.


Sure! It's in my dotfiles. You can find it here [1].

There's the `excel` subcommand you probably don't need and you'll need to export the JIRA_TEMPO_USER and JIRA_TEMPO_TOKEN variables before running it.

- `JIRA_TEMPO_TOKEN` can be found in the Tempo Settings

- `JIRA_TEMPO_USER` can be found by inspecting the requests sent to tempo from JIRA IIRC.

[1]: https://github.com/charlespwd/dotfiles/blob/master/bin/tempo

EDIT: Confirmed. I found my `JIRA_TEMPO_USER` as the only accountId sent as a parameter in the POST to https://app.tempo.io/rest/tempo-timesheets/5/private/days/se... when booting up the app in JIRA.


Thank you! Gonna tweak it a bit but it’s a great base for what I was just looking for :)


I run Home Assistant with Zigbee devices for automatic lighting at home:

- switch on when I get home and the sun has set

- switch on when I'm home and the sun sets

- switch on in the morning, full brightness, coldest color to help waking up (really helps me a lot and is not as stressful as an alarm clock)

- automatically turn the lights off when I (well, my smartphone) leave home (not visible on the WLAN for 10 minutes)
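
For anyone curious, Home Assistant automations like these are plain YAML. A sketch of the "switch on when I'm home and the sun sets" rule (entity IDs are placeholders for whatever your setup exposes):

```yaml
automation:
  - alias: "Lights on at sunset when home"
    trigger:
      - platform: sun
        event: sunset
    condition:
      - condition: state
        entity_id: person.me
        state: home
    action:
      - service: light.turn_on
        entity_id: light.living_room
```

The "leave home" rule is the same shape with a `state` trigger on the phone's presence entity and `light.turn_off` as the action.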

Will look into more automations there. Perhaps something about turning the heat up in the bathroom in the morning when the windows are closed (there are sensors for that).

I also ran NodeRed on a Raspberry Pi for some time, but I found it helped me only a little. I mainly did logfile analysis with it and didn't look into other use cases.

For work I've got the de-facto standard for the printing industry, I'm doing everything possible with that.


I'm interested: why do you use the coldest colour for waking up? I do something similar, but use a sunrise simulation instead, going from warm to cold.


Hm, maybe it's self-inflicted torture. I just feel uncomfortable with the blueish light and will get up quickly to leave it behind me. Maybe I will give your sunrise simulation a try; it might work just as well, not sure.


Cold light helps you wake up in the morning, since it's more stimulating. That being said, it shouldn't be harsh like midday light; closer to something in between. I've heard it described as more of a green light than a blue one.


I manage a family office and do monthly performance presentations in PowerPoint. The slides have a unique design, and updating them every month became extremely tedious and error-prone.

I was already looking for an excuse to learn Python and this was perfect.

Instead of recreating the slides, I created templates (just the last set of presentations) and update them with info queried directly from the database, using python-pptx and pyodbc.

This isn't exactly an example of a "tool", but I think the most important part isn't really the tool; it's being able to identify what you can automate. How you do it might range from DIY to paying someone to code for you, but the feeling after getting rid of manual processes is great!


From the past 10 years of side projects, I can definitely confirm that the most important skill one can have is recognizing what can be automated, and especially what is worth the effort of automating. Your example is definitely a good one!


Also at an FO here; this is great. We started templating everything last year and it's been a huge relief. We also turned our project management in Asana into templates; things are simple but flow so much easier.


Nice! I work at a quant fund and we use python-pptx, too.


Check out revealjs.com


Reveal is good, but it's very hard to recreate PowerPoint slides with it.


Yes, xkcd has a nice strip for this called "Is It Worth the Time?"

https://xkcd.com/1205/


I spend a lot of time with web automation

Especially for public library catalogues. The webpage shows when the borrowed books are due, and you need to check it every week or pay late fees. And you cannot just set a bookmark, since you need to log in first, which is tedious. So I automated it to renew all books and show me a warning.

And because that needs to run silently every day, I decided to make it as fast as possible (also because I only had a single-core CPU with less than 1 GB of RAM). No browser, no JavaScript, no Selenium, no Python, no Java. The only viable languages would be C or Pascal, and for memory safety I implemented everything in Pascal.

Because it is also tedious to build HTTP requests in native code and recompile all the time, I then wrote a scripting language for it. All the web scraping is done in the script, but it is still fast because the processing happens in native code.

To read data from the HTML, I use pattern matching, e.g. <a href="{.}"/>+ would read all links, and <a href="{.}">foo</a>+ would read all links with text foo.

Nowadays I do not even visit public libraries anymore, but the automation tool has become an open-source project at http://www.videlibri.de/xidel.html .

I also started playing some browser games, but they were boring, so I stopped. But I wanted to stay on the highscore list, so I wrote bots for them. Because they have anti-cheat detection, you cannot send the HTTP requests directly. For JavaScript-based games, I wrote a bot with Greasemonkey. For Flash games that does not work, so I decompiled the Flash file, added the automation functions to it, and then recompiled it.


I use Home Assistant to automate a lot of stuff around my house. The one I use at least daily is a trigger based on the light switch in my bedroom. A double click down triggers an automation that turns off all the lights in the rest of the house, makes sure the garage doors are closed, and that the rest of the doors are locked.
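For anyone curious what such an automation looks like, here's a rough sketch in Home Assistant's YAML. Everything here is a placeholder: the event type depends on your switch's integration, and the device ID and entity IDs are made up.

```yaml
automation:
  - alias: "Bedroom switch double-click: goodnight"
    trigger:
      - platform: event
        event_type: zha_event                      # depends on your switch integration
        event_data:
          device_ieee: "00:11:22:33:44:55:66:77"   # made-up device id
          command: "double_press"
    action:
      - service: light.turn_off
        entity_id: all
      - service: cover.close_cover
        entity_id: cover.garage_door
      - service: lock.lock
        entity_id: lock.front_door
```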


One future Home Assistant project that I would love to implement is a whole house lighting system that gradually transitions all the lights from white/blue in the morning, to yellow/orange in the evening to aid natural melatonin production. Such a project would have to be fully automated, otherwise I'd forget to set it each day. Blue LEDs are terrible for sleep and throw off your body's natural sleep pattern. I got the idea from the excellent book "Why We Sleep"[1]. It's possible that a similar HA project already exists.

[1] https://en.wikipedia.org/wiki/Why_We_Sleep


I use Circadian Lighting [0] component for this one. Works pretty well all around the house.

[0] - https://community.home-assistant.io/t/circadian-lighting-cus...


Wow, very nice. This looks like exactly what I had in mind.


Look into philips hue. You can set the lights so they turn on at a different color temperature depending on time of day. Some of the remotes/switches even come pre-programmed to do that right out of the box.


Good to know thanks.


You can do this with any "smart" bulb that supports schedules. It is not very hard.

I used to have a little script running on my computer that changed one bulb’s color based on local weather forecasts and the time, so that I never had a screamingly bright light telling me it was about to rain in the middle of the night. That’s when it starts to get vaguely complicated.


I did this with good results, using warm white and daylight white LED strips, a couple of PWM dimmers, and an arduino, at a former apartment a few years ago. I have a similar setup now but it's a single LED strip featuring both color temps, with its own remote controllable dimmer for brightness/color temp; I haven't bothered to automate it.


My brother prototyped an LED panel ages ago that does that. It made for a good project, and it's still hanging in his office.

https://lbnc.ca/luz-lightbulb-meet-sunlight


I use a lot of automation around my house too. I noticed that my wife and kids tend to leave certain lights on frequently, so I replaced them with motion sensor switches that automatically turn on/off the lights. Having them turn on the lights became a very nice feature.

I also have all the outside lights on wifi light switches that turn on automatically at dusk. There are a few lamps inside that turn on automatically too, so it looks like someone is home, and it creates a nice ambiance when we actually are.

The lamps also happen to use RGB Hue bulbs, so they're normally a white shade, but if one of the outside motion sensors (also Hue) is triggered, then they turn red, and if it's late enough that the outdoor lights are off, then it automatically triggers the wifi light switches to turn on all the outside lights.

Amazon echo show is great for checking cameras. Logitech harmony remote is great for starting up the tv/media box/amplifier and configuring them to their correct settings each time someone wants to watch tv. The automatic refill kit for the keurig keeps the water tank full. myQ lets me know if one of my garage doors was left open. A wemo plug turns on my hot water circulator during the hours when we're usually home and off when we're not. Nest camera lets me know when it spots a person on my driveway.


You mention Selenium; I use Selenium IDE.

It's a browser plugin that allows you to record and play back anything you do in your browser.

https://selenium.dev/selenium-ide/

There is a right-click action to record hovering over a section if it is hidden by CSS; it's a well-known gotcha in WordPress admin listings and other sites that love to hide things until you hover over a section.

I've yet to look into whether the file format saved from Selenium IDE could be used within a headless browser like PhantomJS or the like.

For direct scraping, I tend to go straight to XPath with the DOM.

It's far more reliable than trying to get stuff out via regular expressions, which I only use when I need a certain part of a text node's value.
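The XPath-over-DOM idea translates to most languages; as a rough sketch, here it is with nothing but Python's standard library (ElementTree only supports a limited XPath subset and needs well-formed markup; messy real-world HTML usually wants lxml or similar first, and the URLs below are invented):

```python
# XPath-over-DOM link extraction, sketched with the standard library.
import xml.etree.ElementTree as ET

html = """
<html><body>
  <a href="/deals/syd-lax">SYD-LAX</a>
  <span>not a link</span>
  <a href="/deals/mel-sin">MEL-SIN</a>
</body></html>
"""

root = ET.fromstring(html)
# './/a[@href]' is within ElementTree's supported XPath subset
links = [a.get("href") for a in root.findall(".//a[@href]")]
print(links)  # ['/deals/syd-lax', '/deals/mel-sin']
```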

I used to use PHP for that, but I wrote my own programming language that uses a library called htmlquery (it's a Go library):

https://github.com/antchfx/htmlquery

If I have to log in, I fake each request with the correct headers. Plenty of libraries around that help with that in every language.


I threw together a quick Python script which scrapes listings from cinemas local to me, along with each movie's IMDb score, and compiles the results into a weekly Friday-morning email that I send to myself and my SO.

It's nice to know what movies are out, and when we can book, without having to remember to check or wait for pushy emails from the cinemas themselves.

I'm planning on doing the same to email us the latest menus from our favourite restaurants/cocktail bars so we can see when they change.
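The compile-and-email step of a digest like this is simple to sketch. Everything below is illustrative: the listing fields and the score cutoff are made up, and the actual scraping and SMTP sending are left out since they depend on the sites and mail provider.

```python
# Turn scraped cinema listings into a weekly digest email body.

def digest_body(listings, min_score=0.0):
    """listings: [{'title': ..., 'imdb': ..., 'cinema': ...}, ...]"""
    lines = ["This week's films:", ""]
    # Best-rated first; skip anything below the cutoff
    for f in sorted(listings, key=lambda f: f["imdb"], reverse=True):
        if f["imdb"] >= min_score:
            lines.append(f"  {f['title']} ({f['imdb']:.1f}) - {f['cinema']}")
    return "\n".join(lines)

films = [
    {"title": "Parasite", "imdb": 8.6, "cinema": "Odeon"},
    {"title": "Cats", "imdb": 2.8, "cinema": "Vue"},
]
print(digest_body(films, min_score=5.0))

# Sending would then be a few lines of smtplib/EmailMessage,
# driven by a Friday-morning cron entry.
```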


Would love to see this turned into a website that does recommendations based on zip code.


I've done various versions of this myself, would you mind sharing your code?


Probably off initial topic, but:

- FileJuggler on Windows, to automatically clean Desktop, Download folder and archive everything into a date-based folder structure in Drive.

- AutoHotKey to insert text templates into everything (e.g. current date, formatted Jira issues, annoying things to type)

The problem with automation is that you want to break even quickly [time saved > time spent finding/setting up/maintaining], otherwise automation is just a glorified delayed action. Time saving isn't the only reason to automate, but I assume it's the No. 1 reason for personal use.


A perhaps underappreciated aspect of automation is friction reduction. Having a semi-automated process can be what it takes for you to stop avoiding some little chore, or what makes you no longer suffer as you do it.


I am really susceptible to friction. It's why my bike is at my front door, and why I don't stack things. Adam Savage has a principle he calls First Order Retrievability that helps reduce friction when using tools around his workshop.

This thread made me think about automating my invoicing. If all it did was mail me the completed PDF to then forward to clients, that would definitely make me do it more consistently on the 1st of the month even though it's not completely automated.

https://hackaday.com/2015/02/28/adam-savages-first-order-of-...


Some more items that I've setup in AHK:

- Launching commonly used apps, like Notepad, Windows Explorer or Sublime Text

- Launching commonly used websites in a browser, e.g. gmail

- Paste from the clipboard into a language translation site (languageX to languageY via URL parameters)

- Open a to do list in Excel (hard coded file path)

- Text expansion for personal and work email addresses

- With one keypress, open the train schedule on the website, type in the 'from' and 'to' stations, search

- Text expand to symbols that I always forget, eg. euro symbol, hash sign etc

- Google/Wikipedia search clipboard text


Hopefully you are proficient enough in AHK to use it directly; for everyone else I suggest Lintalist:

https://lintalist.github.io/

which is an AHK-based tool, very easy even for the "casual" user (like I am).


Thanks for linking this, I hadn't heard of this tool before (I've been using AHK for many years).

Another very useful tool that I recommend for automating PixelSearch and the like is Pulover's Macro Creator:

https://www.macrocreator.com/

It spits out AHK code after recording actions (e.g. a long mouse and keyboard sequence), it's a great time saver for more complex scripts.


> Text expand to symbols that I always forget, eg. euro symbol, hash sign etc

Yes! That's a big time saver on my end too.

I was also using it to insert emojis, but the new `Windows+.` shortcut is actually useful for that.


Take a look at Huginn [0], it's very powerful with the existing agents for taking input, parsing, and outputting data to different locations.

I use it mainly for when I can't do things in IFTTT or have sensitive things I'd prefer to keep in a self-hosted system.

0. https://github.com/huginn/huginn


Now that you mention IFTTT: I recently visited their website and found out it has a webhook trigger, an HTTP endpoint where you can GET/POST data and trigger actions with it.
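The outbound direction also works: the Webhooks service lets a script fire an IFTTT applet. The URL scheme below is IFTTT's documented one, but the event name and key are placeholders you'd get from your own Webhooks settings; a stdlib-only sketch:

```python
# Trigger an IFTTT applet from a script via the Webhooks service.
import json
import urllib.request

def ifttt_url(event, key):
    return f"https://maker.ifttt.com/trigger/{event}/with/key/{key}"

def trigger(event, key, **values):
    # IFTTT passes up to three ingredients, named value1..value3
    payload = json.dumps(values).encode()
    req = urllib.request.Request(
        ifttt_url(event, key),
        data=payload,
        headers={"Content-Type": "application/json"},
    )
    return urllib.request.urlopen(req)  # network call; not run here

# trigger("deal_found", "MY_KEY", value1="SYD", value2="LAX", value3="$450")
```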


Too many to list here, but one of my most useful ones is a Python script to help pay out affiliates.

It crawls through monthly CSV files provided by my payment gateways, finds sales and refunds by a specific affiliate, calculates what I owe them, and returns a dollar amount, a sales breakdown for each course, how to send it (PayPal, wire, etc.), and who to send it to.

It's nice because it takes around 2 minutes to pay out everyone each month, with very high confidence in the results. Before the script, it was a stressful workflow with many chances for human error. I only did it manually once before automating it.
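A toy version of that aggregation step is below. The column names, commission rate, and sample data are invented; a real gateway export will differ, and real money math should use `decimal.Decimal` rather than floats.

```python
# Walk a gateway CSV and sum commission per affiliate,
# treating refunds as negative sales.
import csv
import io
from collections import defaultdict

def payouts(csv_text, rate=0.30):
    totals = defaultdict(float)
    for row in csv.DictReader(io.StringIO(csv_text)):
        amount = float(row["amount"])
        if row["type"] == "refund":
            amount = -amount
        totals[row["affiliate"]] += amount * rate
    return dict(totals)

sample = """affiliate,type,amount
alice,sale,100.00
alice,refund,100.00
bob,sale,50.00
"""
print(payouts(sample))
```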

Another script I use helps me invoice my freelance clients each month. It's a Bash script and is open source https://github.com/nickjj/invoice. It saves me a ton of time every month since I can calculate how much I need to invoice folks in a few seconds with it.


https://smmry.com/about

Summarizing long text-based articles! Found this on HN. Now when my family/ friends send me stuff they haven't actually read ... at least I can reply somewhat intelligently.


I'd still love to be able to self-host this.


Its development stalled long ago, but you could try Open Text Summarizer [0]. It is available in the Debian repos [1], and there's a web interface [2] to see if it fulfills your expectations.

[0] https://github.com/neopunisher/Open-Text-Summarizer/

[1] https://packages.debian.org/search?keywords=libots

[2] https://www.splitbrain.org/services/ots


This is fantastic! Just tried it with a few blog posts. Thank you.


About 10% of the time I have to copy-paste the text in manually.


Finally, a sane way to read New Yorker articles!


I have many cron jobs in AWS Lambda functions for many tasks.

I wrote a blog post about how to monitor a competitor website using Python / Lambda and serverless framework for deployment: https://dzone.com/articles/monitor-your-competitors-with-aws...
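With the Serverless Framework, the "cron job as a Lambda" part is a few lines of config. This fragment is illustrative only; the function and handler names are made up:

```yaml
# serverless.yml (fragment): a daily scheduled Lambda
functions:
  checkCompetitor:
    handler: monitor.handler        # made-up module/function name
    events:
      - schedule: rate(1 day)       # or an AWS cron expression:
                                    # cron(0 6 * * ? *)
```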

For lots of workflows we also use No-code/Low-code tools like Zapier / Integromat.


How much does that cost?


I used Google Apps Script and Google Forms for some of my automation, like:

1 - Notify me when movie tickets are available to book. A Google Apps Script runs every minute and hits a URL to check whether tickets are available yet.

2 - Book a class when it's available. I joined a gym where you have to book a class before going, and the Zumba class is so popular that it fills up within 30 seconds of opening; there are limited seats, so if you're late, it's gone. I wrote a script that checks availability, books the slot for me, and then adds it to my calendar. I extended this with a Google Form for when I want a class at a specific time that's currently full: if somebody cancels, a spot opens up, so the script keeps checking until the class start time, books it if available, and notifies me.

3 - Everyone on my team plays foosball, and after lunch there's a daily discussion about who plays first and with whom. I wrote a script that decides the matches and the players on each team. I deployed it on Google App Engine, where it's still running; just hitting the API sorts things out for us.

There are a few other automations I did using IFTTT.
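The commenter's version runs in Apps Script, but the core availability check is simple enough to convey in a Python sketch. The URL, the "sold out" marker text, and the page snippets below are all invented; a real booking site may need a JSON API call or a login session instead.

```python
# Poll a booking page and flag when slots open up.
import urllib.request

def slots_open(html, sold_out_marker="Fully booked"):
    return sold_out_marker not in html

def check(url):
    with urllib.request.urlopen(url) as resp:  # network; run this from cron
        return slots_open(resp.read().decode("utf-8", "replace"))

print(slots_open("<p>Zumba 10:00 - Fully booked</p>"))   # False
print(slots_open("<p>Zumba 10:00 - 3 spots left</p>"))   # True
```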


I wonder who else is refreshing the class registration page and filling it up within 30 minutes? Or do the special people get advance notice that registration is about to open?

I remember reading a wild tale of how someone made a program for a used car salesman that would check a particular car auction site to make sure they get a good deal (I wish I could remember the details, or better yet, find the URL)


You're looking for "Defcon 21 - How my Botnet Purchased Millions of Dollars in Cars and Defeated the Russian Hackers"

https://www.youtube.com/watch?v=sgz5dutPF8M


That should be it. Thanks, Deepthroat!


That is a great talk. Thanks for sharing.


@netsharc, the class booking opens every day at 10:00PM and everybody knows that. :)


Have the same problem for my own local Zumba class, will have to investigate whether we can automate it vs just setting an alarm for when they release booking (also at 10pm)


Automated investing and wealth reporting.

I use a few services to automatically invest money into a few different financial vehicles on a regular basis:

* Vanguard funds (bi-weekly transfers into various ETFs)

* Titan Invest (robo advisor bi weekly)

* IRA and 529 (monthly)

I use Personal Capital to connect all my accounts and view my cash flow and returns across them.

I have been doing this for the past 3 years and my returns have increased significantly.

I am 34 and I wish I had started doing this when I was younger.


Have you figured out a way to automate exporting of data from Personal Capital into somewhere else (spreadsheet, DB, etc)?


That’s a great question. I haven’t tried it but seems like you can access their private API. https://github.com/haochi/personalcapital


I've been building something that looks very similar to this for Mint in Ruby but am still implementing exception handling for 2FA. I'll have to check it out--thanks!


I use Tasker on Android along with a plugin to automate turning down my screen brightness in the evenings, and turning off screen rotation in a particular orientation for specific apps (e.g. I prefer to text in portrait mode since I can see what I'm replying to).

I've semi-automated setting up a new laptop using a private repo with my dotfiles and some bash scripts. It started with Linux and now works with macOS (but won't handle Linux well anymore, and I haven't gone back enough to make case-by-case fixes worth it).

I still have to generate a public/private key pair, and some things on Macs aren't perfect, but I keep a todo list of the non-automated things in the repo. I set up new Macs less often nowadays since the keyboards are such crap (fingers crossed this year brings better 13-inch keyboards), but every time I do, I add something.


Last week I built an automated expense tracking dashboard on Tableau using SendGrid and Google Sheets. I use a Gmail filter to forward my credit card transaction emails to an Inbound Parser setup on SendGrid, which makes a POST call to my home server with the content of the email. A regex then captures the details from the POST body and pushes them to a Google Sheet.
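The regex-capture step of a pipeline like this is easy to sketch. The email wording, field names, and sample alert below are all invented; you'd match the pattern to whatever your bank actually sends.

```python
# Pull transaction details out of a card-alert email body with a regex.
import re

PATTERN = re.compile(
    r"charged \$(?P<amount>[\d,]+\.\d{2}) at (?P<merchant>.+?) on "
    r"(?P<date>\d{4}-\d{2}-\d{2})"
)

def parse_alert(body):
    m = PATTERN.search(body)
    return m.groupdict() if m else None

row = parse_alert("Your card was charged $42.50 at Some Cafe on 2020-02-17.")
print(row)  # {'amount': '42.50', 'merchant': 'Some Cafe', 'date': '2020-02-17'}
```

Returning `None` on a non-match keeps unparseable emails from pushing garbage rows into the sheet.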

Next I use Tableau Public to read data from Google Sheets and publish a dashboard to Tableau Public Online. Tableau Public automatically refreshes data on the server side from Google Sheets every few hours.

The good part is that all of this can run automatically without any manual intervention whatsoever and I get a beautiful near real-time dashboard for my daily expenses that I can access from anywhere (dashboard is publicly accessible but hidden).


Are you comfortable sharing your personal expenses with so many public sources?


Tableau Public is the one that makes me uncomfortable. If this dashboard continues to work well and be useful I'll look into self-hosted alternatives.


Metabase may be of interest as an alternative


Yeah, I've used it in the past. Problem is my home server is a tiny RPi Zero and Metabase is too heavy for it. I'll look into running it on AWS/DO if I absolutely have to.


I’ve given in and bought an energy efficient homelab server for a few hundred off eBay. Intel makes the nuc, Lenovo has tiny thinkcentres, and HP has the elite desk among others.

The strange thing is how many odd lxc container projects I’ve stood up just to play with. Proxmox is sweet.


Could you use Google Data Studio for it instead?


Thanks for the suggestion. Looks promising, although it's not as polished as Tableau.


No, but has private dashboards for free and it is hooked into your Google account for easy import from Sheets and such.


This is an excellent idea for an expense tracker dashboard. I would like to create something similar for a business process automation course that I am currently working on. And I would use Google Data Studio instead of tableau as Google sheets has a data connector for data studio already.


How does error handling happen in this case? Do you get a message if some part of the pipeline fails?


SendGrid retries sending the message for 3 days in case it doesn't get a 200 from my server.

To capture errors on my server I use PaperTrail's email alerts.


Better Selenium alternatives would be Puppeteer, Cypress, TestCafe, etc.

For Windows automation, AutoIt is good. VBScript for MS Office automation. Python/shell scripts for mostly everything else.


Ansible!

I’ve been working on a “mail to cloud storage” project on the side for fun, but also as an excuse to learn Rust.

As part of this effort, I decided to try out Ansible. Man, it is one great tool!

For development, I have all of my server "groups" pointing to the same host, so my Ansible playbooks install and set up everything on a single machine. Makes testing much easier, tbh.
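For anyone who hasn't seen Ansible, a playbook is just declarative YAML. This is a generic sketch, not the commenter's setup; the group name, packages, and template are placeholders:

```yaml
# playbook.yml: a minimal sketch
- hosts: mailservers          # placeholder inventory group
  become: yes
  tasks:
    - name: Install dependencies
      apt:
        name: [postfix, dovecot-core]
        state: present
    - name: Deploy main config
      template:
        src: templates/main.cf.j2
        dest: /etc/postfix/main.cf
      notify: restart postfix
  handlers:
    - name: restart postfix
      service:
        name: postfix
        state: restarted
```

Because tasks are idempotent, re-running the playbook against the same host only changes what drifted, which is what makes the single-machine dev setup workable.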


I'm currently migrating a system at work to using Ansible and I'm absolutely in love with it.

My next project will be creating Ansible Playbooks for my workstation, so I can easily re-setup my Arch workstations on Desktop, Laptop, private and at work.


A bit more basic, but I use two free desktop PC automation tools which I now can't do without.

1. TyperTask (https://typertask.en.uptodown.com/windows) to automate common tasks like adding different email signatures for different addresses etc etc.

2. ClipX (http://bluemars.org/clipx/) for managing a huge clipboard history file. This saves me SO much time it's unbelievable.


It surprises me how many people don't use a clipboard manager. I use PasteBot on the Mac.


I've had Ditto [0] on my PC for almost a year now, and I could probably count on two hands the number of times it's come in handy for me. Can you explain what is so great about having a clipboard manager? I feel like I'm missing something, and if it would become truly useful to me by using it a different way I'd like to learn.

[0] https://ditto-cp.sourceforge.io/


It depends on your workflow. I sometimes find myself alt-tabbing multiple times to copy stuff over. Other times I accidentally overwrite my clipboard and lose some text I was writing; being able to recover it from the history is nice.

I use Pasteapp on macOS and FastKeys on Windows, which comes with other useful stuff, such as a shortcut for controlling volume by using the mouse scroll wheel on the right edge of the screen, macros, and a text-expander. The ones I use the most are:

ddate = 2020-02-18

ttime = 03:40:26

nnow = 2020-02-18 03:40:28

@@ = my email


I just commented on something similar: https://news.ycombinator.com/item?id=22345215

Disclaimer, it’s a project of mine, but if the websites you’re watching are compatible that will save you some hassle trying to extract data.

Use a lot of google sheets as well, it’s super helpful to make sense and use of the data you scrape.

You can check it out here [0], and feel free to ask if you need something.

[0]: https://monitoro.xyz


AutoHotkey on Windows is my go-to tool. I don't know much about it, but I can do the basics, like moving the mouse to an x,y coordinate and clicking, or sending a string of keystrokes to a form, etc.


AutoHotkey is such a beast. I really miss it now that I've started using Linux. I spent nearly 4 years writing scripts to automate all sorts of things on Windows, but recreating them on Linux might take much longer, since I would have to learn maybe 6 or 7 different programs just to get the same features. But alas, Windows was doing my head in, so I had to jump ship!


AutoHotkey is also a tool I've sorely missed since I moved to Linux many years ago. Sadly, there is no Linux equivalent, and no one seems keen on porting AHK to Linux.


It is much easier to do those actions with Python (https://automatetheboringstuff.com/2e/chapter20/)


I would say AutoHotkey is far superior to Python on Windows. The runtime comes with everything you need; no need to install or manage external dependencies. For example, I copy-pasted a 324-line snippet from the forums (https://autohotkey.com/board/topic/16400-ahk-regex-tester-v2...) into an .ahk file and immediately got a full GUI to test my regexes in, 13 years after the snippet was posted.

Additionally, you can bundle the interpreter with your scripts into a standalone .exe that runs on anybody's Windows machine without AHK installed. It's a one-click operation done in a program that was itself written in AutoHotkey (https://github.com/AutoHotkey/Ahk2Exe). I can't imagine anything as resilient as AHK when it comes to scripting automation programs on Windows.


Make is an incredibly powerful tool for managing any automated process, not just software compilation. It doesn't know how to do any of the individual steps itself, of course, but it lets you break a process down into individual small pieces that can be automated separately.
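A non-compilation sketch of the idea: targets that rebuild only when their inputs change. The file names and commands here are illustrative (and note that recipe lines must be indented with tabs):

```make
# Rebuild artifacts only when their inputs are newer.
report.pdf: report.md template.tex
	pandoc --template=template.tex -o $@ $<

# Pattern rule: one recipe covers every photo
thumbnails/%.jpg: photos/%.jpg
	convert $< -resize 200x200 $@

.PHONY: all
all: report.pdf
```

The dependency graph is the real payoff: touch one input and `make` redoes only the affected steps.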


'make all' the things with (GNU)Makefile!

From pandoc pre-processing with a great tool called 'pp', to generating my CV in different languages and paper sizes, to ensuring my repositories have the latest .gitignore files.

I feel it requires significant effort to set up, but once you're there, it eases your development flow and enhances reproducible outcomes, in my opinion.


I use a Makefile to front each project in a consistent way (whether it's my blog, a Go project, or a hardware flashing setup, `make start` will probably work..)


We use Jenkins to run .bat files that package our video game, upload it to Steam, and spool up GameLift matchmaking servers on AWS every night. Before this I did it by hand; now I wonder how I ever operated with that inefficiency!


How do you find Gamelift? I don't know anything at all about the industry but I'm curious how it all hangs together!


Jenkins? .bat? That's some masochism.


Can you elaborate on how that's masochistic? The game likely needs to be built on Windows.


In a somewhat similar vein to the OP, some of the details associated with publishing a new podcast can be rather manual and error-prone. This is especially annoying in the context of simple interview/discussion podcasts that don't really take all that much time to edit/post.

A fairly simple Python script can automate the fiddly things like creating XML for a new podcast and uploading things to wherever you're hosting. (I used to use a script to add intro and outro audio too but I now do that as part of the audio leveling process with Auphonic.)
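The XML-generation piece can be done with the standard library alone. A rough sketch; the field values are placeholders, and a real feed would also want `itunes:*` tags and a stable GUID scheme:

```python
# Generate the RSS <item> element for a new podcast episode.
import xml.etree.ElementTree as ET

def episode_item(title, url, length_bytes, pub_date):
    item = ET.Element("item")
    ET.SubElement(item, "title").text = title
    ET.SubElement(item, "pubDate").text = pub_date
    # The enclosure is what podcast clients actually download
    ET.SubElement(item, "enclosure", {
        "url": url, "length": str(length_bytes), "type": "audio/mpeg",
    })
    return ET.tostring(item, encoding="unicode")

xml = episode_item(
    "Episode 42", "https://example.com/ep42.mp3",
    12345678, "Mon, 17 Feb 2020 08:00:00 GMT",
)
print(xml)
```

Appending the new `<item>` into the channel of the existing feed file, then uploading, finishes the job.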


It looks like you could build a small SaaS around this...


I built a web site, https://visalogy.com, with the help of a keyboard/mouse automation tool.

It was simple to just automate and control my laptop: scrolling to certain web pages, XPathing the elements, and copy-pasting data into a CSV file.

Nowadays I use bash scripts to automate some of the scraping. Cron jobs in Travis CI are amazing; the site practically runs by itself thanks to them.


My kid dropped by while I was looking at your site and pointed out that your flag for Sri Lanka seems a bit wrong: the lion is missing. :)

Anyway, great work to build such a useful site!


Greatly simplified, accelerated and automated the creation of project directories.

I have always been annoyed by creating directories for new projects. Always the same procedure. Always the same commands. Always the same source files. So I wrote a small shell function which created C++ and Python projects for me. But after a few months I started to learn golang. And there I was again. Creating directories and files myself. But at this point the function could hardly be extended.

So I started to transform my shell function into a powerful and expandable go application. Learning go by starting a new project and solving a personal problem at the same time? Perfect!

Now a few months have passed and proji has become much more powerful and diverse. The templates it uses, which are called classes, are not bound to languages or anything like that, you can create a class for really everything. No matter how complex the class is, proji creates the project in seconds. Class configurations can be imported and exported, making it easy to share them with other users.

With the latest version of proji there is a new feature that takes proji to the next level. Proji can copy the structures of repositories on Github and Gitlab and import them as a class which you can use locally to create your own projects.

Additional features: Classes support shell scripts, template files to minimize the boilerplate code you have to write, ...

[ 1 ]: https://github.com/nikoksr/proji


I just began doing some partial automation of website interaction with the Tampermonkey extension for Google Chrome, injecting my own scripts on certain websites, often creating an additional control-panel UI for custom actions such as scraping or data input. I use Tampermonkey in such a way that the code it manages is only loader scripts that pull my actual scripts from GitHub and other sources, so I always get the latest published versions of my scripts.


I'd be interested to hear why you use Tampermonkey instead of Greasemonkey. I tried both but stuck with Greasemonkey as it is open source.


Not OP, but I decided to switch to Tampermonkey after GM version 4 broke almost all my scripts due to the GM_* API changes. I no longer remember the exact details, but it was poorly executed/communicated, and it was easier to switch to TM than to update all the scripts.

The dashboard interface in TM is a lot nicer than GM too


When Firefox stopped supporting legacy extensions, the Greasemonkey developers saw that as a chance to redesign their API (to use promises).

Tampermonkey used to be open source, but unfortunately isn't any more. I still use it though, as it has better UX (e.g. nicer dashboard, prefills @match when creating new scripts, better editor, etc).

An alternative to Tampermonkey that is open source and still uses the old style of user scripts is Violentmonkey (which also lacks in the UX department, if I recall correctly).


If you travel regularly, Tripit is a great service for organizing your flights and hotels onto your calendar (Google Calendar also does this with Gmail, but it doesn't work reliably, especially if you book some of your travel via your personal email account). You can set up a free Tripit account and either forward your receipts to it or let it look through your email for them automatically.

Add the calendar of your Tripit trips to your work Google Calendar. These show up as a second calendar but are not visible to your co-workers. Then create a free Zapier account and use a Zap to automatically copy all events from your Tripit calendar to your work calendar, which your colleagues can see.

If you want to be shown as available while traveling, you need to change the multi-day event that Tripit creates from busy to available. And if you change your flights, you need to manually delete the old ones. But people will no longer book you into meetings when you're on a plane.

Zap: https://zapier.com/apps/google-calendar/integrations/google-...


* Home Automation, like opening shutters at sunrise and closing them at sunset or turning on LED lights at night when movement is detected and turning them off after a certain amount of time since the last movement. Z-wave relay modules control most devices, and the command-center is Raspberry Pi + Aeon Z-Stick (aeotec.com/z-wave-usb-stick/), which allows for a vast amount of flexibility compared to standard Z-Wave controllers.

* an extensive collection of custom made /usr/local/bin scripts which automate things such as video recording and encoding for youtube (recordmydesktop + avconv), backups, and certain time-taking and repetitive operations on websites (like downloading invoices; through selenium-webdriver - github.com/SeleniumHQ/selenium/wiki/Ruby-Bindings)

* zaps on zapier.com for pushing data between multiple cloud services that would otherwise have to be moved by hand (i.e., Gmail email with matching subject to a task in kanbantool.com)

* autokey on Ubuntu, which allows me to type phrases like "!thx" and have them automatically expanded to i.e., "Thank you for your email!"


I've written a couple of scripts related to storage space in a shared drive folder and database storage. If free storage goes below a certain threshold, a notification is sent to the group email.

Using the google rss notification (e-mail) for keywords which I am interested in (e.g my name, stock ticker and company name etc).

I use www.hnreplies.com for alerts when someone responds to my HN comments.


The script I get the most personal use of is for one of the most mundane things you can imagine—automating the repetitive process of keeping everything on my computer up to date. By "everything", I mean:

• my OS packages

• my OS-overlay packages (e.g. Homebrew, Nix)

• the versions of any standalone SDKs I use, that aren't maintained as OS packages (e.g. Rust from rustup, the Google Cloud SDK)

• any globally-installed packages for the package ecosystem of each language I install such stuff from (not libraries, but standalone utilities, e.g. youtube-dl in pip, or Foreman in Rubygems)

Here's the script: https://gist.github.com/tsutsu/270e09c68690ec85c51dbd054e22b...

I think automating this might help a lot of people, because people tend to forget that they even can update a lot of this stuff, or they don't know there's a command to get certain things to happen at all. (E.g. did you know that `brew cask upgrade` will switch out your application-bundles installed via casks with newer versions?)

I never polished it up, though, because it's still frail in some ways. (I still don't exactly trust the way it does pip package updates, for example; sometimes pip decides to upgrade packages into an inconsistent state where a package's deps get too new for it to use.)

But the idea is that you run this interactively, first thing in the morning when you sit down, in its own fresh terminal window, right after maybe letting your computer restart to update to a new OS version. It's like putting your workspace in order in the morning. It's doing the grunt work, but you're still making the decisions and fixing the exceptional conditions.

-----

I feel like this could be polished into a universal tool that'd literally update everything for everybody with one well-known command.

But better yet, the problem could be reversed: create a standard for registering installed package ecosystems and their respective update/clean/etc. commands. Each ecosystem would register itself by placing a file in a well-known directory (like pkg-config or bash-completion do), so that the command could outsource its smarts to the ecosystem creators themselves.
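The registration idea could be sketched roughly like this: each ecosystem drops a file containing its update command into a well-known directory, and one runner executes them all. The directory name and file format here are assumptions, not an existing standard:

```python
import pathlib
import subprocess

# Hypothetical registry directory where each ecosystem places a file whose
# contents are the shell command that updates it (e.g. "brew upgrade").
REGISTRY = pathlib.Path("/usr/local/etc/update.d")

def run_registered_updates(registry: pathlib.Path = REGISTRY) -> None:
    """Run every registered ecosystem's update command, in sorted order."""
    for entry in sorted(registry.glob("*")):
        cmd = entry.read_text().strip()
        print(f"==> {entry.name}: {cmd}")
        subprocess.run(cmd, shell=True, check=False)
```

Interactive confirmation, per-ecosystem clean commands, and error reporting would layer on top of the same loop.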


I work at a local college and was recently tasked with collecting and aggregating instructor workload requests. We are in the planning stages of which courses each instructor will teach in the fall 2020 semester and my department wanted an easy way for instructors to submit their requests as well as be able to see what courses other teachers have requested and which are available.

Done manually, I'd have to receive a spreadsheet or email of choices from each instructor and then add it to an Excel spreadsheet of some sort. That wouldn't have been live at all.

So I came up with a system.

I made an excel proposal template each instructor fills out with their name, school ID and the courses they want to teach as well as the hours they'd have available for each course. They then send it to my email which has a rule on it to send excel attachments to a Gmail account I own. This is because I can't access the API of outlook so I need to get outlook to send this stuff to my Gmail account. My Gmail account has a watch on it so whenever an email comes in to it, I get a push request on my server. My server reads the contents of the excel file and sends it to a database.

From the user side, if a teacher wants to access the full list of what courses each instructor has chosen, I've made a website in react with an excel like row and column layout where the columns are each teacher and the rows are the courses. Where the two meet we have the hours the teacher wants to teach for that course. When this page is visited, it pulls from my database and populates the whole thing with the latest data. The site can also export all the data to an Excel file to be implemented once all the teachers have made their choices.
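The pivot from parsed proposals into that course-by-teacher grid can be sketched as a small pure function. The row shape (teacher, course, hours) is an assumption about what the Excel parsing step produces, and the names below are made up:

```python
from collections import defaultdict

def build_grid(rows):
    """Pivot (teacher, course, hours) tuples into {course: {teacher: hours}},
    i.e. rows are courses and columns are teachers, as on the React page."""
    grid = defaultdict(dict)
    for teacher, course, hours in rows:
        grid[course][teacher] = hours
    return dict(grid)

rows = [("Smith", "MATH101", 3), ("Jones", "MATH101", 3), ("Smith", "CS200", 4)]
grid = build_grid(rows)
# grid["MATH101"] == {"Smith": 3, "Jones": 3}
```

The database query feeding the page would produce the same tuples; the export step just writes the grid back out to Excel.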

I learned a lot with this project: React, setting up HTTPS, basic authentication, Postgres, running a server and routing, and a whole lot of other stuff. Super valuable to me in my learning, even if I could have spent less time by doing it manually!


I've started playing with some things with node-red + open hab to control my lights, but more in line with what you're talking about I've also played with Huginn for automating some small daily actions and webscraping. So far it's been going well but I'm not doing anything terribly difficult yet.


Hey, I recommend RPA (Robotic Process Automation). There are some open source options such as OpenRPA, and community editions of commercial tools like UiPath. I actually work as an RPA developer, and you can automate whatever you want: for example, read an email, capture that information in an Excel file, and fill web forms based on the Excel data.


Be careful, my antivirus blocked the site when I visited the "Privacy" page from the top menu.

https://www.virusradar.com/en/HTML_ScrInject.B/description


I run a robotics themed weekly newsletter as a side project.

At one point I decided that it would be nice to add header images to every issue. I decided to do it through a quick and dirty python script that adds the text for me.

Another thing I've just recently automated is a Python script that calls the Mailchimp API to show me the most popular link together with unique opens. I'm currently planning to embed this info in every issue.
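Picking the top link out of a click report boils down to one `max()` call once the data is fetched. The real script would hit Mailchimp's click-details report endpoint; the report shape below (a list of dicts with `url` and `unique_clicks`) is an assumption for illustration:

```python
def most_popular_link(click_details):
    """Return the entry with the highest unique click count."""
    return max(click_details, key=lambda d: d["unique_clicks"])

# Made-up sample in the assumed shape of a fetched click report:
report = [
    {"url": "https://example.com/a", "unique_clicks": 42},
    {"url": "https://example.com/b", "unique_clicks": 108},
]
top = most_popular_link(report)
# top["url"] == "https://example.com/b"
```

Unique opens come from a separate campaign-report call and could be printed alongside.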

The next step I'd like to automate is campaign creation. With the API I don't suppose this will be a huge deal; I just need to put some time into it. Once I have it, I'd like to create a commit hook that does all of these things automatically after pushing changes to the repo.


In the engineering office (non-software) I work in, I've done wonders with classic scripting for me and my colleagues. Some Excel VBA, lots of Powershell, AutoLisp, etc. Doesn't need to be shiny to be effective.


What about headless Chrome and some cron jobs to run tasks for you? It'd be similar to Selenium, but on a headless browser. There are also some scraping services that let you get data on a set interval with a PhantomJS scraper. I have one that uses Apify and then notifies a Zapier hook on success, and Zapier posts to my Google Sheet. My web app then calls a GCP instance REST API endpoint, which internally calls the Google spreadsheet through an exposed API, cleans the data, and sends it back to the web app.


I use a collection of tools from this repo to automate mundane developer activities such as posting release notes slack messages containing linked pivotal or Jira story titles: https://github.com/sufyanadam/pivotoolz

There's a bunch of stuff in there that saves you seconds each time you have to work with a user story or cut a branch or merge etc... Saved me a bunch of time over the years!


This one is a shameless plug (I created the app). While it's still in the early stages of development, the core features are working. I use https://github.com/rmpr/atbswp to automate demos: whenever I have to present a demo in front of people, I just record it beforehand and replay it. But the app can have many use cases beside this; you're only limited by your imagination.


Airbnb Experiences will not add an API, so if you run any kind of tours, you have to go to the app to add/remove/change dates/prices/starting points.

Selenium takes care of it now.


Every time I use Selenium, even for legitimate use cases, I feel dirty deep down. My soul writhes and I want to do something else in life other than writing Selenium automation - perhaps chop some wood.


I've used IFTTT a bit, along with Zapier. Biggest issue is trying to do multiple actions when you need to automate tasks for multiple rows of data at a time.


Look at Integromat. I used to use Zapier but have moved mostly over to this instead.


Do you have some examples of what you use them for?


Usually, my download folder is quite messy. I wrote a script to re-organize any such folder using a simple CLI. It also has a config that can be updated to account for different file types or categories.

https://github.com/functioncall/neat
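The core of such an organizer is a mapping from extensions to category folders. This is a minimal sketch of the idea, not the linked project's actual code, and the category table is a made-up example:

```python
import pathlib
import shutil

# Assumed extension-to-category mapping; a real config would extend this.
CATEGORIES = {
    ".jpg": "Images", ".png": "Images",
    ".pdf": "Documents", ".docx": "Documents",
    ".zip": "Archives",
}

def organize(folder: pathlib.Path) -> None:
    """Move each file in `folder` into a subfolder named for its category."""
    for f in list(folder.iterdir()):  # snapshot before creating subfolders
        if f.is_file():
            category = CATEGORIES.get(f.suffix.lower(), "Other")
            dest = folder / category
            dest.mkdir(exist_ok=True)
            shutil.move(str(f), str(dest / f.name))
```

Pointing this at a Downloads folder (or wiring it to a CLI argument) gives the one-shot cleanup described above.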


Slightly mundane, but I'm an academic and my CV needs updating a lot. I finally got LaTeX to work on Netlify, and now all I have to do is update a YAML file and push a commit and my CV is updated.*

* At least on one of the three locations my cv lives... the next step is to point all the other domains at my new CI/CD home...


I manage two calendars for work and often have to make sure my calendar events exist on both so I don't get double booked. I wrote https://syncmycalendars.com/ to automate and simplify that process for myself and others.


I started with selenium and still use it when I have to but try to just make http requests and parse responses directly whenever possible. Probably my favorite travel-related script was one that watched for an opening at a hard to book hotel and snatched it up when it became available.


And to parse the responses (assuming they're HTML page sources, not JSON/XML) - are you using scrapy or something else?


I pretty much always use lxml. It has an html module, but I typically just use etree.parse() with an HTML parser. It's very fast and has excellent XPath query support.
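For instance, parsing an HTML response and extracting values with XPath looks roughly like this (the HTML below is a made-up stand-in for a real page source):

```python
from io import StringIO
from lxml import etree

# Pretend this string is an HTTP response body.
html = ("<html><body>"
        "<div class='price'>$199</div>"
        "<div class='price'>$249</div>"
        "</body></html>")

# etree.parse() with an HTMLParser tolerates real-world, non-XML markup.
tree = etree.parse(StringIO(html), etree.HTMLParser())
prices = tree.xpath("//div[@class='price']/text()")
# prices == ['$199', '$249']
```

The same XPath queries work on pages fetched with requests/urllib, which is what makes this combination a lightweight Selenium alternative.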


I use telegram, ifttt, Google sheets to calculate expenses.

At my last job, I used Zapier & Zoho CRM to automate creating leads from email requests which saved me hours of time.


Was at a startup that was super low-overhead (and as dysfunctional as a 4 person team could be), we used Zapier to automatically create Gmail drafts as replies to new lead enquiries on the website and follow-up responses as needed. We could then spice up the response if needed, or just bulk run through and hit "Send".

It appeared personal to all email systems and spam filters, but was actually semi-automatic marketing automation.


Maybe check out Huginn


Unfucking the many, many ways that banks screw up exporting financial transactions into well-formed OFX files that MoneyDance can consume.


Have you tried https://pipedream.com? You can run Node.js (with any npm package) on webhook/http requests, emails and schedules for free. It also manages auth (including oauth token generation and refresh) for popular apps like Google Sheets, Discord, Slack and Airtable -- just use the auths object to reference tokens and keys in code.


Looks very promising! However, the pricing is opaque. What will this cost me if my business becomes successful and hits their free-tier rate limits?

https://docs.pipedream.com/pricing/


Hi, I'm a co-founder and engineer at Pipedream.

Paid tiers are coming soon, and if you hit our daily compute limit before that, we're happy to raise your limit to make sure your workflows run without issue.

Let me know if you have any other questions! Feel free to reach out to me directly at dylan [at] pipedream [dot] com.


I auto-delete certain messages from Gmail after 30 days. I use filters to tag messages as "autodelete" and a Google script that runs periodically to delete them. It shouldn't be so hard to do something so simple.
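The search such a cleanup script runs can be built from Gmail's standard search syntax (label: and before: with a YYYY/MM/DD date). A sketch of the query construction, with the label name taken from the comment above:

```python
import datetime

def autodelete_query(today: datetime.date, days: int = 30) -> str:
    """Gmail search matching tagged messages older than `days` days."""
    cutoff = today - datetime.timedelta(days=days)
    return f"label:autodelete before:{cutoff:%Y/%m/%d}"

print(autodelete_query(datetime.date(2020, 2, 17)))
# -> label:autodelete before:2020/01/18
```

The periodic script would pass this query to Gmail's search (via Apps Script or the Gmail API) and move each match to trash.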


I'm fairly certain this is how the "trash" in gmail works by default.

Anything that gets sent to there stays there for 30 days then is automatically deleted.


I mean, for my own reminders and messages that I forget to delete.



