Argdown, like Markdown for argument mapping (argdown.org)
191 points by urlwolf 9 months ago | 47 comments



From time to time, I see a tool that presents a discussion as a tree of arguments for and against it.

Unless it is a school essay, arguments don't work that way.

It is usually hard to pin down what a node (an atomic fact) is and what a link is (links usually go beyond "support" and "counter"). Very often, this structure is not a tree. Maybe a DAG with weighted edges, but if it were that straightforward, knowledge graphs would simply work.

Instead of rehashing the same tree approach, we should adopt something closer to an LLM-embedding approach: for a given statement, we would have "relevant statements" with an additional dimension indicating whether each one supports, counters, expands on, or exemplifies it, and so on. In that case, it wouldn't even be a DAG.


Concur. I discovered UMAP when looking for a way to reduce the dimensionality of embeddings and visualize them, and it works on non-embedded data too. Interesting idea to think about applying it to arguments in a debate, especially in conjunction with the work on using LLMs to infer knowledge graphs; a rough sketch follows after the quoted assumptions below.

https://umap-learn.readthedocs.io/en/latest/basic_usage.html

> Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data:

1. The data is uniformly distributed on a Riemannian manifold;

2. The Riemannian metric is locally constant (or can be approximated as such);

3. The manifold is locally connected.
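
Something like this is what I have in mind, as a rough sketch (assuming the sentence-transformers and umap-learn packages; the model name and the toy statements are placeholders I made up):

    # Rough sketch only: embed a handful of statements and project them to 2D.
    # Assumes sentence-transformers and umap-learn are installed; the model
    # name and statements are placeholders.
    from sentence_transformers import SentenceTransformer
    import umap
    import matplotlib.pyplot as plt

    statements = [
        "We should adopt argument maps for design reviews.",
        "Argument maps take too long to maintain.",
        "Plain-text formats are easy to diff and review.",
        "Diagrams communicate structure faster than prose.",
    ]

    model = SentenceTransformer("all-MiniLM-L6-v2")  # placeholder model
    vectors = model.encode(statements)               # one embedding per statement

    # With so few points UMAP is overkill; a real map would have many statements.
    reducer = umap.UMAP(n_components=2, n_neighbors=2, init="random", random_state=42)
    coords = reducer.fit_transform(vectors)

    plt.scatter(coords[:, 0], coords[:, 1])
    for (x, y), text in zip(coords, statements):
        plt.annotate(text, (x, y), fontsize=8)
    plt.show()

Adding the support/counter/example relations on top of the spatial layout is the part that goes beyond what UMAP gives you.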


I've experimented with constructing arguments as actual DAGs before here: http://concludia.org/ If you are strict about logical force and premises leading to lemmas and conclusions, I think it works pretty well.

There's a Lamport paper lying around somewhere that also talks about representing arguments and proofs as DAGs.
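
For a sense of how little machinery this needs, here's a rough sketch with networkx (the premises and conclusion are invented for illustration, not taken from concludia):

    # Rough sketch: premises and lemmas as nodes, "supports" edges pointing
    # toward the conclusion. Node names are made up.
    import networkx as nx

    g = nx.DiGraph()
    g.add_edges_from([
        ("premise: users forget passwords", "lemma: password-only auth fails"),
        ("premise: SSO is widely supported", "lemma: SSO is feasible"),
        ("lemma: password-only auth fails", "conclusion: adopt SSO"),
        ("lemma: SSO is feasible", "conclusion: adopt SSO"),
    ])

    # A valid argument structure must be acyclic; a topological order gives a
    # reading order from premises to conclusion.
    assert nx.is_directed_acyclic_graph(g)
    for node in nx.topological_sort(g):
        print(node)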


I think it would be fun to include a framework where common arguments could be supported by the logical fallacies they actually rely on, and people who support those arguments could be asked to replace them with non-fallacious ones. That would be a more nuanced way of debating, and of getting around the idea that all criticism of popular arguments is somehow a strawman.


Love this line of reasoning.


Ha I thought it was a tool for managing complex args for command-line tools


Check out http://docopt.org then.


That seems to be an abandoned project


Yes, according to the archived Rust implementation[1], which in turn refers you to either clap[2] or structopt[3]. Other implementations do not mention this, but those I looked at had not been touched for years. Either very stable or unmaintained; unfortunately the latter, according to the Rust crate. The .NET implementation had a very small version bump in its dependencies, but the rest of the project does not seem to have been touched in the past 2 years.

[1] https://github.com/docopt/docopt.rs

[2] https://docs.rs/clap/latest/clap/

[3] https://docs.rs/structopt/latest/structopt/


FWIW, there seems to be a less-abandoned fork here: https://github.com/jazzband/docopt-ng

I'm sticking with argparse though


structopt became part of clap proper (as `clap_derive`) when clap v3 was released.


Whoah, that's a neat tool! I definitely need to implement this with my scripts.

Thanks!


It's abandoned, and tbh it's more trouble than it's worth. It's far easier and more reliable to specify the CLI and have it generate the help text than the other way around. All major languages have good CLI parsing libraries (some in the stdlib itself, like Go and Python).


Yeah... it seems like it would be fragile and require a lot of iterating to converge on a help doc that is both "pretty" and correctly parsed by docopt.


It's a neat idea (I maintained an unofficial C port for a while), but terrible in practice.



Yes, should perhaps change the title to "argumentation".


Yes, I did as well. I don’t really care for the way DocC documents arguments and was hoping for something innovative.


there's also usage: https://usage.jdx.dev/spec/


For an example of how these may be used, Kialo [1] uses a form of argument maps for structured debate. There's also an Obsidian plugin for argument maps [2], tho it's a bit out of date.

[1]: https://www.kialo.com

[2]: https://github.com/amdecker/obsidian-argdown-plugin


It was useful during WebGPU development [1] given that some topics were very nuanced in debate.

[1] https://github.com/kvark/webgpu-debate


I am the author of some recent literature on Competitive Debate datasets for the NLP community:

1. https://aclanthology.org/2020.argmining-1.1/

2. https://paperswithcode.com/paper/opendebateevidence-a-massiv... (under review at a main conference, but we had an acceptance to ARGMIN 2024 at ACL 2024 which we declined)

I'm very interested in talking with the authors of this work about how structured argumentation notations like this could serve the American competitive debate community. American competitive debaters have their own informal markdown-like structures and fuzzy formatting syntax, alongside so much jargon, that I really want to see how it could map to something like Argdown.


Is there some reason this is all focused around yes/no questions or a single statement? Is it like a standard format that all topics can be reworded to?

I'm wondering if this could be used for something like comparing alternatives to solve a problem. In that case I'd expect the root to be a description of the problem, then the alternatives, then pros and cons for those alternatives.

I'd never heard of this at all before despite searching, so I imagine there's a lot I don't know.


You can organize any number of alternatives as binary choices.


Sure, but that forces you to group options in ways that don't always make sense discursively and pushes some alternatives down the tree, and we all know that arguments near the root get the most attention.


I believe that, but doesn't that make it much harder to read? Or are the organizational benefits worth it?

Edit: and I also don't see a way to do multiple arguments like that in a single document, unless you mean as a soft linkage.


And in genuinely complex rhetoric, the discrete choices are typically phrased to benefit whoever frames the argument. A false dilemma is still false even when it's embedded in a larger tree.


I tried writing diary entries in argdown for a while. It was fun having this big visual map of all my thoughts on everything, especially when I reused premises a lot. It wasn't particularly useful though.


You noticed you were reusing premises, potentially reinforcing your choices. You also took time to analyse and write down your thoughts. I think it was useful in some ways, even if you didn't use the output afterwards.


Wow, it’s crazy seeing your dreams randomly pop up on hacker news. Guess I’ll be switching to this syntax!

OP, if you’re the author: any plans for next steps? I’ll be folding this into my upcoming book and website (and almost certainly extending it a bit), so I’d be curious to hear if there’s other large scale projects underway.

Beautiful docs btw, this style should be a lesson for all of us. I guess you’d expect someone interested in arguments to write clearly lol


Not OP, but OP is not the author.


I might know nothing about debates, but my impression is that you always make the diagram that does its job best. Argdown seems to constrain everything to its framework and force the style of the output.

Also, Mermaid (https://mermaid.js.org/) exists.


These argdown ligatures are nifty. I also really like Unicode logic symbols because they make annotation and expressions readable.

https://en.wikipedia.org/wiki/List_of_logic_symbols

These can be especially helpful for Causal Analysis based on System Theory (CAST), which is similar to root cause analysis but uses more logical dependencies.


The next step could be to annotate statements with a confidence (1-99%). Currently I see no way to weigh the arguments against each other.


Huh, I've never needed something like this. What kind of industries/hobbies/interests use this type of tool regularly?


I'm not actively using it, but I'm interested to play around with this for a bit.

At work, we have architectural decision logs, for example how to structure our authentication, how to deploy services and such. Some of these decisions are fairly obvious and straightforward, but some decisions come from days and weeks of discussions between different teams and people over the past few years.

This looks like an interesting way to provide an overview of what has been considered and what the take of the group is on these different points. This would allow new people in the teams to challenge things more substantially instead of going through four steps we've been through several times already.


Politics, philosophy, and science would be my go-tos. Basically anywhere your audience doesn't merely consist of subject matter experts, tho.


I would love to use this as a teaching aid... to teach the practice and value of argument.


A similar approach is required in some safety-critical related programming contexts. Search for "safety assurance case".


I don't think you ever need something like this. It's just another tool in your toolbox. It's like graphs and diagrams: at some point some information will be easier to show this way than by writing it out in detail.

I've used it a few times. For example when justifying a project of migrating a large legacy app to docker deployment.

It's also useful for getting a specific kind of result from LLMs - GPT knows how to summarise discussions/conflicts in ArgDown format.


There was a website called (I believe) Arguman way back that did this kind of thing. It allowed everyone to add arguments, rebuttals, that sort of thing, and to up/downvote them. Last time I looked, it was down (still on the Wayback Machine, but it was a webapp, so it never really loads anything).

Edit: it's... "up", but suddenly Turkish and broken: https://arguman.org


https://kialo.com also works this way


Mapping different viewpoints to combat disinformation or create better policies is key in this age. I'd like to see better integration with cognitive psychology and its overview of biases, also in relation to personal insecurities and trauma, and with agogy and education, like The Evidence Toolkit.

A short overview of the Argumentation theory and tooling field:

“Within computer science, the ArgMAS workshop series (Argumentation in Multi-Agent Systems), the CMNA workshop series,[34] and the COMMA Conference, are regular annual events attracting participants from every continent. The journal Argument & Computation is dedicated to exploring the intersection between argumentation and computer science. ArgMining is a workshop series dedicated specifically to the related argument mining task. Data from the collaborative structured online argumentation platform Kialo has been used to train and to evaluate natural language processing AI systems such as, most commonly, BERT and its variants.” https://en.m.wikipedia.org/wiki/Argumentation_theory

https://en.m.wikipedia.org/wiki/Argument_technology

https://en.m.wikipedia.org/wiki/Argumentation_framework

Sibling tooling (with help from Sonnet and Wikipedia):

1. Argument Interchange Format (AIF): This is a standardized format for representing argumentative structures in a machine-readable way. It's used in various academic tools and research projects. 2006

2. Rationale: A software tool developed for academic use, particularly in teaching critical thinking and argument analysis. It offers more structured mapping capabilities than Kialo. 2004

3. Araucaria: An open-source argument mapping software developed by researchers at the University of Dundee. It's designed for analyzing and diagramming arguments. 2001

4. ArgDown: A markdown-like language for creating argument maps, which can be useful for programmatic approaches to argument analysis. 2016

5. OVA (Online Visualization of Argument): A web-based tool for argument analysis and visualization, developed by researchers at the University of Dundee. 2010

6. Argunet: An open-source argument mapping software that allows for collaborative work and integrates with a database of arguments. 2007

7. AGORA-net: A web-based platform for argument reconstruction and evaluation, used in academic settings. 2013

8. Kialo 2017 https://en.m.wikipedia.org/wiki/Kialo

9. ADA tools. Gregor Betz (2022). "Natural-Language Multi-Agent Simulations of Argumentative Opinion Dynamics". Journal of Artificial Societies and Social Simulation. 25: 2. arXiv:2104.06737. doi:10.18564/jasss.4725. S2CID 233231231 https://www.gregorbetz.de/

10. Argument Analytics http://analytics.arg.tech/

11. IBM's Grand Challenge, Project Debater (2011-). Published in Nature in March 2021. https://en.m.wikipedia.org/wiki/Project_Debater

12. German research funder, DFG's nationwide research programme on Robust Argumentation Machines, RATIO. 2019 https://spp-ratio.de/

13. UK nationwide deployment of The Evidence Toolkit by the BBC. 2019 https://www.bbc.co.uk/teach/young-reporter/articles/z6v3hcw


Oh, arguments as in disagreeing with people, not arguments passed to a command-line utility.


Seems like you could train an LLM to read an op-ed and output the arguments in this form. Could be fun.


Yes, it works really well in my experience. Although I mostly use it for technical information and justifying some decisions. But you can totally ask modern LLMs to "summarise this in ArgDown format" and they'll do the right thing.


Just call an API and supply the JSON schema.

Though I doubt that arguments in this restricted form are more expressive than the original representation.
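
For example, something along these lines (a rough sketch with the OpenAI Python SDK; the model name, prompt, and schema fields are illustrative assumptions, not anything specific to Argdown):

    # Rough sketch: extract claims and support/attack relations from an op-ed
    # via a structured-output call. Model name, prompt, and schema are made up.
    import json
    from openai import OpenAI

    op_ed_text = "Paste the op-ed text here."

    schema = {
        "type": "object",
        "properties": {
            "claims": {"type": "array", "items": {"type": "string"}},
            "relations": {
                "type": "array",
                "items": {
                    "type": "object",
                    "properties": {
                        "from": {"type": "integer"},
                        "to": {"type": "integer"},
                        "kind": {"type": "string", "enum": ["supports", "attacks"]},
                    },
                    "required": ["from", "to", "kind"],
                    "additionalProperties": False,
                },
            },
        },
        "required": ["claims", "relations"],
        "additionalProperties": False,
    }

    client = OpenAI()
    resp = client.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model name
        messages=[
            {"role": "system", "content": "Extract the argument structure of the text."},
            {"role": "user", "content": op_ed_text},
        ],
        response_format={
            "type": "json_schema",
            "json_schema": {"name": "argument_map", "schema": schema, "strict": True},
        },
    )
    argument_map = json.loads(resp.choices[0].message.content)
    print(argument_map)

Rendering the resulting claims and relations as Argdown text is then just string formatting over the parsed JSON.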



