From time to time, I see a tool that presents a discussion as a tree of arguments for and against a claim.
Unless it is a school essay, arguments don't go that way.
It is usually harder than it looks to pin down what a node (an atomic claim) is and what a link is (relations usually go beyond "support" and "counter"). Very often the structure is not a tree. Maybe a DAG with weighted edges, but if it were that straightforward, knowledge graphs would simply work.
Instead of rehashing the same tree approach, we should adopt something closer to an LLM-embedding approach: for a given statement, we would have "relevant statements" with an additional dimension indicating whether each one supports, counters, expands on, or gives an example of it, and so on. In that case it wouldn't even be a DAG.
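A rough sketch of what I have in mind, assuming sentence-transformers for the embeddings; the relation labelling is left as a stub (classify_relation is a made-up placeholder you might back with an NLI model or an LLM prompt):

```python
from sentence_transformers import SentenceTransformer
import numpy as np

model = SentenceTransformer("all-MiniLM-L6-v2")  # assumed model choice

# The edge label set the stub below is meant to choose from.
RELATIONS = ["supports", "counters", "expands", "example-of", "unrelated"]

def classify_relation(claim: str, other: str) -> str:
    """Placeholder: in practice an NLI model or LLM call would pick one of RELATIONS."""
    return "unrelated"

def relevant_statements(claim: str, corpus: list[str], top_k: int = 5):
    """Return the top_k most similar statements, each tagged with a relation type."""
    emb = model.encode([claim] + corpus, normalize_embeddings=True)
    sims = emb[1:] @ emb[0]                      # cosine similarity to the claim
    ranked = np.argsort(-sims)[:top_k]
    return [(corpus[i], float(sims[i]), classify_relation(claim, corpus[i]))
            for i in ranked]
```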
Concur. I discovered UMAP when looking for a way to reduce the dimensionality of embeddings and visualize them, and it works on non-embedded data too. Interesting to think about applying it to arguments in a debate... especially in conjunction with the work on using LLMs to infer knowledge graphs.
> Uniform Manifold Approximation and Projection (UMAP) is a dimension reduction technique that can be used for visualisation similarly to t-SNE, but also for general non-linear dimension reduction. The algorithm is founded on three assumptions about the data:
1. The data is uniformly distributed on a Riemannian manifold;
2. The Riemannian metric is locally constant (or can be approximated as such);
3. The manifold is locally connected.
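For anyone curious, a minimal sketch of running UMAP over a batch of statement embeddings with the umap-learn package (the input here is random stand-in data, not real embeddings):

```python
import numpy as np
import umap  # pip install umap-learn

# Stand-in for real sentence embeddings, e.g. 200 statements x 384 dimensions.
embeddings = np.random.rand(200, 384)

reducer = umap.UMAP(n_neighbors=15, min_dist=0.1, n_components=2, metric="cosine")
coords = reducer.fit_transform(embeddings)  # shape (200, 2), ready for a scatter plot
```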
I've experimented with constructing arguments as actual DAGs before here: http://concludia.org/ If you are strict about logical force and premises leading to lemmas and conclusions, I think it works pretty well.
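A tiny sketch of that premises-lead-to-lemmas-lead-to-conclusions structure as an explicit DAG, using networkx (the node labels are invented for illustration):

```python
import networkx as nx

g = nx.DiGraph()
g.add_edge("P1: every public endpoint needs auth", "L1: the new API needs auth")
g.add_edge("P2: the new API is a public endpoint", "L1: the new API needs auth")
g.add_edge("L1: the new API needs auth", "C: add auth middleware before launch")

assert nx.is_directed_acyclic_graph(g)  # no circular reasoning allowed
print(list(nx.topological_sort(g)))     # premises come out before the conclusion
```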
There's a Lamport paper lying around somewhere that also talks about representing arguments and proofs as DAGs.
I think it would be fun to include a framework where you could use logical fallacies to support the common arguments that rely on them, and then ask people who back those arguments to replace the fallacies with non-fallacious support. That would be a more nuanced way of debating, and it would get around the objection that all criticism of popular arguments is somehow a strawman.
Yes, according to the archived Rust implementation[1], which in turn refers you to either clap[2] or structopt[3]. Other implementations don't mention this, but the ones I looked at hadn't been touched for years. Either very stable or unmaintained. Unfortunately it's the latter, according to the Rust crate. The .NET implementation had a very small version bump in its dependencies, but the rest of the project doesn't seem to have been touched in the past two years.
It's abandoned, and tbh it's more trouble than it's worth. It's far easier and more reliable to specify the CLI and have it generate the help text than the other way around. All major languages have good CLI parsing libraries (some in the stdlib itself, like Go and Python).
Yeah..... it seems like it would be fragile and require lots of iterating to converge on a help doc that is both "pretty" and correctly parsed by docopt
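For comparison, the "specify the CLI and let the library generate the help" direction with plain argparse from the Python stdlib (the script name and flags here are just an example):

```python
import argparse

parser = argparse.ArgumentParser(description="Ship a release.")
parser.add_argument("version", help="version tag to release")
parser.add_argument("--dry-run", action="store_true",
                    help="print the steps without executing them")
args = parser.parse_args()

# `python ship.py --help` prints usage text generated from the declarations above.
print(args.version, args.dry_run)
```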
For an example of how these may be used, Kialo [1] uses a form of argument maps for structured debate. There's also an Obsidian plugin for argument maps [2], tho it's a bit out of date.
I'm very interested in talking with the authors of this work about how structured argumentation notations like this could serve the American competitive debate community. American competitive debaters have their own informal, markdown-like structures and fuzzy formatting syntax, alongside so much jargon, that I'd really like to see how it could map to something like Argdown.
Is there some reason this is all focused on yes/no questions or a single statement? Is it a standard format that all topics can be reworded into?
I'm wondering if this could be used for something like comparing alternatives to solve a problem. In that case I'd expect the root to be a description of the problem, then the alternatives, then pros and cons for those alternatives.
I'd never heard of this at all before despite searching, so I imagine there's a lot I don't know.
Sure, but that forces you to group options in ways that don't always make sense discursively and pushes some alternatives down the tree, and we all know that arguments near the root get the most attention.
And in genuinely complex rhetoric, the discrete choices are typically phrased to benefit whoever frames the argument. A false dilemma is still false even when it's embedded in a larger tree.
I tried writing diary entries in argdown for a while.
It was fun having this big visual map of all my thoughts on everything, especially when I reused premises a lot. It wasn't particularly useful though.
You noticed yourself reusing premises, which potentially reinforced your choices. You also took the time to analyse and write down your thoughts. I think it was useful in some ways, even if you didn't use the output afterwards.
Wow, it’s crazy seeing your dreams randomly pop up on hacker news. Guess I’ll be switching to this syntax!
OP, if you're the author: any plans for next steps? I'll be folding this into my upcoming book and website (and almost certainly extending it a bit), so I'd be curious to hear if there are other large-scale projects underway.
Beautiful docs btw, this style should be a lesson for all of us. I guess you’d expect someone interested in arguments to write clearly lol
I may know nothing about debates, but my impression is that you should always make whatever diagram does its job best. Argdown seems to constrain everything to its framework and force the style of the output.
These can be especially helpful for Causal Analysis based on Systems Theory (CAST), which is similar to root cause analysis but leans more heavily on logical dependencies.
I'm not actively using it, but I'm interested to play around with this for a bit.
At work, we have architectural decision logs, for example how to structure our authentication, how to deploy services and such. Some of these decisions are fairly obvious and straightforward, but some decisions come from days and weeks of discussions between different teams and people over the past few years.
This looks like an interesting way to provide an overview of what has been considered and what the take of the group is on these different points. This would allow new people in the teams to challenge things more substantially instead of going through four steps we've been through several times already.
I don't think you ever need something like this. It's just another tool in your toolbox. It's like graphs and diagrams: at some point, some information will be easier to show this way than by writing it out in detail.
I've used it a few times. For example when justifying a project of migrating a large legacy app to docker deployment.
It's also useful for getting a specific kind of result from LLMs - GPT knows how to summarise discussions/conflicts in ArgDown format.
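A minimal sketch of what that can look like in code, assuming the OpenAI Python SDK; the model name and prompt wording are just placeholders, not a recipe:

```python
from openai import OpenAI

client = OpenAI()  # expects OPENAI_API_KEY in the environment

def summarise_as_argdown(discussion: str, model: str = "gpt-4o") -> str:
    """Ask the model to restate a free-form discussion as an ArgDown map."""
    response = client.chat.completions.create(
        model=model,
        messages=[{
            "role": "user",
            "content": "Summarise the following discussion in ArgDown format, "
                       "nesting pro and con arguments under the main claims:\n\n"
                       + discussion,
        }],
    )
    return response.choices[0].message.content
```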
There was a website a while back called (I believe) Arguman that did this kind of thing. It let everyone add arguments and rebuttals and up/downvote them. Last time I looked, it was down (it's still on the Wayback Machine, but it was a web app, so nothing really loads).
Mapping different viewpoints to combat disinformation or create better policies is key in this age. I'd like to see better integration with cognitive psychology and its overview of biases, also in relation to personal insecurities and trauma, and with agogy and education, like The Evidence Toolkit.
A short overview of the Argumentation theory and tooling field:
“Within computer science, the ArgMAS workshop series (Argumentation in Multi-Agent Systems), the CMNA workshop series,[34] and the COMMA Conference, are regular annual events attracting participants from every continent. The journal Argument & Computation is dedicated to exploring the intersection between argumentation and computer science. ArgMining is a workshop series dedicated specifically to the related argument mining task.
Data from the collaborative structured online argumentation platform Kialo has been used to train and to evaluate natural language processing AI systems such as, most commonly, BERT and its variants.”
https://en.m.wikipedia.org/wiki/Argumentation_theory
Sibling tooling (with help from Sonnet and Wikipedia):
1. Argument Interchange Format (AIF), 2006:
A standardized format for representing argumentative structures in a machine-readable way. It's used in various academic tools and research projects.
2. Rationale, 2004:
A software tool developed for academic use, particularly in teaching critical thinking and argument analysis. It offers more structured mapping capabilities than Kialo.
3. Araucaria, 2001:
An open-source argument mapping tool developed by researchers at the University of Dundee. It's designed for analyzing and diagramming arguments.
4. ArgDown, 2016:
A markdown-like language for creating argument maps, which can be useful for programmatic approaches to argument analysis.
5. OVA (Online Visualization of Argument), 2010:
A web-based tool for argument analysis and visualization, developed by researchers at the University of Dundee.
6. Argunet, 2007:
An open-source argument mapping tool that allows for collaborative work and integrates with a database of arguments.
7. AGORA-net, 2013:
A web-based platform for argument reconstruction and evaluation, used in academic settings.
8. ADA tools:
Gregor Betz (2022). "Natural-Language Multi-Agent Simulations of Argumentative Opinion Dynamics". Journal of Artificial Societies and Social Simulation. 25: 2. arXiv:2104.06737. doi:10.18564/jasss.4725.
https://www.gregorbetz.de/
Yes, it works really well in my experience. Although I mostly use it for technical information and justifying some decisions. But you can totally ask modern LLMs to "summarise this in ArgDown format" and they'll do the right thing.