Can Large Language Models (LLMs) like ChatGPT be used to produce useful information?

Wasn’t sure where best to post this or how wide the interest would be, so I’m piggybacking on this thread.

While looking at different tools and APIs I’ve increasingly found MCP servers being provided by organisations, for example:
https://www.ebi.ac.uk/ols4/mcp
https://string-db.org/help//mcp/

What is MCP you may ask…
https://en.wikipedia.org/wiki/Model_Context_Protocol

So, in short, providers of bioinformatics resources are now offering ways for LLMs (particularly the agentic variety) to interact with them: retrieving information and often doing more, such as running enrichment analyses or identifying protein interactions.
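To make "interact with them" concrete: MCP is JSON-RPC 2.0 under the hood, so a client request to one of these servers boils down to a small JSON message. Here's a minimal sketch of how the two core methods look; the tool name and arguments below are made up for illustration and are not taken from the servers linked above:

```python
import json


def mcp_request(method: str, params: dict, req_id: int = 1) -> str:
    """Build an MCP (JSON-RPC 2.0) request body as a JSON string."""
    return json.dumps({
        "jsonrpc": "2.0",
        "id": req_id,
        "method": method,
        "params": params,
    })


# First ask the server which tools it exposes (MCP method "tools/list")...
list_tools = mcp_request("tools/list", {})

# ...then invoke one. "protein_interactions" and its arguments are
# hypothetical -- a real server advertises its own tool names and schemas
# in the tools/list response.
call_tool = mcp_request("tools/call", {
    "name": "protein_interactions",
    "arguments": {"identifier": "TP53", "species": 9606},
}, req_id=2)

print(call_tool)
```

An agentic LLM client does essentially this in a loop: list the tools, let the model pick one and fill in the arguments, send the request, and feed the result back into the conversation.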
 
Article from Wiki Education:

Wiki Edu said:
Like many organizations, Wiki Education has grappled with generative AI, its impacts, opportunities, and threats, for several years. As an organization that runs large-scale programs to bring new editors to Wikipedia ... we have deep understanding of what challenges face new content contributors to Wikipedia — and how to support them to successfully edit.
My conclusion is that, at least as of now, generative AI-powered chatbots like ChatGPT should never be used to generate text for Wikipedia; too much of it will simply be unverifiable.

Our staff would spend far more time attempting to verify facts in AI-generated articles than if we’d simply done the research and writing ourselves.

The article also lists a few areas where chatbots can be helpful.

The article does not mention several other issues with chatbots, such as consent (for example, some of the data used for training was under copyright and should not have been used).
 