Unutmaz post about AI model use for MECFS

Discussion in 'Research methodology news and research' started by Jaybee00, Mar 9, 2025.

  1. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,143
    Location:
    Norway
    Ah, I see we have a thread on it:
    https://www.s4me.info/threads/biomapai-artificial-intelligence-multi-omics-framework-modeling-of-myalgic-encephalomyelitis-chronic-fatigue-syndrome-2024-xiong-et-al.39136/


    If there’s no overlap in the data, I agree that it’s encouraging that there are overlaps in the results.
     
    mariovitali and Peter Trewhitt like this.
  2. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,143
    Location:
    Norway
    Being able to express reasoning doesn’t mean that it’s actually able to reason. Or that the reasoning it expresses was the reasoning it used.

    This goes for both humans and AI. But just because humans do it, doesn’t mean that we should accept it from AI.

    If we want to give AI responsibilities, we need to know that it’s able to reason, and not just look like it’s able to reason.
     
    Yann04 and Peter Trewhitt like this.
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,642
    Location:
    London, UK
    Well that is what I meant. I did not mean writing the assumptions into a program. It is the implicit assumptions involved in thinking that a certain program will do what it is supposed to do (get the right answer) that will be the problem.

    One simple assumption is that there is always one right answer to any question. All computers work with that assumption, even if we try to make them have 'doubts'. The human brain does not use the sort of computing that requires one right answer. So in a sense we know that we are asking AI to do something that we wouldn't do, but still pretend it is what we would do.
     
    Wonko and Peter Trewhitt like this.
  4. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,642
    Location:
    London, UK
    There is an old saying that great minds think alike and fools seldom differ.
    Sadly, research targets often converge on insoluble questions that never needed asking in the first place. Their insolubility guarantees there is always more work to do!
     
    Steppinup and Peter Trewhitt like this.
  5. mariovitali

    mariovitali Senior Member (Voting Rights)

    Messages:
    577
    @Jonathan Edwards I dream of a day when unbiased AI algorithms will be making the right decisions for us. Probably the biggest hurdle to moving ME/CFS research forward has been, and still is, the "Selfish gene".
     
    Last edited: Mar 10, 2025
    rvallee and Peter Trewhitt like this.
  6. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,143
    Location:
    Norway
    Can you elaborate on this? I have not heard of this assumption before.
     
    Peter Trewhitt likes this.
  7. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,642
    Location:
    London, UK
    In computers, all computing events consist of sending two signals to a gate that has a fixed rule for what the output should be. In general terms, one input signal can be seen as determined by what you want to interrogate ('data') and the other signal is determined (ultimately) by the programmed rules for interrogation. The output you get for any given data input is totally determined by the programmed rules.

    In brains a signal carrying data to be interrogated is generally sent to about 10,000 places at once. Moreover, it will arrive at each of those places together with not just one other signal representing maybe 'expectations' or something else programmed by what has gone before, but perhaps 100 such signals, each with a different significance.

    The result is that the output consists of the firings and non-firings of maybe 10,000 integrator units, with the speed of firing depending on how well the data signal 'fits' with 100 other signals. The first few firings win out and inhibit the others. So the system responds not with a 'right answer' but a 'best fit' answer. Moreover, the basis of 'best fit' will depend on a vast combinatorial range of prior best-fit computations, such that there are no knowable rules for the system.

    'Neural network' models used to programme AI machines notionally include multiple weighted inputs to integrators but in reality everything is simulated with fixed binary gates with two inputs. Best fit can be simulated in the way that pseudorandom number generators simulate randomness but it is ultimately arbitrary because there is no realtime analogue temporal competition generating best fit by analogue rules.
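    To make the contrast concrete, here is a toy sketch (purely illustrative; the weights, threshold and inputs are invented numbers): a two-input gate whose output is fixed by a rule table, versus a unit that weighs a hundred concurrent inputs and responds according to how well the whole pattern fits.

    ```python
    import numpy as np

    # Fixed two-input gate: the output is wholly determined by the rule table.
    def and_gate(a: int, b: int) -> int:
        return a & b

    # Toy 'integrator' unit: one data pattern arrives alongside many weighted
    # signals standing in for prior 'expectations'; the unit fires if the
    # overall fit is good enough. Numbers are made up for illustration only.
    rng = np.random.default_rng(0)
    weights = rng.normal(size=100)   # 100 concurrent inputs, each with its own significance

    def integrator(inputs: np.ndarray, threshold: float = 5.0) -> bool:
        return float(weights @ inputs) > threshold   # 'best fit' style response, not a lookup rule

    print(and_gate(1, 0))                    # always 0 for these inputs
    print(integrator(rng.normal(size=100)))  # depends on how the whole pattern fits the weights
    ```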
     
  8. Adrian

    Adrian Administrator Staff Member

    Messages:
    7,065
    Location:
    UK
    This isn't true. If you look at the transformer model and how it works, the tokens propagate through in a deterministic manner until the last layer, which outputs a vector of possible tokens. A simple algorithm would just take the token with the highest activation. But what typically happens is that the top k tokens are taken with their activations and turned into a probability distribution, and this is randomly sampled to get the next token. Thus for the same inference we get different answers. LLMs have a temperature parameter that controls the sampling process and the amount of randomness, and hence the 'creativity' in the answer.

    I guess you can argue it is deterministic given a seed to the PRNG, but this process leads to different answers (of different quality).
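    For illustration, a minimal Python sketch of that sampling step (not any particular LLM's code; top_k, temperature and the fake logits are just example values):

    ```python
    import numpy as np

    def sample_next_token(logits, temperature=0.8, top_k=40, rng=None):
        """Pick the next token from the model's final-layer scores (logits).

        Greedy decoding would just take argmax(logits). Instead, the top-k
        logits are rescaled by the temperature, turned into a probability
        distribution with softmax, and sampled. A higher temperature flattens
        the distribution, giving more randomness ('creativity').
        """
        rng = rng or np.random.default_rng()
        top = np.argsort(logits)[-top_k:]        # indices of the k highest-scoring tokens
        scaled = logits[top] / temperature       # temperature rescaling
        probs = np.exp(scaled - scaled.max())    # numerically stable softmax
        probs /= probs.sum()
        return rng.choice(top, p=probs)          # stochastic choice of the next token id

    # The same logits can give different tokens on different calls, unless the
    # PRNG is seeded, in which case the run becomes reproducible.
    logits = np.random.default_rng(0).normal(size=50_000)   # fake vocabulary scores
    print(sample_next_token(logits), sample_next_token(logits))
    ```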
     
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,642
    Location:
    London, UK
    This gets very complicated and I know nothing about the transformer model. However, as you describe it, the system injects some randomness, maybe to mimic 'creativity'. There is no doubt that randomness is involved in creativity, but my understanding is that it is not what actually provides the value of creativity, which is to identify patterns that have a higher 'value' than others, for reasons that have never yet been acknowledged.

    What I don't see any mechanical system as doing is making use of genuinely continuous functions in generating a computed output in a way that reflects 'preference' based on local identification of value (rather than identification by the human user). At the level of Goldstone mode excitations I can see that being possible in post-synaptic integration. The integrator may be able to identify a 'higher value' that has never previously been identified in any rules. Something like that would explain the haunting appeal of Erik Satie's music, which appears simply to break rules at random but somehow produces one of the most evocative sounds ever created, in a way that 100 other modern composers fail to achieve.
     
    voner and Peter Trewhitt like this.
  10. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,143
    Location:
    Norway
    You might want to look into Jacob Collier. He's an exceptionally talented musician who believes that everything can harmonise with everything. And he's talented enough to pull it off!

    On a different note, Gymnopédie No. 1 is eerily similar to a very lovely soundtrack from the game Minecraft. That was a pleasant nostalgic surprise!
     
    Sean and Peter Trewhitt like this.
  11. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    548
    Location:
    Western US
    Sentience doesn't matter though. The brain evolved to feel in order to make you do something in response, so that you can survive and proliferate. AI obviously has no need for that since it didn't evolve, and hence has no delusional notion of self. So it is purely a predictive model, an emulation without purpose. It's a tool, in other words. But that doesn't mean that AI can't be smarter than humans at figuring things out. It already is in some cases.
     
    Sean and Peter Trewhitt like this.
  12. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    548
    Location:
    Western US
    If you think about it, that's how human brains evolved too, out of the need to guess where the food might be and where the predators are, and then do something about it.
     
    Peter Trewhitt likes this.
  13. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,143
    Location:
    Norway
    True, but that wasn’t my point. Humans are able to use logic to prove that something is true or not. LLM AI doesn’t do that.
     
    Peter Trewhitt likes this.
  14. Adrian

    Adrian Administrator Staff Member

    Messages:
    7,065
    Location:
    UK
    The point is not about creativity (hence I put 'creativity' in quotes). The point is about algorithms that ensure a range of different answers - which I think you had said didn't happen with AI (or generally with computers). More generally, with agentic reasoning we are seeing models being used for planning, generating ideas and assessing their value (look, for example, at the google co-scientist system). This is of course based on training - but then human knowledge and reasoning is also based on training. The latest AI systems (aimed at reasoning) are based on reinforcement learning: giving the model reasoning tasks and then providing yes/no type training signals based on the overall result. Not too dissimilar to training people (apart from perhaps more feedback being given).

    In terms of identifying good patterns - look at, say, the AI used for playing Go. The system was trained against itself with reinforcement learning based on the result of each game (win/lose), playing thousands of games. This led to a completely new set of tactics (valuable patterns) that beat human players, and human players then started to learn tactics from the AI. So in a very closed situation (although with a vast search space), an AI system was able to create new ways to play the game.
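    As a purely illustrative toy (nothing like the scale or method of the actual Go system): self-play with nothing but a win/lose signal can rediscover the optimal strategy for a tiny game such as single-pile Nim.

    ```python
    import random
    from collections import defaultdict

    # Toy sketch: one pile of stones, take 1-3 per turn, whoever takes the last
    # stone wins. Each finished game gives only a win/lose signal, which is
    # pushed back onto every (position, move) pair each player chose.
    values = defaultdict(float)   # (stones_left, move) -> learned preference
    ALPHA = 0.1                   # learning rate, chosen arbitrarily for the sketch

    def choose(stones, explore=0.2):
        moves = [m for m in (1, 2, 3) if m <= stones]
        if random.random() < explore:
            return random.choice(moves)                       # explore
        return max(moves, key=lambda m: values[(stones, m)])  # exploit learned preferences

    def self_play(games=50_000, start=10):
        for _ in range(games):
            stones, history, player, winner = start, {0: [], 1: []}, 0, None
            while stones > 0:
                move = choose(stones)
                history[player].append((stones, move))
                stones -= move
                if stones == 0:
                    winner = player          # taking the last stone wins
                player = 1 - player
            for p in (0, 1):
                reward = 1.0 if p == winner else -1.0   # the only training signal
                for state_move in history[p]:
                    values[state_move] += ALPHA * (reward - values[state_move])

    self_play()
    # With enough games, taking 2 from 10 (leaving a multiple of 4) should score best,
    # which is the classic optimal strategy for this game.
    print({m: round(values[(10, m)], 2) for m in (1, 2, 3)})
    ```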


    As I understand it, what you are saying is that some methods of computation lead to different abilities than others (so biological computing mechanisms lead to different abilities). I would argue that isn't the case (based on the Church-Turing thesis). What will differ is computational efficiency: there is a lot of computer architecture work being done to speed up AI models, but this doesn't speak to the underlying question of whether a function is computable - often it is about the energy/time put in to get a result.
     
  15. Adrian

    Adrian Administrator Staff Member

    Messages:
    7,065
    Location:
    UK
    LLMs can do reasoning now and are being trained on lots of maths problems. So, for example, I asked a question of Phi4-mini (a small language model)

    And got this response, which I thought was OK logical reasoning, proving a statement true or not.

     
    Peter Trewhitt likes this.
  16. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,143
    Location:
    Norway
    This might be 100 % cherry picking and confirmation bias on my end, but this is the first search result on google for ‘can llms reason’:
    https://arxiv.org/abs/2408.07215
    The point being that appearing to be able to reason doesn’t actually mean that it is reasoning.
     
    Michelle and Peter Trewhitt like this.
  17. Adrian

    Adrian Administrator Staff Member

    Messages:
    7,065
    Location:
    UK
    I think reasoning capabilities have come a long way since that paper was published (although it's relatively new!). There has been a lot of work on chain-of-thought training approaches - for example, DeepSeek moved this a long way when they released their models. The example I gave used Phi4, the latest Microsoft model, released a week or so ago and trained to emphasise reasoning capabilities. Basically they are training models on maths problems.

    The DeepSeek-style chain of thought encourages the model to explain its reasoning, which seems to help (before that, CoT was being done more explicitly in prompts).

    I suspect there is a long way to go in this direction but lots of progress.

    You can argue about whether it is reasoning, but to me, if a model appears to reason over a range of problems then it is reasoning. With agentic systems, more complex tasks can be achieved because the model can be asked to generate a plan and then execute the parts of that plan - including searching for information, solving each individual piece and combining the results. It's hard to do complex reasoning tasks in one go (given limited short-term memory).
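    A rough sketch of that plan-then-execute pattern (the ask_model function is a hypothetical stand-in for whichever LLM API is in use, not a real library call):

    ```python
    def ask_model(prompt: str) -> str:
        # Stand-in for a real LLM call; here it just echoes so the sketch runs.
        return f"[model reply to: {prompt[:60]}...]"

    def solve_with_plan(task: str) -> str:
        # 1. Ask the model to break the task into steps (chain-of-thought style).
        plan = ask_model(
            f"Task: {task}\nList the steps needed to solve this, one per line."
        ).splitlines()

        # 2. Execute each step separately, feeding earlier results forward, so no
        #    single call has to hold the whole problem in its limited context.
        results = []
        for step in plan:
            results.append(ask_model(
                f"Step: {step}\nResults so far: {results}\nCarry out this step."
            ))

        # 3. Combine the pieces into a final answer.
        return ask_model(f"Task: {task}\nStep results: {results}\nGive the final answer.")

    print(solve_with_plan("Is 391 a prime number?"))
    ```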
     
    Utsikt and Peter Trewhitt like this.
  18. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    3,143
    Location:
    Norway
    The paper did find that the models were able to solve some of the easier problems (relatively speaking). So in that sense, they might have some capability for reasoning.

    My worry is that we use this as a kind of 'proof' that the AI must have used reasoning when it provides an answer, and that the answer is logically proven as a result.

    When you combine that with the issue of getting AI to show its work (and not just appearing like it shows its work), you have a huge potential for misplaced trust in LLMs and AI in general.

    Humans are prone to all kinds of biases, but we know that and we’ve designed systems to try and deal with it. Some, like the peer review process, are failing miserably. So you guys created this forum to try and establish what we actually know and what’s complete fantasy.

    There’s also the issue of how LLMs come across as confident and authoritative, regardless of their reasoning skills. This is obviously a huge issue with humans as well, but having a supercharged Wessely-of-all-trades in your pocket is bound to go wrong at some point.

    Lastly, have you ever seen an LLM say ‘I don’t know how to do that’? I can reason about my own ability to reason. Do they?

    I’m going off topic here, but these are some of the reasons for my apprehension about labelling something as ‘true reasoning’.
     
    Michelle and Peter Trewhitt like this.
  19. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    17,642
    Location:
    London, UK
    But the Church-Turing thesis only applies to Turing-machine-type computation. Turing originally used the word 'computer' to mean a man who computes. It is now clear that brains do not resemble Turing machines at all in the way they generate outputs from inputs. They might seem to if you still believe in the default integrate-and-fire model of linear summation, but in the last twenty years that has been shown to be wrong. A neuronal integrator can identify and respond to patterns involving hundreds of degrees of freedom. We do not have clear evidence that this provides abilities that mechanical computers cannot have, but it seems very plausible.

    Penrose reasoned that humans can prove theorems that cannot be proven by Turing machines. I am not sure whether his argument was valid, but his mistake was to think that brains operate roughly like Turing machines, and they clearly do not. If they compute the way modes of excitation of the electron field do, we can expect them to have powers way beyond a Turing machine - identifying a unique mode option in a unit with notionally billions of distinct mode options.

    I appreciate that some computers may produce more than one possible answer, but if that is achieved just by inserting some randomness it isn't clever. There is a widespread belief that processes are either determined or random, but I am fairly sure that at a fundamental level that is a misconception. At that level, events have both determined and stochastic components to their causation, and the combination of those two generates something completely novel that we can call preference. It allows 'value' to have meaning in physical dynamics. Turing machines have nothing like that. Throwing in a bit of randomness afterwards does not do it.
     
    Michelle, Peter Trewhitt and EndME like this.
  20. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    548
    Location:
    Western US
    I don't know if the current crop of LLMs can or can't, since I don't follow them too closely, but logic is mathematics and is therefore one of the simplest tasks for computers to perform. If not now, I'm sure they'll eventually have one that can reason as well as, or perhaps even better than, humans.

    There is no theoretical reason why a biochemical computer should be superior to an electronic one in reasoning and predicting, other than the current difference in complexity. The only difference is that biochemical ones evolved and therefore possess things like sentience, feelings, emotions or a sense of self. Those have more to do with self-preservation and replication within the biological realm than with abstract problem solving.
     
    Peter Trewhitt likes this.
