Unutmaz post about AI model use for ME/CFS

Discussion in 'Research methodology news and research' started by Jaybee00, Mar 9, 2025 at 6:51 PM.

  1. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    Ah, I see we have a thread on it:
    https://www.s4me.info/threads/biomapai-artificial-intelligence-multi-omics-framework-modeling-of-myalgic-encephalomyelitis-chronic-fatigue-syndrome-2024-xiong-et-al.39136/


    If there’s no overlap in the data, I agree that it’s encouraging that there are overlaps in the results.
     
    mariovitali and Peter Trewhitt like this.
  2. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    Being able to express reasoning doesn’t mean that it’s actually able to reason. Or that the reasoning it expresses was the reasoning it used.

    This goes for both humans and AI. But just because humans do it, doesn’t mean that we should accept it from AI.

    If we want to give AI responsibilities, we need to know that it’s able to reason, and not just look like it’s able to reason.
     
    Yann04 and Peter Trewhitt like this.
  3. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    16,351
    Location:
    London, UK
    Well that is what I meant. I did not mean writing the assumptions into a program. It is the implicit assumptions involved in thinking that a certain program will do what it is supposed to do (get the right answer) that will be the problem.

    One simple assumption is that there is always one right answer to any question. All computers work with that assumption, even if we try to make them have 'doubts'. The human brain does not use the sort of computing that requires one right answer. So in a sense we know that we are asking AI to do something that we wouldn't do, but still pretend it is what we would do.
     
    Wonko and Peter Trewhitt like this.
  4. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    16,351
    Location:
    London, UK
    There is an old saying that great minds think alike and fools seldom differ.
    Sadly, research targets often converge on insoluble questions that never needed asking in the first place. Their insolubility guarantees there is always more work to do!
     
    Steppinup and Peter Trewhitt like this.
  5. mariovitali

    mariovitali Senior Member (Voting Rights)

    Messages:
    548
    @Jonathan Edwards I dream of a day when unbiased AI algorithms will be making the right decisions for us. Probably the biggest hurdle to moving ME/CFS research forward has been, and still is, the "Selfish gene"
     
    Last edited: Mar 10, 2025 at 3:29 PM
    rvallee and Peter Trewhitt like this.
  6. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    Can you elaborate on this? I have not heard of this assumption before.
     
    Peter Trewhitt likes this.
  7. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    16,351
    Location:
    London, UK
    In computers all computing events consist of sending two signals to a gate that has a fixed rule for what the output should be. In general terms one input signal can be seen as determined by what you want to interrogate ('data') and the other signal is determined (ultimately) by the programmed rules for interrogation. The output you get for any given data input is totally determined by the programmed rules.

    In brains a signal carrying data to be interrogated is generally sent to about 10,000 places at once. Moreover, it will arrive at each of those places together with not just one other signal representing maybe 'expectations' or something else programmed by what has gone before, but perhaps 100 such signals, each with a different significance.

    The result is that the output consists of the firings and non-firings of maybe 10,000 integrator units, with the speed of firing depending on how well the data signal 'fits' with 100 other signals. The first few firings win out and inhibit the others. So the system responds not with a 'right answer' but a 'best fit' answer. Moreover, the basis of 'best fit' will depend on a vast combinatorial range of prior best fit computations, such that there are no knowable rules for the system.

    'Neural network' models used to program AI machines notionally include multiple weighted inputs to integrators, but in reality everything is simulated with fixed binary gates with two inputs. Best fit can be simulated in the way that pseudorandom number generators simulate randomness, but it is ultimately arbitrary because there is no real-time analogue temporal competition generating best fit by analogue rules.
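
    (For concreteness: the 'integrator with multiple weighted inputs' in a standard artificial neural network is just a weighted sum passed through a nonlinearity, all carried out as ordinary deterministic digital arithmetic. A minimal sketch, with weights and inputs made up purely for illustration:)

        import numpy as np

        # One artificial "integrator" unit: a weighted sum of many inputs passed
        # through a nonlinearity. The numbers below are made up for illustration.
        def integrator_unit(inputs, weights, bias):
            activation = np.dot(weights, inputs) + bias   # weighted sum of all inputs
            return 1.0 / (1.0 + np.exp(-activation))      # sigmoid 'firing rate'

        inputs = np.array([0.2, 0.9, 0.1, 0.7])    # hypothetical incoming signals
        weights = np.array([0.5, -1.2, 0.8, 2.0])  # hypothetical learned weights
        print(integrator_unit(inputs, weights, bias=-0.1))
        # However many inputs the unit has, the whole calculation is still executed
        # as deterministic binary arithmetic on the underlying hardware.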
     
  8. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,838
    Location:
    UK
    This isn't true. If you look at the transformer model and how it works, the tokens propagate through in a deterministic manner until the last layer, which outputs a vector of possible tokens. A simple algorithm would simply take the token with the highest activation, but what typically happens is that the top k tokens are taken with their activations and turned into a probability distribution, which is randomly sampled to get the next token. Thus for the same inference we get different answers. LLMs have a temperature parameter that controls the sampling process and the amount of randomness, and hence the 'creativity' in the answer.

    I guess you can argue it is deterministic given a seed to the PRNG, but this process leads to different answers (of different quality).
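
    A minimal sketch of that sampling step, purely for illustration (the function name and the k and temperature defaults are made up, not taken from any particular library):

        import numpy as np

        def sample_next_token(logits, k=50, temperature=0.8, rng=None):
            """Pick the next token using top-k, temperature-scaled sampling."""
            rng = rng or np.random.default_rng()
            logits = np.asarray(logits, dtype=float)
            # Keep only the k highest-scoring candidate tokens.
            top_idx = np.argsort(logits)[-k:]
            # Lower temperature sharpens the distribution, higher flattens it.
            top_logits = logits[top_idx] / temperature
            # Softmax over the surviving candidates gives a probability distribution.
            probs = np.exp(top_logits - top_logits.max())
            probs /= probs.sum()
            # Random draw: the same logits can yield different tokens on different calls.
            return rng.choice(top_idx, p=probs)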
     
  9. Jonathan Edwards

    Jonathan Edwards Senior Member (Voting Rights)

    Messages:
    16,351
    Location:
    London, UK
    This gets very complicated and I know nothing about the transformer model. However, as you describe it, the system injects some randomness, maybe to mimic 'creativity'. There is no doubt that randomness is involved in creativity, but my understanding is that it is not what actually provides the value of creativity, which is identifying patterns that have a higher 'value' than others, for reasons that have never yet been acknowledged.

    What I don't see any mechanical system as doing is making use of genuinely continuous functions in generating a computed output in a way that reflects 'preference' based on local identification of value (rather than identification by the human user). At the level of Goldstone mode excitations I can see that being possible in post-synaptic integration. The integrator may be able to identify a 'higher value' that has never previously been identified in any rules. Something that would explain the haunting appeal of Erik Satie's music, which appears to simply break rules randomly but somehow creates one of the most evocative sounds ever written, in a way that 100 other modern composers fail to achieve.
     
    voner and Peter Trewhitt like this.
  10. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    You might want to look into Jacob Collier. He’s an exceptionally talented musician who believes that everything can harmonise with everything. And he’s talented enough to pull it off!

    On a different note, Gymnopédie No. 1 is eerily similar to a very lovely soundtrack from the game Minecraft. That was a pleasant nostalgic surprise!
     
    Sean and Peter Trewhitt like this.
  11. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    453
    Location:
    Western US
    Sentience doesn't matter though. The brain evolved to feel in order to make you do something in response, so that you can survive and proliferate. AI obviously has no need for that since it didn't evolve, and hence has no delusional notion of self. So it is purely a predictive model, an emulation without purpose. It's a tool, in other words. But that doesn't mean that AI can't be smarter than humans at figuring things out. It already is in some cases.
     
    Sean and Peter Trewhitt like this.
  12. poetinsf

    poetinsf Senior Member (Voting Rights)

    Messages:
    453
    Location:
    Western US
    If you think about it, that's how human brains evolved too: out of the need to guess where the food might be and where the predators are, and then do something about it.
     
    Peter Trewhitt likes this.
  13. Utsikt

    Utsikt Senior Member (Voting Rights)

    Messages:
    1,143
    Location:
    Norway
    True, but that wasn’t my point. Humans are able to use logic to prove whether something is true or not. LLM AI doesn’t do that.
     
    Peter Trewhitt likes this.
  14. Adrian

    Adrian Administrator Staff Member

    Messages:
    6,838
    Location:
    UK
    The point is not about creativity (hence I put 'creative' in quotes). The point is about algorithms that ensure a range of different answers, which I think you had said didn't happen with AI (or generally with computers). More generally, with agentic reasoning we are seeing models being used for planning, generating ideas and assessing their value (look for example at the Google co-scientist system). This is of course based on training, but then human knowledge and reasoning are also based on training. The latest AI systems (aimed at reasoning) are based on reinforcement learning: giving the model reasoning tasks and then providing yes/no type training signals based on the overall result. Not too dissimilar to training people (apart from perhaps more feedback being given).

    In terms of identifying good patterns: if you look at, say, the AI used for playing Go, the system was trained against itself with reinforcement learning, based only on the result of the game (win/lose), over thousands of games. This led to a completely new set of tactics (valuable patterns) that beat human players, and human players started to learn tactics from the AI. So in a very closed situation (although with a vast search space), an AI system was able to create new ways to play the game.
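
    The training loop itself can be sketched very simply. Here is a toy illustration of self-play reinforcement learning where the only training signal is the win/lose result of each game; the game, payoff table and learning rate are all made up, and the real Go systems combine this idea with deep networks and tree search:

        import numpy as np

        rng = np.random.default_rng(0)

        # Toy "game": each player picks one of three moves and a fixed payoff table
        # decides the winner (+1 = player A wins, -1 = loses, 0 = draw). This stands
        # in for the vastly larger search space of Go.
        PAYOFF = np.array([[ 0,  1, -1],
                           [-1,  0,  1],
                           [ 1, -1,  0]])

        def play(policy_a, policy_b):
            a = rng.choice(3, p=policy_a)
            b = rng.choice(3, p=policy_b)
            return a, b, PAYOFF[a, b]

        # Self-play: one policy plays against a copy of itself, and the only feedback
        # is the result of the game.
        logits = np.zeros(3)
        lr = 0.1
        for game in range(5000):
            policy = np.exp(logits) / np.exp(logits).sum()
            a, b, result = play(policy, policy)
            # REINFORCE-style update: make the chosen move more likely after a win,
            # less likely after a loss (no update on a draw).
            grad = -policy
            grad[a] += 1.0
            logits += lr * result * grad

        print("learned move probabilities:",
              np.round(np.exp(logits) / np.exp(logits).sum(), 2))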


    As I understand it, what you are saying is that some methods of computation lead to different abilities than others (so biological computing mechanisms lead to different abilities). I would argue that isn't the case (based on the Church-Turing thesis). What will differ is computational efficiency, and there is a lot of computer architecture work being done to speed up AI models, but this doesn't speak to the underlying question of whether a function is computable; often it is about the energy/time put in to get a result.
     
    Peter Trewhitt likes this.
