eh, in inverse kinematics it's simply a fixed position of a point that the computer has to figure out how to reach... plus, I read somewhere that it's also quite CPU-expensive, but maybe that was back in the days of Pentium 2s and stuff... the main problem is, like, if you've played Half-Life 2, you can see Alyx's legs snap all over uneven surfaces... the same goes for the Striders, in fact the Striders' legs can change length to fit the terrain they're stepping on, it's very subtle but if you purposely spawn a Strider next to a cliff you'll see what I mean...
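just to show what I mean by the simple 'reach a point' case, here's a rough two-bone leg IK sketch in C++ -- the segment lengths, target numbers and names are all made up for illustration, it's just the textbook law-of-cosines solution, not how Source actually does it:

```cpp
#include <cmath>
#include <cstdio>

// Two-bone planar IK: given thigh/shin lengths and a foot target (x, y)
// relative to the hip, solve for hip and knee angles that put the foot on
// the target. Straight law-of-cosines geometry; all numbers are made up.
struct Angles { double hip; double knee; };

Angles solveTwoBoneIK(double l1, double l2, double x, double y) {
    double dist = std::sqrt(x * x + y * y);
    double maxReach = l1 + l2;
    if (dist > maxReach) dist = maxReach;   // out of reach -> just fully extend

    // interior angle at the knee, from the law of cosines
    double cosKnee = (l1 * l1 + l2 * l2 - dist * dist) / (2.0 * l1 * l2);
    double knee = std::acos(std::fmax(-1.0, std::fmin(1.0, cosKnee)));

    // hip angle = direction to the target plus the offset inside the triangle
    double cosAlpha = (l1 * l1 + dist * dist - l2 * l2) / (2.0 * l1 * dist);
    double alpha = std::acos(std::fmax(-1.0, std::fmin(1.0, cosAlpha)));
    double hip = std::atan2(y, x) + alpha;

    return { hip, knee };
}

int main() {
    // 0.5 m thigh, 0.5 m shin, foot target 0.3 m forward and 0.8 m down
    Angles a = solveTwoBoneIK(0.5, 0.5, 0.3, -0.8);
    std::printf("hip = %.3f rad, knee = %.3f rad\n", a.hip, a.knee);
    return 0;
}
```

you call that once per foot, it spits out two angles and that's it, it doesn't know or care about anything else in the scene...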
it's not just asking the computer 'what positions should these different jointed pieces be in so that they join these 2 points', it also has to take into account how they will move so that they don't collide with anything, and that they actually end up in the final position, etc etc etc... it's a bit like the comparison between collision detection and ragdoll physics...
in terms of timeframe and experience:
collision detection = VRML back in the Pentium MMX days
ragdoll physics = Unreal Tournament 2003
in terms of how I think it works:
simple inverse kinematics = static; you simply define a very specific situation for the computer to work out the solution for, and time does not flow
modelling legs = dynamic; you may not reach the solution right away, so you have to find ways of getting to it (like figuring out where to put the robot's foot without making it topple over), and for every timestep the computer is calculating, it has to take into account the speed, acceleration, etc etc of the moving parts, in addition to their positions, as time flows (rough sketch of that below)
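to make the 'time flows' part concrete, here's a tiny sketch of one moving part being pushed towards a goal frame by frame -- the spring-damper gains and the 60 Hz timestep are just numbers I made up, a real leg controller would also be checking balance, contacts and so on:

```cpp
#include <cstdio>

// One moving part being driven towards a goal over time instead of being
// solved in one shot: the state (position AND velocity) has to be carried
// forward every frame. Spring-damper gains and the 60 Hz step are made up.
int main() {
    double pos = 0.0, vel = 0.0;            // state of one joint/part
    const double goal = 1.0;                // where the solver wants it to end up
    const double k = 50.0, damping = 10.0;  // spring-damper gains (arbitrary)
    const double dt = 1.0 / 60.0;           // one physics frame at 60 Hz

    for (int frame = 0; frame <= 120; ++frame) {
        if (frame % 30 == 0)
            std::printf("t=%.2fs pos=%.3f vel=%.3f\n", frame * dt, pos, vel);
        double accel = k * (goal - pos) - damping * vel;  // push towards the goal
        vel += accel * dt;                  // integrate velocity first...
        pos += vel * dt;                    // ...then position (semi-implicit Euler)
    }
    return 0;
}
```

the point being the solver can't just jump straight to the answer like in the static case -- it has to carry the position and velocity forward every single frame and hope it gets there without knocking anything over...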