I ported the Hooke and Jeeves algorithm to Python from C. I ported it because I've been writing a Python toolkit for kinetic modeling of biochemical systems. There are certain modeling methods I want to use that are difficult or impossible to do with COPASI (I'll try to write more about this in a later post), so I'm making a library specific to my needs. Of COPASI's parameter fitting routines, I found Hooke and Jeeves to be the most useful for fitting parameters to my current system of interest (Nelder-Mead and Levenberg-Marquardt also worked reasonably well; the other methods, not nearly as much).
For the code I used as a basis, see:
For other ports of this algorithm and various other useful information see:
For my Python port, see:
I made two versions of it.
The first (hooke_jeeves.py) is nearly an exact port. I changed the interface of the optimization function to resemble the interface used by scipy.optimize.minimize.
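To give a feel for the algorithm, here is a minimal sketch of Hooke and Jeeves pattern search with a minimize-style calling convention. This is my own illustrative version, not the code from hooke_jeeves.py; the parameter names and defaults are assumptions.

```python
import numpy as np

def hooke_jeeves(fun, x0, step=0.5, step_reduction=0.5, tol=1e-8, max_iter=1000):
    """Minimal Hooke & Jeeves pattern search (illustrative sketch only).

    Alternates exploratory moves (probe +/- step along each axis) with
    pattern moves (jump further in the direction that just improved).
    """
    x = np.asarray(x0, dtype=float)   # best point found so far
    fx = fun(x)
    base = x.copy()                   # point exploration starts from
    for _ in range(max_iter):
        # Exploratory move: try +/- step along each coordinate of the base.
        trial = base.copy()
        f_trial = fun(trial)
        for i in range(len(trial)):
            for delta in (step, -step):
                cand = trial.copy()
                cand[i] += delta
                f_cand = fun(cand)
                if f_cand < f_trial:
                    trial, f_trial = cand, f_cand
                    break
        if f_trial < fx:
            # Improvement: pattern move extrapolates past the new best point.
            base = trial + (trial - x)
            x, fx = trial, f_trial
        elif not np.array_equal(base, x):
            # Pattern move overshot: retreat to the best point and re-explore.
            base = x.copy()
        else:
            # Stuck at the best point: shrink the step size.
            step *= step_reduction
            if step < tol:
                break
    return x, fx
```

For example, minimizing `lambda p: (p[0] - 3)**2 + (p[1] + 1)**2` from `[0.0, 0.0]` converges to approximately `[3, -1]`.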
The other version (hooke_jeeves_bounded.py) allows the user to specify bounds for the solution. Bounds are implemented in a fairly crude manner: any time the algorithm tries to pick a point outside the bounds, the point is shunted to the nearest edge, and optimization continues from there. That seems to be the most straightforward way to convert an unbounded optimization algorithm to a bounded one, but I've read criticisms (I don't remember where off the top of my head) that this approach isn't necessarily theoretically sound or robust. But that's the way COPASI does it, and it seems to work well in their implementation, so, lacking the expertise to do better, I'm just following their lead.
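The shunt-to-nearest-edge idea amounts to clamping each coordinate into its interval. A one-liner sketch (the helper name is mine, not from hooke_jeeves_bounded.py):

```python
import numpy as np

def clamp_to_bounds(x, lower, upper):
    # Any coordinate outside [lower, upper] is moved to the nearest edge.
    return np.clip(x, lower, upper)

# Example: the first coordinate exceeds its upper bound, the second falls
# below its lower bound; both are shunted to the nearest edge.
point = np.array([5.0, -2.0, 0.5])
clamped = clamp_to_bounds(point, np.zeros(3), np.ones(3))
# clamped is [1.0, 0.0, 0.5]
```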
Anyways, hopefully someone finds this useful.
EDIT: I found a different, and I think better, way to handle bounds. I added the function hooke_bounded to hooke_jeeves.py. The new bounded function wraps the objective function in a closure that returns the normal value for a point inside the bounds, but infinity for a point outside them. This new method has much cleaner code, but may make the algorithm less likely to find optima near the bounds (whereas the previous method has the opposite problem, tending to overemphasize optima near the bounds). It's a tradeoff, so I left the code for the other bounding method up there too.
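The closure approach can be sketched like this (my own illustrative wrapper; the name and signature are assumptions, not necessarily how hooke_bounded is written internally):

```python
import math

def bounded(fun, lower, upper):
    """Wrap an objective in a closure that returns infinity outside the bounds."""
    def wrapped(x):
        out_of_bounds = any(
            xi < lo or xi > hi for xi, lo, hi in zip(x, lower, upper)
        )
        return math.inf if out_of_bounds else fun(x)
    return wrapped

f = bounded(lambda x: sum(v * v for v in x), lower=[0.0, 0.0], upper=[1.0, 1.0])
f([0.5, 0.5])  # 0.5
f([2.0, 0.5])  # inf
```

Since any unbounded minimizer will immediately reject an infinite value, the wrapped objective steers the search back inside the feasible region without touching the optimizer itself.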