'*******************************************************
'* Program to demonstrate the use of multi-dimensional *
'* Steepest Descent Optimization subroutine            *
'* --------------------------------------------------- *
'* Reference: BASIC Scientific Subroutines, Vol. II    *
'* By F.R. Ruckdeschel, BYTE/McGRAW-HILL, 1981 [1].    *
'* --------------------------------------------------- *
'* Example: Find a local maximum of:                   *
'*          F(x,y,z) = sin(x)+2*cos(y)-sin(z)          *
'*                                                     *
'* SAMPLE RUN:                                         *
'*                                                     *
'* How many dimensions: 3                              *
'*                                                     *
'* Convergence criterion: 1e-9                         *
'*                                                     *
'* Maximum number of iterations: 50                    *
'*                                                     *
'* Starting constant: 1                                *
'*                                                     *
'* Input the starting point:                           *
'*                                                     *
'* X( 1) = 1                                           *
'* X( 2) = 1                                           *
'* X( 3) = 1                                           *
'*                                                     *
'* The results are:                                    *
'*                                                     *
'* X( 1) =  1.5707963                                  *
'* X( 2) = -0.0000316                                  *
'* X( 3) =  4.7123912                                  *
'*                                                     *
'* The local maximum found = 4.0000000                 *
'*                                                     *
'* The number of steps was: 33                         *
'*                                                     *
'*******************************************************
defint i-n
defdbl a-h,o-z

cls
print
input " How many dimensions: ", l
print
dim D(l),X(l),X1(l),Y(3)
input " Convergence criterion: ", e
print
input " Maximum number of iterations: ", m
print
input " Starting constant: ", xk
print
print " Input the starting point:"
print
for i=1 to l
  print " X(";i;") = "; : input X(i)
next
gosub 2000  'Call steepest descent optimization subroutine
print : print
print " The results are:"
print
for i=1 to l
  print " X(";i;") = ";
  print using "##.#######"; X(i)
next
print
print " Local maximum found: ";
print using "##.#######"; yy
print
print " The number of iterations was ";n
print
end

'******************************************************
1000 'Function subroutine
yy = SIN(X(1))+2#*COS(X(2))-SIN(X(3))
return
'******************************************************

'***************************************************
'* Steepest descent optimization subroutine        *
'* ----------------------------------------------- *
'* This routine finds the local maximum or minimum *
'* of an L-dimensional function using the method   *
'* of steepest descent (the gradient method).      *
'* The function must be available in subroutine    *
'* 1000. In this version, finite differences are   *
'* used to calculate the L partial derivatives.    *
'* ----------------------------------------------- *
'* INPUTS:                                         *
'*   l    - The dimension of function to study     *
'*   e    - The convergence criterion              *
'*   m    - The maximum number of iterations       *
'*   xk   - A starting constant                    *
'*   X(i) - Initial values of variables            *
'* OUTPUTS:                                        *
'*   X(i) - The locally optimum set                *
'*   yy   - The local maximum found                *
'*   n    - The number of iterations performed     *
'***************************************************
2000 n=0
'The routine needs three values of Y to get started
'Generate starting D(i) values
'These are not even good guesses and slow the program a little
dd=1
D(1)=1#/SQR(l)
for i=2 to l
  D(i)=D(i-1)
next
'Start initial probe
for j=1 to 3
  'Obtain yy and D(i)
  gosub 1000
  Y(j)=yy
  'Obtain approximations of the D(i)
  gosub 4000
  'Update X(i)
  gosub 3000
  gosub 3500
next j
'We now have a history to base the subsequent search on
'Accelerate search if approach is monotonic
2050 if Y(2)-Y(1)=0 then goto 2051
if (Y(3)-Y(2))/(Y(2)-Y(1))>0 then xk=xk*1.2
'Decelerate if heading the wrong way
2051 if Y(3)<Y(2) then xk=xk/2#
if Y(3)>Y(2) then goto 2100
'Restore the X(i)
for i=1 to l
  X(i)=X1(i)
next
goto 2200
2100 Y(1)=Y(2) : Y(2)=Y(3)
'Obtain new values
2200 gosub 1000 : Y(3)=yy
gosub 4000  'Get D(i)
'If dd=0 then the precision limit of the computer has been reached
if dd=0 then return
'Update X(i)
gosub 3500
'Check for maximum iterations and convergence
n=n+1
if n>=m then return
if ABS(Y(3)-Y(2))<e then return
goto 2050
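For readers who want the method outside of BASIC, here is a minimal Python sketch of the same steepest-ascent idea: central finite differences stand in for the listing's derivative subroutine (4000), the point is moved along the normalized gradient, and the step is halved when the objective decreases. This is a simplification of the listing's xk acceleration/deceleration logic, and all names (`grad`, `steepest_ascent`, `step`, `tol`) are mine, not from the book.

```python
import math

def f(x):
    # Objective from the listing: F(x, y, z) = sin(x) + 2*cos(y) - sin(z)
    return math.sin(x[0]) + 2.0 * math.cos(x[1]) - math.sin(x[2])

def grad(f, x, h=1e-6):
    # Central-difference approximation of the partial derivatives,
    # playing the role of the listing's subroutine 4000.
    g = []
    for i in range(len(x)):
        xp, xm = list(x), list(x)
        xp[i] += h
        xm[i] -= h
        g.append((f(xp) - f(xm)) / (2.0 * h))
    return g

def steepest_ascent(f, x, step=1.0, tol=1e-9, max_iter=1000):
    # Simple steepest ascent: move along the normalized gradient;
    # on overshoot (objective drops), back up and halve the step,
    # roughly analogous to the listing's xk deceleration.
    y_prev = f(x)
    for n in range(1, max_iter + 1):
        g = grad(f, x)
        norm = math.sqrt(sum(gi * gi for gi in g))
        if norm == 0.0:
            break  # gradient vanished: precision/stationary point
        trial = [xi + step * gi / norm for xi, gi in zip(x, g)]
        y = f(trial)
        if y < y_prev:
            step *= 0.5          # heading the wrong way: decelerate
        else:
            if abs(y - y_prev) < tol:
                x, y_prev = trial, y
                break            # converged
            x, y_prev = trial, y
    return x, y_prev, n

x, y, n = steepest_ascent(f, [1.0, 1.0, 1.0])
```

Starting from (1, 1, 1) this converges to the same local maximum value of 4 as the sample run, though it may land on a different equivalent optimizer (the maxima of F repeat with period 2*pi in each coordinate).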