Syllabus
Unit I
Introduction to LPP: Lines and hyperplanes, Convex sets, Convex hull, Formulation of a Linear Programming Problem; Graphical Method; Simplex Method; Dual problem, Duality theory, Dual simplex method, Revised simplex method.
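To illustrate the graphical (corner-point) method from this unit, here is a minimal Python sketch for a hypothetical two-variable LP (the objective and constraints below are an illustrative example, not part of the syllabus): maximize 3x + 5y subject to x ≤ 4, 2y ≤ 12, 3x + 2y ≤ 18, x, y ≥ 0. It enumerates intersections of constraint boundary lines, keeps the feasible ones (the vertices of the convex feasible region), and picks the best vertex.

```python
from itertools import combinations

# Hypothetical example LP; each constraint is written as a*x + b*y <= c.
constraints = [
    (1, 0, 4),    # x <= 4
    (0, 2, 12),   # 2y <= 12
    (3, 2, 18),   # 3x + 2y <= 18
    (-1, 0, 0),   # -x <= 0  (x >= 0)
    (0, -1, 0),   # -y <= 0  (y >= 0)
]

def intersect(c1, c2):
    """Intersection of the two boundary lines a*x + b*y = c, if unique."""
    (a1, b1, r1), (a2, b2, r2) = c1, c2
    det = a1 * b2 - a2 * b1
    if abs(det) < 1e-12:
        return None   # parallel boundary lines
    return ((r1 * b2 - r2 * b1) / det, (a1 * r2 - a2 * r1) / det)

def feasible(pt):
    return all(a * pt[0] + b * pt[1] <= c + 1e-9 for a, b, c in constraints)

# Vertices of the feasible region = feasible pairwise boundary intersections.
vertices = [p for c1, c2 in combinations(constraints, 2)
            if (p := intersect(c1, c2)) is not None and feasible(p)]
best = max(vertices, key=lambda p: 3 * p[0] + 5 * p[1])
print(best, 3 * best[0] + 5 * best[1])   # (2.0, 6.0) with z = 36.0
```

The simplex method reaches the same optimum by walking along edges of the feasible region from vertex to vertex; this brute-force enumeration is only practical in two variables, which is exactly the setting of the graphical method.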
Unit II
Introduction to optimization: Classical optimization; Optimality criteria – necessary and sufficient conditions for the existence of an optimum point; Fundamental region-elimination rules. One-dimensional search methods: Golden section search method, Fibonacci method, Newton's method, Secant method, Remarks on line search methods.
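The golden section search named in this unit can be sketched in a few lines. The version below is a minimal illustration on an assumed test function f(x) = (x - 2)^2 over [0, 5]; it re-evaluates both interior points each iteration for clarity, whereas the classic formulation reuses one function value per step.

```python
import math

def golden_section(f, a, b, tol=1e-6):
    """Golden-section search for a minimum of a unimodal f on [a, b]."""
    inv_phi = (math.sqrt(5) - 1) / 2          # 1/phi ~ 0.618: interval reduction ratio
    c = b - inv_phi * (b - a)                 # left interior point
    d = a + inv_phi * (b - a)                 # right interior point
    while b - a > tol:
        if f(c) < f(d):                       # minimum lies in [a, d]
            b, d = d, c
            c = b - inv_phi * (b - a)
        else:                                 # minimum lies in [c, b]
            a, c = c, d
            d = a + inv_phi * (b - a)
    return (a + b) / 2

x_min = golden_section(lambda x: (x - 2.0) ** 2, 0.0, 5.0)
print(round(x_min, 4))   # ~ 2.0
```

Each iteration shrinks the bracketing interval by the fixed factor 0.618, which is the region-elimination idea the unit refers to: one function comparison discards a predictable fraction of the search interval.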
Unit III
Unconstrained Multivariable optimization: Introduction, Necessary and sufficient conditions for the existence of an extreme point, Conditions for local minimization. Direct search methods: Unidirectional search, Box's evolutionary search method.
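A unidirectional search reduces a multivariable problem to a one-dimensional one: minimize f along the line x + αd for a fixed point x and direction d. The sketch below uses crude grid sampling of α on an assumed example function (in practice, a bracketing step followed by a one-dimensional method such as golden section search is used instead).

```python
def unidirectional_search(f, x, d, alpha_max=10.0, steps=2000):
    """Minimize f along the line x + alpha*d by sampling alpha on a grid.
    A deliberately crude sketch of the idea, not a production line search."""
    best_alpha = min((k * alpha_max / steps for k in range(steps + 1)),
                     key=lambda a: f([xi + a * di for xi, di in zip(x, d)]))
    return [xi + best_alpha * di for xi, di in zip(x, d)]

# Assumed example: f(x, y) = (x-1)^2 + (y-3)^2, searched from (0, 0)
# along direction (1, 3); the line passes through the minimum (1, 3).
f = lambda p: (p[0] - 1) ** 2 + (p[1] - 3) ** 2
point = unidirectional_search(f, [0.0, 0.0], [1.0, 3.0])
print([round(v, 2) for v in point])   # [1.0, 3.0]
```

Direct search methods like this use only function values, never derivatives, which is what separates them from the gradient-based methods of the next unit.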
Unit IV
Gradient-based methods: Introduction, The method of steepest descent, Analysis of gradient methods, Convergence, Convergence rate; Analysis of Newton's method, Newton's method for nonlinear least squares. Conjugate direction methods: Introduction, The conjugate direction algorithm, The conjugate gradient algorithm for unconstrained optimization problems.
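The method of steepest descent iterates x_{k+1} = x_k - α∇f(x_k). A minimal sketch with a fixed step size α, on an assumed quadratic f(x, y) = (x-3)^2 + 2(y+1)^2 (exact line search or Wolfe-condition step sizes are what the unit's convergence analysis actually covers):

```python
def steepest_descent(grad, x, lr=0.1, tol=1e-8, max_iter=10000):
    """Fixed-step gradient descent: step opposite the gradient until
    the gradient norm falls below tol."""
    for _ in range(max_iter):
        g = grad(x)
        if sum(gi * gi for gi in g) ** 0.5 < tol:
            break
        x = [xi - lr * gi for xi, gi in zip(x, g)]
    return x

# Assumed example: gradient of (x-3)^2 + 2*(y+1)^2; minimum at (3, -1).
grad = lambda p: [2 * (p[0] - 3), 4 * (p[1] + 1)]
x_star = steepest_descent(grad, [0.0, 0.0])
print([round(v, 4) for v in x_star])   # ~ [3.0, -1.0]
```

On this quadratic the error contracts geometrically (by factors 0.8 and 0.6 per coordinate here), the linear convergence rate analyzed in this unit; conjugate gradient removes the zigzagging and terminates in at most n steps on an n-dimensional quadratic.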
Unit V
Nonlinear Equality Constrained Optimization: Introduction, Problems with equality constraints, Problem formulation, Lagrange multiplier method. Nonlinear Inequality Constrained Optimization: Problems with inequality constraints, Kuhn–Tucker conditions. Specific search algorithms: Hill Climbing, Simulated Annealing, Genetic Algorithms, Ant Colony Optimization.
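The Kuhn–Tucker conditions can be verified numerically at a candidate point. For the assumed example problem minimize x² + y² subject to x + y ≥ 1 (rewritten as g(x, y) = 1 - x - y ≤ 0), the candidate (0.5, 0.5) with multiplier μ = 1 satisfies all four conditions:

```python
# Candidate point and multiplier for: min x^2 + y^2  s.t.  g = 1 - x - y <= 0.
x, y, mu = 0.5, 0.5, 1.0

grad_f = (2 * x, 2 * y)          # gradient of the objective
grad_g = (-1.0, -1.0)            # gradient of the constraint g
# Stationarity: grad f + mu * grad g must vanish.
stationarity = (grad_f[0] + mu * grad_g[0], grad_f[1] + mu * grad_g[1])
g_val = 1 - x - y                # primal feasibility requires g_val <= 0

print(stationarity)              # (0.0, 0.0)
print(g_val <= 0, mu >= 0)       # primal and dual feasibility: True True
print(mu * g_val)                # complementary slackness: 0.0
```

Here the constraint is active (g = 0), so the multiplier may be positive; for an inactive inequality constraint, complementary slackness would instead force μ = 0.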
Objectives and Outcomes
CO1: To learn to formulate and solve linear programming problems.
CO2: To learn single-variable optimization techniques.
CO3: To understand the basics of unconstrained optimization problems, and direct search and unidirectional search methods for multivariable problems.
CO4: To learn the various unconstrained optimization techniques for multivariable problems.
CO5: To understand and solve nonlinear optimization problems with equality and inequality constraints, and to learn the theory of a few significant genetic and evolutionary algorithms.
CO-PO Mapping:
|     | PO1 | PO2 | PO3 | PO4 | PO5 | PO6 | PO7 | PO8 | PO9 | PO10 | PO11 | PO12 |
|-----|-----|-----|-----|-----|-----|-----|-----|-----|-----|------|------|------|
| CO1 | 3   | 3   | 3   | 2   | 2   | 2   |     |     |     |      | 1    | 1    |
| CO2 | 3   | 3   | 3   | 2   | 2   | 2   |     |     |     |      | 1    | 1    |
| CO3 | 3   | 3   | 3   | 2   | 3   | 2   |     |     |     |      | 1    | 1    |
| CO4 | 3   | 2   | 3   | 2   | 2   | 2   |     |     |     |      | 1    | 1    |
| CO5 | 2   | 2   | 2   | 1   | 2   | 1   |     |     |     |      | 1    | 1    |
Text Books / Reference Books:
- Edwin K. P. Chong and Stanislaw H. Zak, "An Introduction to Optimization", 2nd edition, Wiley, 2013.
- Mokhtar S. Bazaraa, Hanif D. Sherali, and C. M. Shetty, "Nonlinear Programming: Theory and Algorithms", 2nd edition, Wiley, 2004.
- Mohan C. Joshi and Kannan M. Moudgalya, "Optimization: Theory and Practice", Narosa Publishing House, New Delhi, 2004. (Reference)
- Kalyanmoy Deb, "Optimization for Engineering Design: Algorithms and Examples", Prentice Hall of India, New Delhi, 2004.
- S. S. Rao, "Optimization: Theory and Applications", 2nd edition, New Age International (P) Limited Publishers, 1995.
- Dimitris Bertsimas and John N. Tsitsiklis, "Introduction to Linear Optimization", Athena Scientific, Belmont, MA, 1997.