Econ 331 Lecture 1: Math Review Spring 2021
1. Functions of one variable: definition and examples
A function is a rule that specifies the relationship between two variables x and y: y = f(x)
Here x is called the independent variable and y is called the dependent variable.
Examples:
y = a + bx - linear function
y = ax^b - non-linear function
2. Slopes and derivatives
- slope of a function
slope = Δy/Δx, shows how the dependent variable changes when the independent variable is changed
- derivative of a function
The derivative is a generalization of the notion of the slope: it shows the rate of change of the dependent variable at a particular value of x. Formally,
f′(x) = lim_{Δx→0} [f(x + Δx) − f(x)] / Δx
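The limit definition above can be checked numerically: for small Δx, the difference quotient approximates f′(x). A minimal sketch (the sample function f(x) = x² and the point x = 3 are hypothetical choices for illustration):

```python
def numerical_derivative(f, x, dx=1e-6):
    """Approximate f'(x) by the difference quotient [f(x + dx) - f(x)] / dx."""
    return (f(x + dx) - f(x)) / dx

# f(x) = x^2, so f'(x) = 2x; at x = 3 the derivative is 6.
f = lambda x: x**2
approx = numerical_derivative(f, 3.0)
print(round(approx, 4))  # close to 6.0
```

Shrinking dx moves the quotient toward the true derivative, which is exactly what the limit in the definition expresses.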
3. Functions of several variables, partial derivatives
A function of two variables assigns a unique value of the dependent variable z to each pair of independent variables x and y:
z = f (x, y)
The partial derivative with respect to variable x, ∂f/∂x, is the derivative of a function of more than one variable which is obtained by keeping all other independent variables fixed. That is, it shows the rate of change of the dependent variable when only x changes.
Example: f(x, y) = 10x − x² + 20y − y²
∂f/∂x = 10 − 2x
∂f/∂y = 20 − 2y
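Partial derivatives can be approximated the same way as ordinary ones: vary one variable by a small step while holding the other fixed. A sketch using the example function above (the evaluation point (1, 2) is an arbitrary choice for illustration):

```python
def partial_x(f, x, y, h=1e-6):
    """Approximate df/dx by varying x while holding y fixed."""
    return (f(x + h, y) - f(x, y)) / h

def partial_y(f, x, y, h=1e-6):
    """Approximate df/dy by varying y while holding x fixed."""
    return (f(x, y + h) - f(x, y)) / h

# f(x, y) = 10x - x^2 + 20y - y^2
f = lambda x, y: 10*x - x**2 + 20*y - y**2
# At (1, 2): df/dx = 10 - 2*1 = 8 and df/dy = 20 - 2*2 = 16.
print(round(partial_x(f, 1.0, 2.0), 3), round(partial_y(f, 1.0, 2.0), 3))
```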
4. Total differential, implicit functions
The total differential of a function z = f(x, y) is defined as
dz = f_x dx + f_y dy, where
f_x = ∂f/∂x and f_y = ∂f/∂y
The total differential can be useful when dealing with implicit functions of the form f(x, y) = 0.
We will often encounter situations in which the relationship between two variables x and y can be stated only implicitly in the form f(x, y) = 0. Sometimes it might be possible to solve for y as a function of x, y = g(x), and work with this function.
However, often this is not possible. In this case we can still find dy/dx = g′(x) by using the implicit function theorem. In particular, note that taking the total differential of the implicit function f(x, y) = 0 we get
f_x dx + f_y dy = 0, or
dy/dx = −f_x / f_y,
which produces the derivative that we wanted to compute. Note that for this to work we must have f_y ≠ 0 in the neighbourhood of the point at which the derivative is computed.
Example: 2x² + y² = 225. Find dy/dx.
(1) Solve for y as a function of x and then find the derivative (note: remember that this will not work for more complicated functions).
y = √(225 − 2x²)
dy/dx = −2x / √(225 − 2x²) = −2x/y
(2) Another way is to use total differential:
4x dx + 2y dy = 0, which gives
dy/dx = −2x/y.
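Both routes should give the same slope, which is easy to confirm numerically: compare the formula −2x/y against a difference quotient of the explicit solution. A sketch (the sample point x = 5 is an arbitrary choice on the curve):

```python
import math

# Implicit curve: 2x^2 + y^2 = 225, upper branch y = sqrt(225 - 2x^2).
def y_of_x(x):
    return math.sqrt(225 - 2*x**2)

x = 5.0
y = y_of_x(x)                 # y = sqrt(175)
implicit = -2*x / y           # dy/dx from the total differential
h = 1e-6
numeric = (y_of_x(x + h) - y_of_x(x)) / h   # direct difference quotient
print(abs(implicit - numeric) < 1e-4)       # the two slopes agree
```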
5. Finding maximum and minimum of a function
To find the maximum and minimum of a function of one variable the following steps must be completed:
(i) Find the critical points of the function, i.e. all values x* such that f′(x*) = 0.
(ii) Find the second derivative of this function and evaluate it at x*:
If f″(x*) < 0 then x* is a (local) maximum of this function.
If f″(x*) > 0 then x* is a (local) minimum of this function.
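The two-step recipe can be illustrated on a simple quadratic (the function f(x) = 10x − x² is a hypothetical example chosen so the algebra is easy): f′(x) = 10 − 2x = 0 gives x* = 5, and f″(x) = −2 < 0 confirms a maximum.

```python
def f(x):
    return 10*x - x**2

# Step (i): f'(x) = 10 - 2x = 0 gives the critical point x* = 5.
x_star = 5.0
# Step (ii): f''(x) = -2 < 0, so x* is a maximum.
# Numerical sanity check: f at x* beats f at nearby points.
assert f(x_star) > f(x_star - 0.1) and f(x_star) > f(x_star + 0.1)
print(f(x_star))  # maximum value 25.0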
- Finding maximum and minimum of multivariate functions
Find the critical points of f(x1, x2), i.e. points (x1*, x2*) which must satisfy:
∂f(x1*, x2*)/∂x1 = 0
∂f(x1*, x2*)/∂x2 = 0
Check the second order conditions, with f11 = ∂²f/∂x1², f22 = ∂²f/∂x2² and f12 = ∂²f/∂x1∂x2 evaluated at the critical point:
(1) f11 < 0 and f22 < 0 for a maximum,
(2) f11 > 0 and f22 > 0 for a minimum,
(3) f11 · f22 − (f12)² > 0 for either a maximum or a minimum.
Example: f(x1, x2) = 10x1 − x1² + 20x2 − x2²
First order conditions:
∂f/∂x1 = 10 − 2x1* = 0
∂f/∂x2 = 20 − 2x2* = 0
Critical point: x1* = 5, x2* = 10
Second order conditions:
∂²f(x1*, x2*)/∂x1² = −2, ∂²f(x1*, x2*)/∂x2² = −2, ∂²f(x1*, x2*)/∂x1∂x2 = 0
Since −2 < 0, −2 < 0 and (−2)(−2) − 0² = 4 > 0, the point x1* = 5, x2* = 10 is the maximum of this function.
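The conclusion is easy to verify numerically: the function value at the critical point should exceed the value at any nearby point. A quick sketch for the example above:

```python
# f(x1, x2) = 10*x1 - x1^2 + 20*x2 - x2^2, critical point (5, 10).
def f(x1, x2):
    return 10*x1 - x1**2 + 20*x2 - x2**2

# FOCs hold: 10 - 2*5 = 0 and 20 - 2*10 = 0.
# SOCs hold: f11 = -2 < 0, f22 = -2 < 0, f11*f22 - f12^2 = 4 > 0.
best = f(5, 10)
# Check that every perturbed point gives a strictly smaller value.
for dx in (-0.5, 0.5):
    for dy in (-0.5, 0.5):
        assert f(5 + dx, 10 + dy) < best
print(best)  # maximum value 125.0
```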
6. Maximization subject to a constraint
6.1 Equality Constraints
- Maximization of a two-variable function subject to an equality constraint:
Maximize g(x1, x2) subject to h(x1, x2) = z
(1) Substitution Method
Solve (if possible) the constraint h(x1, x2) = z for x1 as a function of x2 and z: x1 = r(x2, z).
Then maximize g(r(x2 , z), x2 ) with respect to x2 .
Example:
Maximize g(x1, x2) = ln(x1·x2) subject to x1 + x2 = z
Using the fact that the constraint implies x1 = z − x2, we write this problem as
Maximize g(x2) = ln[(z − x2)·x2] = ln(z − x2) + ln(x2)
The first order condition for this problem is given by
g′(x2) = −1/(z − x2) + 1/x2 = 0
This implies that x2* = 0.5z. Using the constraint we obtain x1* = 0.5z. Taking the second derivative we get
g″(x2) = −1/(z − x2)² − 1/x2² < 0
Therefore the point x1* = x2* = 0.5z is the maximum of this function.
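A numerical check of the substitution result: on a grid of feasible values of x2, the objective ln[(z − x2)·x2] should peak at x2 = 0.5z. A sketch (the value z = 10 is an arbitrary choice for illustration):

```python
import math

z = 10.0
# Objective after substituting the constraint x1 = z - x2.
g = lambda x2: math.log((z - x2) * x2)

x_star = 0.5 * z
# Candidate points spread across the interior of (0, z).
grid = [0.1 * z + 0.08 * z * k for k in range(11)]
assert all(g(x_star) >= g(x) for x in grid)
print(round(g(x_star), 4))  # ln(25) = 3.2189 when z = 10
```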
(2) Lagrange’s Method
This involves reformulating the problem as that of unconstrained maximization and solving it. To do this we have to construct the following function
L(x1, x2, λ, z) = g(x1, x2) + λ[z − h(x1, x2)]
The first order conditions for maximization are then given by
(1) ∂L/∂x1 = ∂g/∂x1 − λ·∂h/∂x1 = 0
(2) ∂L/∂x2 = ∂g/∂x2 − λ·∂h/∂x2 = 0
(3) ∂L/∂λ = z − h(x1, x2) = 0
Next we solve simultaneously for x1*, x2* and λ*. The Lagrange multiplier λ* has a special interpretation in this problem: it measures the marginal gain in the maximand, L(x1, x2, λ, z), from relaxing the constraint by 1 unit.
Example:
Given a fencing perimeter of length p, how do we maximize the fenced area (provided that the area must have a rectangular shape)? This problem can be formulated as follows:
Maximize xy s.t. 2x + 2y = p
The Lagrangian function is
L(x, y, λ, p) = xy + λ[p − 2x − 2y]
FOCs are given by
(1) ∂L/∂x = y − 2λ = 0
(2) ∂L/∂y = x − 2λ = 0
(3) ∂L/∂λ = p − 2x − 2y = 0
From (1) and (2) we obtain
x/2 = y/2 = λ, or
x = y = 2λ
Then (3) implies that
p − 2(2λ) − 2(2λ) = 0, so 8λ = p ⇔ λ = p/8.
Finally,
x = y = 2λ = p/4.
Therefore the optimal shape of the fence is a square (x = y).
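The Lagrange result can be checked by brute force: for a fixed perimeter, sweep over rectangle widths and confirm the square encloses the most area. A sketch (the perimeter p = 20 is an arbitrary choice for illustration):

```python
p = 20.0

def area(x):
    """Area of a rectangle with width x under the constraint 2x + 2y = p."""
    y = p / 2 - x   # the constraint pins down the height
    return x * y

x_square = p / 4                                  # the Lagrange solution
candidates = [0.5 + 0.5 * k for k in range(19)]   # widths in (0, p/2)
assert all(area(x_square) >= area(x) for x in candidates)
print(area(x_square))  # 25.0 when p = 20
```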