In a previous module we derived two different objective functions for the purpose of solving the flash equilibrium problem. Let us take a closer look at these equations:
$$\sum_{i=1}^{n} y_i - 1 = \sum_{i=1}^{n} \frac{z_i K_i}{\alpha_g (K_i - 1) + 1} - 1 = 0 \qquad (13.1a)$$
$$\sum_{i=1}^{n} x_i - 1 = \sum_{i=1}^{n} \frac{z_i}{\alpha_g (K_i - 1) + 1} - 1 = 0 \qquad (13.1b)$$
These equations arise from simple mole balances and the concept of equilibrium ratios. As we discussed before, if we are given the overall compositions $\{z_i;\ i = 1, 2, \ldots, n\}$ and we are somehow able to obtain all the equilibrium ratios $\{K_i;\ i = 1, 2, \ldots, n\}$, the only unknown left in these objective functions is the vapor fraction $\alpha_g$.
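Although the original derivation is purely analytical, a minimal Python sketch may help make these two functions concrete. Everything below is illustrative: the names F_y and F_x and the sample z and K values are assumptions chosen only for demonstration.

```python
import numpy as np

def F_y(alpha_g, z, K):
    """Objective function (13.1a): sum of the vapor mole fractions minus one."""
    return np.sum(z * K / (alpha_g * (K - 1.0) + 1.0)) - 1.0

def F_x(alpha_g, z, K):
    """Objective function (13.1b): sum of the liquid mole fractions minus one."""
    return np.sum(z / (alpha_g * (K - 1.0) + 1.0)) - 1.0

# Hypothetical three-component mixture (illustrative numbers only)
z = np.array([0.5, 0.3, 0.2])   # overall compositions z_i
K = np.array([2.5, 1.1, 0.3])   # equilibrium ratios K_i
print(F_y(0.5, z, K), F_x(0.5, z, K))   # both vanish only at the correct alpha_g
```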
Once we are able to solve for this $\alpha_g$, we will have no problem applying any combination of equations (12.5), (12.7), and (12.11) to find all the vapor and liquid compositions at equilibrium, $\{y_i, x_i\}$:
$$x_i = \frac{z_i}{\alpha_g (K_i - 1) + 1} \qquad (12.5)$$
$$y_i = \frac{z_i K_i}{\alpha_g (K_i - 1) + 1} \qquad (12.7)$$
$$y_i = K_i\, x_i \qquad (12.11)$$
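As a hedged illustration (the helper name flash_compositions and the sample numbers are assumptions, not part of the original text), equations (12.5), (12.7), and (12.11) translate directly into code once $\alpha_g$ is available:

```python
import numpy as np

def flash_compositions(alpha_g, z, K):
    """Equilibrium compositions from (12.5) and (12.7) for a given vapor fraction."""
    denom = alpha_g * (K - 1.0) + 1.0
    x = z / denom          # equation (12.5)
    y = z * K / denom      # equation (12.7); equivalently y_i = K_i * x_i, equation (12.11)
    return x, y

z = np.array([0.5, 0.3, 0.2])   # hypothetical overall composition
K = np.array([2.5, 1.1, 0.3])   # hypothetical equilibrium ratios
x, y = flash_compositions(0.45, z, K)   # 0.45 is a placeholder; x and y each sum to one
                                        # only when alpha_g solves the objective function
```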
With all this information, the VLE problem would be completely solved. With the compositional information and the use of suitable equations of state and correlations, we can then find all other related properties, such as densities, viscosities, molecular weights, etc. This is why we call these the objective functions: once we have solved them, we have achieved the objective of the VLE calculation.
Of course, one question remains unanswered: how do you solve for that $\alpha_g$, which is buried inside those expressions? And secondly, which of the two objective functions available to us shall we use? It turns out that the answers to these questions are tied to one another. In fact, the proper objective function to use is precisely the one that simplifies the process of solving for that unknown.
To come up with these answers, the first thing we have to notice is that both expressions are non-linear in $\alpha_g$. This means that we cannot express $\alpha_g$ explicitly as a function of the other variables. What do we use to solve equations that are non-linear in one variable? Doesn't this ring a bell? We apply iterative techniques, and the classical iterative technique is the Newton-Raphson procedure. Now we can provide an answer to both of the questions that we just asked.
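For readers who want to see the iterative idea spelled out, here is a bare-bones sketch of the Newton-Raphson procedure for a single non-linear equation $f(x) = 0$; the tolerance and iteration limit are arbitrary choices made only for illustration:

```python
def newton_raphson(f, dfdx, x0, tol=1e-10, max_iter=50):
    """Iterate x_{k+1} = x_k - f(x_k) / f'(x_k) until |f(x)| falls below tol."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            return x
        x = x - fx / dfdx(x)
    raise RuntimeError("Newton-Raphson did not converge")
```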
A distinctive characteristic of any Newton-Raphson procedure is that its success depends greatly on the choice of the initial guess for the variable of interest. In fact, it is very commonly said that for Newton-Raphson to succeed, the initial guess must be as close as possible to the actual solution. This 'illness' becomes worse when dealing with non-monotonic functions. In a monotonic function, the derivative has the same sign at every point: the function either increases or decreases throughout the whole domain. For Newton-Raphson, this means that there are neither valleys nor peaks that could lead the procedure to false solutions. If you apply Newton-Raphson to a function that is both monotonic and continuous at every point of the domain, it does not matter where you start: you will always converge to the unique solution. It might take time, but you will get there.
Why does this matter when dealing with equations (13.1)? The fact of the matter is that equations (13.1) are not monotonic, and this does not make things easier. If, as an exercise, you plot them as functions of $\alpha_g$ or take their derivatives, you will realize that both functions may change the sign of their first derivative for different values of $\alpha_g$.
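One quick way to convince yourself of this, a numerical exploration rather than a proof, is to evaluate the first derivatives of (13.1a) and (13.1b) over a grid of $\alpha_g$ values for some made-up mixture and look for sign changes. The derivative expressions and the sample numbers below are my own, included only for illustration:

```python
import numpy as np

def dF_y(alpha_g, z, K):
    """First derivative of (13.1a) with respect to alpha_g."""
    return -np.sum(z * K * (K - 1.0) / (alpha_g * (K - 1.0) + 1.0) ** 2)

def dF_x(alpha_g, z, K):
    """First derivative of (13.1b) with respect to alpha_g."""
    return -np.sum(z * (K - 1.0) / (alpha_g * (K - 1.0) + 1.0) ** 2)

z = np.array([0.5, 0.3, 0.2])   # hypothetical mixture
K = np.array([2.5, 1.1, 0.3])
for a in np.linspace(0.01, 0.99, 9):
    print(f"alpha_g = {a:.2f}   dF_y = {dF_y(a, z, K):+.4f}   dF_x = {dF_x(a, z, K):+.4f}")
```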
This poses a problem, obviously. You will not get a unique solution by applying Newton-Raphson, and you might end up with the wrong solution. Rachford and Rice (1952) recognized this problem and came up with a suggestion. They proposed a new objective function, based on equations (13.1), which simplifies the application of the Newton-Raphson procedure.
They combined equations (13.1) by subtraction to yield:
$$\sum_{i=1}^{n} \frac{z_i K_i}{\alpha_g (K_i - 1) + 1} - \sum_{i=1}^{n} \frac{z_i}{\alpha_g (K_i - 1) + 1} = 0 \qquad (13.2)$$
Hence, the new objective function becomes:
$$F(\alpha_g) = \sum_{i=1}^{n} \frac{z_i (K_i - 1)}{\alpha_g (K_i - 1) + 1} = 0 \qquad (13.3)$$
Equation (13.3) is known in the literature as the Rachford-Rice objective function. Rachford and Rice combined two very well-known objective functions into a single one. Are there advantages to this "new" approach?
The wonderful news is that equation (13.3) is monotonic. The implication of this is that equation (13.3) is better suited for Newton-Raphson application than equations (13.1). How do you demonstrate the monotonic character of the Rachford and Rice objective function? To do this, we take the first derivative of the function:
$$\frac{\partial F}{\partial \alpha_g} = -\sum_{i=1}^{n} \frac{z_i (K_i - 1)^2}{\left[\alpha_g (K_i - 1) + 1\right]^2} \qquad (13.4)$$
Every term within the summation is positive: this is guaranteed by the square in the numerator, the square in the denominator, and the fact that all overall compositions $z_i$ are positive. Because of the leading negative sign, the derivative expression in (13.4) has no choice but to be negative, and the Rachford-Rice objective function is proven to be a monotonically decreasing function of $\alpha_g$. With this approach, Rachford and Rice removed a major headache from the vapor-liquid equilibrium problem.
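In code, the Rachford-Rice function (13.3) and its derivative (13.4) are one-liners; the sketch below (the names are my own, for illustration) is the pair that a Newton-Raphson update for $\alpha_g$ would be built on:

```python
import numpy as np

def rachford_rice(alpha_g, z, K):
    """Rachford-Rice objective function, equation (13.3)."""
    return np.sum(z * (K - 1.0) / (alpha_g * (K - 1.0) + 1.0))

def rachford_rice_prime(alpha_g, z, K):
    """Derivative of (13.3) with respect to alpha_g, equation (13.4); always negative."""
    return -np.sum(z * (K - 1.0) ** 2 / (alpha_g * (K - 1.0) + 1.0) ** 2)
```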
A remaining weakness of the Rachford-Rice objective function is that, although monotonic, it is not continuous at every point of the domain. By inspection, you can see that the function has $n$ singularities (as many singularities as components in the mixture), because it becomes singular at the values of $\alpha_g$ given by:
$$\alpha_g = \frac{1}{1 - K_i}, \qquad i = 1, 2, \ldots, n \qquad (13.5)$$
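These asymptotes are trivial to compute; here is a small sketch with a hypothetical K set, included only for illustration:

```python
import numpy as np

K = np.array([2.5, 1.1, 0.3])      # hypothetical equilibrium ratios
singularities = 1.0 / (1.0 - K)    # equation (13.5), one asymptote per component
print(sorted(singularities))       # values below 0 (for K_i > 1) and above 1 (for K_i < 1)
```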
Hence, you may still face convergence problems if the procedure crosses any of these singularities. If, by any means, one is able to keep the Newton-Raphson procedure within the range of $\alpha_g$ where a physically meaningful solution is possible (between the two asymptotes that bracket the interval $0 < \alpha_g < 1$), the monotonically decreasing character of the Rachford-Rice equation guarantees convergence.
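A minimal sketch of such a safeguarded procedure is shown below. It assumes the mixture has at least one $K_i$ above and one below unity (so that both bounding asymptotes exist) and combines the Newton-Raphson step with a bisection fallback that keeps the iterate inside the window; this is just one possible safeguard, not the original Rachford and Rice prescription.

```python
import numpy as np

def solve_rachford_rice(z, K, tol=1e-12, max_iter=100):
    """Newton-Raphson on the Rachford-Rice function (13.3), kept between the two
    asymptotes of (13.5) that bracket the physical interval 0 < alpha_g < 1."""
    z = np.asarray(z, dtype=float)
    K = np.asarray(K, dtype=float)

    def f(a):       # equation (13.3)
        return np.sum(z * (K - 1.0) / (a * (K - 1.0) + 1.0))

    def fprime(a):  # equation (13.4), always negative
        return -np.sum(z * (K - 1.0) ** 2 / (a * (K - 1.0) + 1.0) ** 2)

    # Window between the asymptotes alpha_g = 1/(1 - K_i) surrounding 0 < alpha_g < 1:
    # the largest negative one comes from K_max (> 1), the smallest one above 1 from K_min (< 1).
    eps = 1e-12
    lo = 1.0 / (1.0 - K.max()) + eps if K.max() > 1.0 else 0.0
    hi = 1.0 / (1.0 - K.min()) - eps if K.min() < 1.0 else 1.0

    a = 0.5 * (lo + hi)                 # initial guess inside the window
    for _ in range(max_iter):
        fa = f(a)
        if abs(fa) < tol:
            return a
        if fa > 0.0:                    # f is decreasing, so the root lies to the right
            lo = a
        else:                           # ...or to the left
            hi = a
        step = a - fa / fprime(a)       # plain Newton-Raphson step
        a = step if lo < step < hi else 0.5 * (lo + hi)   # bisect if the step leaves the window
    raise RuntimeError("Rachford-Rice solver did not converge")

# Hypothetical mixture, for illustration only
z = [0.5, 0.3, 0.2]
K = [2.5, 1.1, 0.3]
alpha_g = solve_rachford_rice(z, K)
```

Because (13.3) is monotonically decreasing and continuous inside that window, the bracket maintained by the loop always contains the single root there, so the bisection fallback cannot fail even when a Newton step overshoots toward a singularity.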