
Automatic optimisation in Recruitment Software for Temporary Staff – some simple facts.

Posted Monday, July 20, 2009 by Ian Pettman

Automatic Optimisation of temporary staff in a limited environment - some simple facts.

This is a non-mathematical view of recruitment software for temporary staff which seeks to optimise cost for the end user.

Optimising people is not the same as optimising machines.

 

This was written by Ian Pettman, who has a degree in Physics from Oxford University. It was written after a number of lengthy discussions with the following:


Dr Peter Kelen of Power Optimisation. Power Optimisation specialises in providing routines which schedule power station downtime and working cycles for major UK generating companies, minimising the cost of generating electricity.
https://www.powerop.co.uk
Dr David Nelson, whose doctoral thesis was on the optimisation of nurse scheduling within a Hospital Trust in New Zealand. https://researchspace.auckland.ac.nz/bitstream/2292/332/9/02whole.pdf
Dr Barry Stoker, who is heavily involved in analytical, modelling, business and research consultancy, including employment optimisation for large commercial groups.
https://www.jigsaw-consultants.co.uk/aboutus.html

 

Overview


There are three sections to this document: Mathematical Factors, Human Factors and Other Factors. In spite of its name, the Mathematical Factors section is largely descriptive of the issues rather than mathematical: the only maths required is at the 5x4 level, and we even give the answer! Please give it a read; aspirin is not required.

 

Mathematical Factors

 

There are some straightforward rules when considering how difficult optimisation is to perform. Optimisation of any sort rapidly becomes complex as the number of items being optimised grows. It is really just simple multiplication.


When it comes to choosing non-interchangeable people or objects, if I am choosing one of five I have five choices. If I am choosing two of five, I have five choices for the first item but only four (remaining) choices for the second: a total of 5x4, or 20 choices. If I am choosing two from ten items, I have 10 choices for the first and nine for the second: a total of 10x9, or 90 choices. Only twice the number of items to choose from, but over four times the range of choices. It rapidly gets worse as the numbers increase. On an average day, when 200 temp staff are booked spread over 7 or 8 categories, with around a quarter of them HCAs (50), the total number of possibilities is simply huge: 50x49x48x47x...7x6x5. In fact, as numbers go, this is a big one by anyone's standards.
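To make the multiplication concrete, here is a minimal Python sketch (the function name is ours, purely for illustration, not part of any scheduling product) counting the ordered choices described above:

    import math

    def ordered_choices(n, k):
        # Number of ways to fill k vacancies, in order, from n
        # non-interchangeable people: n x (n-1) x ... x (n-k+1).
        return math.perm(n, k)  # available in Python 3.8+

    print(ordered_choices(5, 2))    # 5 x 4  = 20
    print(ordered_choices(10, 2))   # 10 x 9 = 90
    print(ordered_choices(50, 50))  # 50! -- roughly 3 x 10^64, 'simply huge'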

 

Calculating every possibility will not just take forever; it will take many lifetimes. Some drastic short cuts are needed for a computer program to make even a simple first choice. Basically, we need to guess what might be a good choice to start with. If we don't take a short cut, the computer will not finish the first day's shifts until after we are dead and gone (it is actually a lot longer than that!). Of course, a staff bank administrator faced with such a task simply makes what they consider a good first choice for the first vacancy, fills it, then goes on to the next. The nurse bank administrator can also play an ace when they get to the end: they can phone someone who previously said they were unavailable and persuade them to work, not only filling shifts more quickly but filling more in the end.
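As a rough illustration of that short cut, here is a minimal greedy sketch in Python. The toy (name, grade, rate) records and the cheapest-first rule are our own assumptions, not how any particular product works:

    def greedy_fill(vacancies, staff):
        # Fill each vacancy in turn with the cheapest qualified person
        # still available, mimicking an administrator's one-pass approach.
        available = list(staff)            # (name, grade, hourly_rate)
        assignments = {}
        for shift_id, needed_grade in vacancies:
            candidates = [p for p in available if p[1] == needed_grade]
            if not candidates:
                continue  # unfilled; a human might phone an 'unavailable' ace
            cheapest = min(candidates, key=lambda p: p[2])
            assignments[shift_id] = cheapest[0]
            available.remove(cheapest)
        return assignments

    staff = [("Ann", "HCA", 9.0), ("Bob", "HCA", 11.0), ("Cal", "RGN", 18.0)]
    vacancies = [(1, "HCA"), (2, "RGN"), (3, "HCA"), (4, "HCA")]
    print(greedy_fill(vacancies, staff))
    # {1: 'Ann', 2: 'Cal', 3: 'Bob'} -- no one is left for shift 4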

 

Actually, for the computer, things are a lot worse when it comes to optimising for cost in an NHS environment. Generally, optimisation programs work by making a first guess, then swapping a couple of people or objects around. If things get better, keep the swap. If not, swap back and try an entirely different pair. This works well when each object is slightly different. Skip the following example if this already sounds reasonable.

(Example: say we have a bunch of sticks, all of different lengths, being put into storage boxes of different sizes, with a maximum of one stick per box. The problem: end up with the fewest sticks left over which won't fit in any box. Suppose we have an empty box, but the stick left over is too long for it, while a long box holds a short stick. Solution: swap the sticks, putting the short stick in the short box and the long stick in the long box. Keep doing this until we have the smallest number of sticks left over and a swap no longer changes anything (all the longest sticks are left over). Now we think we have the best solution, because we could see things getting better. However, in the case of nurses being scheduled for lowest cost, HCA1 is on the same pay as HCA2. At first sight this does not seem to make a huge difference, but when we swap HCA1 and HCA2 there is no change in cost. The optimisation program sees that things aren't changing, and because this is what it tests for to decide whether it has arrived at a good solution, it concludes it has found the best one. Wrong! This is known as the (double) valley problem. It is especially severe when optimising for overall cost while large numbers of people have the same individual cost.)
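The swap-and-test loop, and the way identical pay defeats it, can be sketched in a few lines of Python. The shift lengths and rates below are illustrative assumptions of our own:

    import random

    HOURS = [12, 8, 6, 4]                  # four shifts of different lengths

    def cost(assign, rate):
        # Total cost when person assign[i] works shift i.
        return sum(rate[p] * h for p, h in zip(assign, HOURS))

    def swap_search(assign, rate, tries=200):
        # Keep a pairwise swap only when it strictly lowers total cost.
        best = list(assign)
        for _ in range(tries):
            i, j = random.sample(range(len(best)), 2)
            trial = list(best)
            trial[i], trial[j] = trial[j], trial[i]
            if cost(trial, rate) < cost(best, rate):
                best = trial
        return best

    # Distinct rates: swaps steer cheap staff onto the long shifts.
    print(swap_search(["D", "C", "B", "A"], {"A": 8, "B": 10, "C": 12, "D": 14}))

    # Identical rates (HCA1 paid the same as HCA2): no swap ever changes
    # the cost, the improvement test never fires, and the first guess is
    # declared 'best'. The program has no signal left to steer by.
    print(swap_search(["D", "C", "B", "A"], {"A": 10, "B": 10, "C": 10, "D": 10}))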

To sum up: if there are lots of identical values (people on the same grade), optimisation is hard.

 

Human Factors


Because scheduling programs are by definition inhuman, they will lack the touch of a good staff bank administrator. In the medium and long term, in the nature of things, an automatic program will generate higher degrees of disaffection than good human interaction and leadership, and the inevitable consequence will be lower levels of availability. Because of this, the efficiency of staff optimisation needs to be measured over an extended period, not just over the initial blip of implementation. Results also need to be reported for ALL implementations.

 

We have all heard the story of the emperor's new clothes. So, by definition, software that costs £100,000 has to be better than software that costs less than £10,000? The only problem is that if you analyse the level of success against project cost across all IT projects, there is an extremely high correlation between the cost of a project and the chance of it NOT producing the desired results. In very simple terms, though, it is a lot easier for decision makers to write a large cheque and go home satisfied that they have set a big project in motion than to tune a small project for improved results, even though evolution may be the more certain route. Unfortunately, this is just human nature.

 

By definition, salespeople choose the most successful implementations as their references. When the improvements in question are statistical, this is the same as flipping a coin many times and choosing the best sequence of heads and tails to prove that their software makes heads come up 75% of the time.
When writing a cheque for £50,000, expensive software must absolutely guarantee a saving of double this to cover the high cost of ownership. Otherwise you might as well spin a coin to see whether you will save money.
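The coin analogy is easy to simulate. In the Python sketch below (all numbers illustrative), twenty hypothetical 'implementations' each flip a fair coin ten times, and only the best run gets quoted:

    import random

    runs = [[random.random() < 0.5 for _ in range(10)] for _ in range(20)]
    rates = [sum(run) / len(run) for run in runs]

    print(f"average across all runs: {sum(rates) / len(rates):.0%}")  # about 50%
    print(f"the run a reference would quote: {max(rates):.0%}")       # typically 70%+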

 

So is there a case for optimisation software? Surprisingly, considering all that has gone before, the answer is a clear yes. A number of household names have implemented such projects with significant payback. However, when you analyse those scenarios, you invariably find that the starting point was a chaotic "system" with multiple large departments operating their own individual policies: some unconcerned about overall cost, just making sure bodies were available so targets were met; others exactly the opposite, minimising personnel costs but missing deadlines.

 

Other Factors

 

People are not machines. One "method" of proving optimisation cost-effective is to take a historical set of data, optimise it, and show a net saving. Unfortunately, this is not a real-world scenario, for two reasons. One: requests come in over extended periods. Optimise once (to give people reasonable notice) and all later requests will be ignored, limiting the range and effectiveness of the process. Inevitably, a staff bank will resource less costly staff first, so the expensive staff scheduled later (where savings can be made) will not be optimised. OK, so optimise twice before shift patterns are fixed. Now the very nature of the process is that staff will be moved around: some cancelled at short notice, others booked who may at short notice be unable to fill the shift and then have to be replaced by more expensive agency staff. This will inevitably be an unfeeling process and lead to dissatisfaction. Unlike electricity power generation, where optimisation can be a rolling process because machines do not get in a huff, people are by nature homeostatic: they like a degree of organisation.

 

Summary


What is achievable for a reasonably efficient temporary staff bank office employing between 2 and 30 consultants to allocate and manage temporary staff, with a budget of £50,000 to £100,000 per annum for software?
Ava can save you £40,000 to £90,000, guaranteed.

If you found this information useful, please share it!

