This approach is good; I have used it myself on many projects. Going one step further with the idea can greatly improve the quality of your estimates. The Wide-Band Delphi approach takes estimates from multiple experts and combines them, typically with a weighted average or a similar technique, to reach a "consensus" estimate. This admittedly takes more time and effort to produce the estimate, but the team can gain a great deal of confidence by practicing this approach.
In addition, historical metrics can provide constraints on this approach. Software project estimates are typically optimistic, so instead of padding estimates with a static fudge factor, try leveraging actual performance data from completed projects. Lines of code, effort, and duration from a few similar projects will open the team's eyes to what has been done in the past and, therefore, to what may happen in the future.
Of course, your mileage may vary, as the saying goes, so proceed with caution.
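To make the Wide-Band Delphi idea above concrete, here is a minimal sketch of combining several experts' estimates with a weighted average. The figures and weights are invented for illustration; real Wide-Band Delphi also involves discussion rounds between estimation passes, which this sketch does not model.

```python
def consensus_estimate(estimates, weights=None):
    """Combine per-expert estimates (e.g. person-days) into one figure.

    If no weights are given, every expert counts equally.
    """
    if weights is None:
        weights = [1.0] * len(estimates)
    total_weight = sum(weights)
    return sum(e * w for e, w in zip(estimates, weights)) / total_weight

# Three experts estimate a testing task; the lead's opinion is
# (hypothetically) weighted a little higher than the other two.
estimates = [12.0, 18.0, 15.0]
weights = [1.5, 1.0, 1.0]
print(round(consensus_estimate(estimates, weights), 1))
```

If the individual estimates diverge widely, that spread itself is useful information: it usually means the experts understand the task differently, which is exactly what the discussion rounds are meant to surface.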
First: is this a known application? That is, do you have previous expertise in testing it, or has your organisation at some time done similar testing on it? If there is expertise or a knowledge bank to help with your estimates, "Expert Estimation" and "Wide-Band Delphi" are very helpful. You can also use the "Work Breakdown" method to produce a detailed estimate of everything you need.
If, however, this is the first time you are testing this application, function-based analysis (Function Point Analysis) is the preferred method and will give you a close-to-accurate estimate.
In my company we collect metrics and classify the work. When we need to estimate, we look at those metrics.
If the work is being done for the first time, I use the metrics of earlier pieces of work that were also done for the first time.
If the work has been done before, I look for a related metric.
If an estimate is needed and there is no record for that type of work, I fall back on the first-time metrics or an average of all records.
How do I compare the workload of the new work against the old work? It depends on what I have. If I have only requirements, I use requirements to evaluate the metrics and make the estimate; if I have functionalities, I use functionalities, and so on.
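The lookup-with-fallback process described above can be sketched roughly as follows. The historical records and effort figures here are invented for illustration; a real metrics database would of course be larger and richer.

```python
# Hypothetical historical records: work type, whether it was a
# first-time effort, and the effort it actually took (in days).
HISTORY = [
    {"type": "regression", "first_time": False, "effort_days": 10},
    {"type": "regression", "first_time": False, "effort_days": 12},
    {"type": "performance", "first_time": True, "effort_days": 20},
    {"type": "security", "first_time": True, "effort_days": 16},
]

def estimate(work_type, first_time):
    # 1. Prefer records of the same type of work.
    same = [r["effort_days"] for r in HISTORY if r["type"] == work_type]
    if same:
        return sum(same) / len(same)
    # 2. New kind of work: use records that were also first-time efforts.
    if first_time:
        firsts = [r["effort_days"] for r in HISTORY if r["first_time"]]
        if firsts:
            return sum(firsts) / len(firsts)
    # 3. Last resort: the average of all records.
    return sum(r["effort_days"] for r in HISTORY) / len(HISTORY)

print(estimate("regression", first_time=False))  # related records exist
print(estimate("load", first_time=True))         # falls back to first-time average
```

The point of the sketch is the order of the fallbacks: a directly related metric beats a first-time metric, which beats a blanket average.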
In using FPA (Function Point Analysis), you calculate based on the functions you have to test in the application: break them down into manageable chunks and then estimate.
FPA calculation needs a careful understanding of Transactional and Data Functions; for more details on the FPA calculation, IFPUG.org will be able to guide you.
First, define your:
Data Functions - these contain your Internal Logical Files and any External Interface Files.
Transactional Functions - these contain your External Inputs, External Outputs, and External Inquiries.
Once you do this, you rate the complexity of each function as Low, Average, or High and apply the corresponding weights. This gives you the "Unadjusted Function Points". After you have derived this, there is a Value Adjustment Factor, based on 14 standard questions (the General System Characteristics), each of which you rate for your application.
Once this is done, the Value Adjustment Factor moves the result by up to +/- 35%, and the final figure is called Adjusted Function Points.
I could go into the details, but it is time-consuming, so I advise looking at ifpug.org or searching for FPA on Google; some of the pages will give you exactly what I have mentioned in this post.
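The steps above can be sketched in code. The complexity weights and the VAF formula (0.65 plus 0.01 per GSC point, giving the +/- 35% swing mentioned above) follow the standard IFPUG method; the component counts and GSC ratings below are invented purely for illustration.

```python
# Standard IFPUG complexity weights per component type.
WEIGHTS = {
    "EI":  {"low": 3, "average": 4,  "high": 6},   # External Inputs
    "EO":  {"low": 4, "average": 5,  "high": 7},   # External Outputs
    "EQ":  {"low": 3, "average": 4,  "high": 6},   # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7,  "high": 10},  # External Interface Files
}

def unadjusted_fp(counts):
    """counts: {component type: {complexity: number of functions}}."""
    return sum(WEIGHTS[comp][cx] * n
               for comp, by_cx in counts.items()
               for cx, n in by_cx.items())

def value_adjustment_factor(gsc_ratings):
    """gsc_ratings: the 14 General System Characteristics, each rated 0-5.

    VAF therefore ranges from 0.65 to 1.35 - the +/- 35% swing.
    """
    assert len(gsc_ratings) == 14
    return 0.65 + 0.01 * sum(gsc_ratings)

# Hypothetical application: counts and ratings made up for the example.
counts = {
    "EI":  {"low": 3, "average": 2},
    "EO":  {"average": 4},
    "EQ":  {"low": 2},
    "ILF": {"average": 1},
    "EIF": {"low": 1},
}
ufp = unadjusted_fp(counts)
vaf = value_adjustment_factor([3] * 14)  # middling ratings across the board
print(ufp, round(ufp * vaf, 1))          # adjusted function points
```

The adjusted function point total is then mapped to effort using your own historical productivity figures (e.g. hours per function point), which ties this answer back to the metrics-based answers above.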
What are the roles your organisation has defined for the PL or TL in question?
If TL means Test Lead, then the TL will do it; if it means Team Lead for the testing team, he will do it; otherwise the PL will do it. It all depends on the demographics of your organisation, the project, and stakeholder inputs.