14 August 2025

Questions and answers from the Ideas Grants 2025 peer reviewer webinar. Recorded 22 July 2025.


Speakers

  • Dr Julie Glover, Executive Director, Ideas Grants, Research Foundations, NHMRC
  • Dr Dev Sinha, Director, Ideas Grants, Research Foundations, NHMRC
  • Katie Hotchkis, Assistant Director, Ideas Grants, Research Foundations, NHMRC
  • Professor Stuart Berzins, Ideas Grants Peer Review Mentor
  • Dr Anna Trigos, Ideas Grants Peer Review Mentor
  • Dr Jimmy Breen, Ideas Grants Peer Review Mentor

Questions answered during Q&A session

Question 1a: Preliminary data

Question: While preliminary data is not explicitly required by the guidelines, it is widely acknowledged that including such data can enhance the likelihood of success. Presenting preliminary findings offers younger researchers an opportunity to demonstrate their capability. What approach should reviewers take when assessing this aspect?

Dr Dev Sinha: Preliminary data is not a requirement for the Ideas Grants scheme, but if present, it can be considered a valid part of the proposal. The guidelines emphasise that applications should not be penalised for the absence of preliminary data. Applicants should demonstrate their capability to do the work through various means, which may include preliminary data; however, if they successfully make their case without it, they should not be marked down. Most applications do include preliminary data, as it is a good way to demonstrate capability, but its absence should not negatively impact the assessment.

Question 1b: Preliminary data vs innovation

Question: If the application has very strong preliminary data to support feasibility, along with publications from the group, does that negatively impact the score for innovation?

Dr Jimmy Breen: The innovation and translational potential of a project are crucial factors in evaluation. Even if the technique itself is not innovative, its application to a new area can be significant. Assessors should not mark down a project solely because the technique has been used before, as long as the project demonstrates innovation and potential impact in its specific context.

Question 2a: Removing inadequate comments

Question: Despite everybody’s best efforts, a lot of inadequate reviews slip through the system and can cause a grant to be unsuccessful. Is there a way of getting rid of those?

Question 2b: Removing outlier scores

Question: What are the rules for scores that are very disparate from other scores on the same application? When we had in-person panels, the rule was that a score more than 2 points away from the spokesperson's required some explanation. Is this still the case now (more than 2 points)?

Dr Dev Sinha: The peer review process for Ideas Grants and other schemes places a strong emphasis on transparency, accountability, and quality assurance. The introduction of the comment sharing process is a key measure to achieve these goals. By allowing peers to see and engage with the comments, the process becomes more transparent and accountable. Additionally, we have implemented several internal quality assurance processes and checks to ensure fair assessments. One of these is our outlier scoring check, where we look for scores that differ markedly from the other scores on the same application. Outlier scores are not necessarily a bad thing, but we do contact peer reviewers to confirm the score is correct where the occasion calls for it.

NHMRC is committed to enhancing transparency and accountability in the peer review process. Ongoing efforts aim to enforce these processes more effectively. Peer reviewers are encouraged to provide constructive feedback and support processes that ensure fair and accurate scoring. It's important to note that comments may not always fully reflect the scores given, as peer reviewers often aim to provide feedback that helps applicants improve their applications. In rare cases, inappropriate assessments are removed based on evidence and feedback from other peer reviewers.

A cultural change within the peer review community is also necessary to address inappropriate comments and promote a more collegial and transparent environment. By calling out inappropriate comments and fostering a culture of accountability, the peer review process can be improved for everyone involved.

Question 3: Budget review

Question: Budget is not part of the assessment criteria, but recent communication with colleagues suggests that peer reviewers tend to be unconsciously biased towards subjective budget limits. How can this be objectively prevented during assessment?

Dr Anna Trigos: The budget should be justified by ensuring it aligns with the scope and magnitude of the proposal. Generally, while reading the proposal, you get an idea of the project's scope and expected costs. It's important to ensure that all items in the budget are consistent with what is outlined in the proposal. Some technologies are inherently expensive, and this should be taken into account. Applicants need to balance costs with sample sizes to avoid criticism for either being too costly or having insufficient sample sizes.

In Australia, research costs, including salaries, are relatively high, which can add up to significant budgets for some projects. If assessors consider that the budget items are well justified, match the timeline, and align with the project's goals, no budget comments are needed.

Question 4: Grant review panels

Question: Are you considering bringing panels back for increased integrity, accountability and transparency? Currently, one hostile or lazy reviewer can sink an application. One of the criticisms of the Ideas Grants review process is that reviews are often very brief and applicants are not able to work out how they were scored. Current peer checking doesn't work; how can this be improved?

Dr Julie Glover: NHMRC moved to an application-centric peer review process for large schemes like Investigator and Ideas Grants to better match applications with reviewers' expertise. This change was driven by data showing that the previous panel-based system often resulted in reviewers receiving applications outside their area of expertise. The new process has been reported by peer reviewers to provide better-suited applications, enhancing the review quality.

While some miss the panel approach, NHMRC still uses panels for smaller schemes and is exploring ways to incorporate the benefits of panels, such as transparency and calibration, into the application-centric process. NHMRC is committed to maintaining the advantages of the new system while addressing concerns and improving the process based on feedback and ongoing discussions with the Research Committee. Analysis that we have done in the past shows that panel members tended to follow the scores of the two primary reviewers. We also have data on the outcomes of panel discussions, and the most common outcome was that applications were scored lower after the panel discussion.

The PRAC findings indicate variability in scoring among peer reviewers, which challenges the idea that applications must consistently receive scores of sixes and sevens to be funded. Data shows that even successful applications do not always receive these high scores across every criterion from all reviewers. It is therefore inaccurate to assume that only the highest scores result in funding. Peer reviewers are encouraged to use the full range of scores as appropriate. While all schemes remain highly competitive and resources are limited, the diversity in scoring is recognised and accepted within the process.

[Additional Resource: Please refer to the work of the Peer Review Analysis Committee. The work referred to here was published by NHMRC as a CEO communique]

Question 5: Single CI applications

Question: How would you score a submission that has been submitted by a single person, and how should capability be factored in?

Dr Jimmy Breen: The capability of a Chief Investigator (CI) in Ideas Grants should be assessed based on their expertise and ability to cover the needs of the grant. If the CI is an Early Career Researcher (ECR) with a large team of Associate Investigators (AIs) supporting them, the expertise of the AIs should not be used to compensate for any lack of expertise in the CI. The CI must demonstrate that they have the necessary expertise and capability to undertake the proposed work independently.

While it is common for ECRs to have a team of AIs assisting them, the assessment should focus on the CI's own expertise and ability to execute the project. If the CI can cover the project's requirements with their expertise, they should not be marked down for not having additional personnel. The goal is to ensure that the CI can independently handle the project's demands, even if they do not have expertise in every area covered by the AIs. If we are going to fund blue sky research or new and innovative Ideas Grants, a person who wants to go out and do that work on their own, and who has all the necessary expertise, shouldn't be marked down for being a sole CI.

Question 6: Assessing team capability

Question: Is it appropriate to consider the entire team when assessing capability, or should we be differentiating between CIs and AIs in terms of what they're offering to the program?

Dr Julie Glover: My biggest piece of advice here is to always go back to the assessment criteria; the score descriptors describe what the CIs and the AIs each need to contribute to the grant.

Question 7: Evaluating applications beyond reviewer expertise

Question: I often receive grant proposals outside my area of expertise, making it difficult to assess their innovation or creativity. If the methods seem justified but I'm unsure about how groundbreaking the project is, what is the best way to evaluate proposals in unfamiliar fields?

Professor Stuart Berzins: It is important to remember that you are not the sole reviewer of an application; the benefit of multiple reviewers lies in the collective expertise each brings to different aspects of the submission. Ideally, all components of the application will be covered by reviewers with relevant expertise. It is also the responsibility of the applicant to ensure their submission is accessible and comprehensible to assessors. As a reviewer, your primary focus should be on evaluating the sections within your area of expertise. If there are elements that you do not fully understand, consider whether this results from a lack of clarity in the applicant's explanation, which would be a potential drawback of the application, or whether it simply falls outside your field. In the latter case, it may be appropriate to concentrate your assessment on those areas where you have substantial understanding.

Question 8: Providing feedback on weaknesses

Question: If we are supposed to identify areas of weakness, how do we give recommendations for improvement?

Professor Stuart Berzins: Taking advice from NHMRC, avoid unnecessary personal opinions in your review and focus on what's written. It's acceptable to suggest more experiments could be helpful, but don't dwell on how you would have done things differently. Keep your critique centred on what was actually done; that's the main purpose of reviewing.

Question 9: Assessing new ideas

Question: For Ideas Grants, applicants might propose entirely new concepts or techniques and may lack a direct track record in that area. How do you assess their ability to execute something novel compared to their existing experience?

Professor Stuart Berzins: The primary responsibility lies with the applicant to present a compelling case for their proposal. If the application fails to do so, this limitation should be reflected in both the evaluative comments and the assigned score. It is recognised that applicants may not have direct experience in every aspect of their proposed work; however, they should possess relevant experience or provide a convincing rationale for why they are suited to undertake the project. Demonstrating such qualifications is an essential component of effective grant writing. Ultimately, applicants are expected to articulate clearly whether their expertise is grounded in prior related work or represents a new direction, and to show that they, their team, and their resources are well-positioned to achieve success.

Question 10: Clinical Trials and Cohort Studies

Question: What are the key differences when reviewing with this scheme compared to CTCS grants? Is the focus different?

Dr Dev Sinha: Essentially, the CTCS scheme is very different, with different assessment criteria: it is track-record based and requires human participants in the trial. It has a very different scope. The key point is that if the primary objective is a clinical trial or cohort study, the application should be directed to the CTCS scheme. Within the Ideas Grants scheme, an application's objectives should be assessed against the Research Quality criterion.

Question 11: ‘Blue Sky’ research

Question: The expectations of the Ideas Grants scheme, potentially on the part of reviewers, seem to have changed from 'Blue Sky' ideas to incremental and well-supported (usually with preliminary data and detailed methods) projects. Is this the natural course of the scheme, reverting to Project Ideas? It's important to recognise that, with tighter funding and higher score requirements, demonstrating capability now often hinges on evidence of prior experience, even for blue sky or novel research. This makes it challenging for early-career researchers, who may be just as capable but lack an extensive track record. As scoring thresholds rise, innovative ideas increasingly compete with established experience, and younger researchers can be disadvantaged despite their potential. I think it's essential to acknowledge this reality in the current grant environment.

Dr Jimmy Breen: Regarding blue sky research and my involvement with other initiatives, it should be noted that while those initiatives do not fund a large number of applications, there can sometimes be confusion between blue sky research as a scientific outcome and as a new technique. In contexts where work involves Indigenous communities, translation and community benefit are considered significant translational objectives, focusing on practical impacts for communities or populations affected by disease. Both Anna and I are bioinformaticians and computational researchers who work across many broad research areas, so we are not always able to determine the overall significance of a publication, such as whether it represents an exceptional contribution or originates from a consistently high-performing lab. Therefore, understanding the requirements of the field and its translational aspects is especially relevant when assessing innovative ideas that have the potential to impact people's lives. This perspective informs what I look for in evaluating such research directions.

Dr Anna Trigos: Balancing risk and benefit is key for highly innovative proposals. Applications should clearly state they're adopting new methods, acknowledge uncertainties, and outline mitigation strategies, such as involving experts or using relevant equipment. Demonstrated experience in quickly adopting new technologies also shows feasibility.

For example, when single cell RNA-seq first emerged, no one had direct experience with it. Researchers justified their capability by referencing their transcriptomics background and past success integrating new techniques. Evidence of this adaptability in the application’s capability section is often sufficient to meet the criteria.

Question 12: Removal of track record from Ideas Grants

Question: Impact and paper prestige continue to play a major role in reviews, KPIs, promotions, and funding. How is the new ideas scheme addressing or separating these factors in its review process? Currently, the process resembles previous grant schemes, which may disadvantage those with new ideas but limited track records.

Dr Julie Glover: Thank you for raising this important issue. NHMRC has been a signatory to DoRA, the Declaration on Research Assessment, for an extended period, but we acknowledge that cultural change within the sector is a gradual process. Bringing attention to this topic also contributes to educational efforts. It is promising to observe an increasing number of institutions endorsing DoRA.

If similar comments appear in other reviewers’ feedback, please bring them to our attention as we aim to address such issues. We review the comments ourselves, and the final step in the process is designed to gather your perspectives and input, particularly regarding any concerns about irrelevant considerations.

Dr Dev Sinha: Initially, our focus was on assessing capability, but now that Ideas Grants is well established, it is important to clarify what we consider to be track record. Traditionally, track record has been associated with publication volume and the number of grants obtained. However, when evaluating capability, we broaden this definition to encompass whether an individual can effectively execute the proposed project. This includes demonstrated expertise through successful completion of similar research projects, collaboration history, access to necessary infrastructure, and robust institutional support. Thus, our definition of capability extends beyond conventional measures, representing a significant cultural shift in evaluation criteria.

Professor Stuart Berzins: One of the responsibilities of being an assessor is to follow the criteria, and part of the challenge is to recognise our own unconscious or conscious biases as well. If you notice that the publication source is influencing your perspective, but you know the assessment criteria should guide your approach, prioritise the assessment criteria. It's not to say that applicants can't mention that they have a Nature paper or a Science paper, but it's the job of the applicant to make that relevant to the application. If the applicant has shown expertise in this field, and their argument is supported by their publications in the same area, then this approach can be considered reasonable. It is not appropriate to assume that simply publishing a paper should guarantee grant success. There are multiple ways to appropriately utilise such information in the grant application process. It could be used quite effectively to demonstrate feasibility and add to the capability argument of the group. Providing evidence that you have prior relevant experience and that your work has produced meaningful results is useful. However, it is preferable not to depend solely on metrics such as the number of citations or the journal impact factor when making your case.

[Additional commentary by NHMRC: NHMRC also recommends the San Francisco Declaration on Research Assessment (DoRA) guidance on Rethinking Research Assessment.]

Question 13: Assigning reviewers

Question: I'm interested in why only keywords are being used to assign reviewers rather than the research areas.

Dr Dev Sinha: For application and assessor matching, we use an in-house mathematical optimisation tool. This tool uses machine learning to match applications with reviewers based on a range of information, not limited to research keywords. It includes the Research Keywords, Broad Research Areas and Fields of Research in your profiles.

We rank potential assessors by probability of suitability and use that ranking to send you an initial list against which to declare your conflicts of interest and suitability. Those who have reviewed for a long time will have noticed that the number of applications you need to declare against at the COI and suitability stage has halved.

The suitability of matches has improved as well. It is important to note the tool’s outcomes are verified by humans before being finalised.
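
[Illustrative example: NHMRC's matching tool and its data are not public, so the sketch below is a minimal, hypothetical illustration of how overlap between an application's keywords, Broad Research Areas and Fields of Research and those in reviewer profiles could be turned into a suitability ranking and an initial shortlist for conflict of interest and suitability declarations. The weights, scoring rule and names are assumptions made for illustration only, not NHMRC's actual method.]

```python
# Hypothetical sketch of keyword/field-based suitability ranking for matching
# applications to peer reviewers. This is NOT NHMRC's in-house optimisation tool;
# the profile fields, weights and scoring rule are illustrative assumptions only.
from dataclasses import dataclass


@dataclass
class Profile:
    keywords: set[str]            # Research Keywords
    broad_areas: set[str]         # Broad Research Areas
    fields_of_research: set[str]  # Fields of Research codes


def jaccard(a: set[str], b: set[str]) -> float:
    """Overlap between two sets, from 0.0 (disjoint) to 1.0 (identical)."""
    return len(a & b) / len(a | b) if (a or b) else 0.0


def suitability(application: Profile, reviewer: Profile) -> float:
    """Weighted overlap across the three profile fields (weights are made up)."""
    return (0.5 * jaccard(application.keywords, reviewer.keywords)
            + 0.3 * jaccard(application.fields_of_research, reviewer.fields_of_research)
            + 0.2 * jaccard(application.broad_areas, reviewer.broad_areas))


def shortlist(application: Profile, reviewers: dict[str, Profile], top_n: int = 10):
    """Rank reviewers by suitability and return the top candidates, i.e. an
    initial list to send out for conflict of interest and suitability
    declarations; in the real process the final matches are verified by humans."""
    scores = {name: suitability(application, profile) for name, profile in reviewers.items()}
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]


if __name__ == "__main__":
    app = Profile({"immunology", "t cells", "vaccines"}, {"Basic Science"}, {"320404"})
    pool = {
        "reviewer_a": Profile({"immunology", "vaccines"}, {"Basic Science"}, {"320404"}),
        "reviewer_b": Profile({"health services"}, {"Public Health"}, {"420305"}),
    }
    for name, score in shortlist(app, pool, top_n=2):
        print(f"{name}: suitability {score:.2f}")
```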

Question 14: Score sharing

Question: Can we please consider a way to moderate scores after sharing reviews? It would be very valuable to be able to consider some constructive expert reviews and adjust scores accordingly. Sharing scores together with comments would also increase accountability.

Dr Dev Sinha: This is something we are currently exploring. We are looking at the feasibility of sharing scores, what the risks are and how we manage that internally. There will be some communication about this coming out in the near future.

Questions unanswered during the Q&A sessions due to time constraints

Question 15: Removing inadequate reviews

Question: Last year I found that approximately 30% of reviews used the bare minimum of characters, were essentially useless, and probably scored the application a 5. My own application had one review that was useless. So even though there is an opportunity to review other reviews, it clearly is not being done. How can we ensure reviewers do a better job? Is it possible to remove inadequate reviews?

Answer: This year we will be writing to peer reviewers to confirm they have participated in the comment sharing process. A cultural change within the peer review community is necessary to address inappropriate or insufficient comments and promote a more collegial and transparent environment. Where required, inappropriate comments are removed.

Question 16a: Reviewer expertise

Question: Are there any processes used to account for differences in how critical reviewers tend to be across different fields of research, for example health services research vs basic science? Is there any calibration across reviewers with expertise in different fields of research?

Answer: No. Assessors are allocated to grants based on their conflicts of interests and suitability. NHMRC will check for outliers, and peer reviewers can raise concerns as part of the comment sharing process. Calibration and normalisation exercises were scoped out by PRAC in their analyses conducted on Ideas and Investigator Grant scores. More information can be found in the PRAC report.

Question 16b: Standardisation and calibration of assessments

Question: What actions are taken to ensure standardisation and calibration of assessment across peer reviewers, and across the different Fields of Research?

Answer: For the past few years, NHMRC has been conducting outlier screening checks in Ideas Grants to identify scores that differ significantly from those of other peer reviewers. We use statistical methods recommended by the Peer Review Analysis Committee (PRAC) to do this. PRAC was an expert committee that advised the CEO on this issue, and there's more information on PRAC on our website if you're interested.

Where outlier scores are identified, we will seek clarification from the peer reviewers if required. However, it is worth noting that there may often be valid, acceptable reasons for an outlier score. For example, it may reflect the specific expertise or judgement of the assessor, and therefore outlier scores are not necessarily incorrect. This is simply an extra quality assurance process we have put in place to ensure that the outlier score is not the result of an unintentional error or a typographical mistake.
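
[Illustrative example: the specific statistical method recommended by PRAC is not described in this answer, so the sketch below is a minimal, hypothetical illustration of what an outlier screen can look like: it flags a score that sits well away from the other scores on the same application so it can be queried with the reviewer. The median-deviation rule and the 2-point threshold are assumptions for illustration only.]

```python
# Illustrative outlier screen for peer review scores. The deviation rule and the
# 2-point threshold are assumptions for this example, not the statistical method
# actually recommended by PRAC.
from statistics import median


def flag_outliers(scores: dict[str, float], threshold: float = 2.0) -> list[str]:
    """Return reviewer IDs whose score differs from the median of the other
    reviewers' scores on the same application by more than `threshold` points.
    Flagged scores are queried with the reviewer, not automatically discarded."""
    flagged = []
    for reviewer, score in scores.items():
        others = [s for r, s in scores.items() if r != reviewer]
        if others and abs(score - median(others)) > threshold:
            flagged.append(reviewer)
    return flagged


if __name__ == "__main__":
    application_scores = {"reviewer_1": 6, "reviewer_2": 5, "reviewer_3": 6, "reviewer_4": 2}
    print(flag_outliers(application_scores))  # ['reviewer_4'] is contacted to confirm the score
```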

[Additional Resource: Please refer to the work of the Peer Review Analysis Committee]

Question 17: Resubmitted applications

Question: I have reviewed for the Ideas Grants for 3 years now and I tend to get resubmissions. Is this part of the algorithm, or by chance based on the keywords?

Answer: As we get better matching between peer reviewers and applications, this may be a more common occurrence. You will need to disregard what was previously submitted and focus on what they have put forward this year, scoring using the assessment criteria.

Question 18: Proposals

Question: To what level of detail should reviewers examine proposals from non-medical perspectives? This is particularly relevant for projects that propose to use AI.

Answer: If reviewers only assessed proposals in their own fields, it would be difficult to secure enough reviewers for every grant application. Since each proposal is evaluated by multiple assessors, their diverse backgrounds help balance the process. Unless an application is so far outside your expertise that you cannot grasp the concepts at all, you should attempt to judge it according to the assessment criteria.

You should be able to evaluate whether the applicant makes a compelling case, identifies an important problem, presents sound concepts, and proposes significant outcomes. These are aspects that do not demand deep prior knowledge of a specific discipline and can be recognised against the assessment criteria even if you are not a specialist in the area. Ultimately, it is also the applicant's responsibility to write their proposal clearly, so that it is accessible to reviewers from various fields and to the broader scientific community.

Question 19: Consumer engagement

Question: There are many unknowns surrounding 'consumer engagement' when considering basic science projects. A clarification about this would be greatly appreciated.

Answer: Specific and detailed guidance on this issue will be communicated by NHMRC soon.

Question 20: Gender Balance

Question: We were instructed in the 2024 round guidelines that 'Consideration should also be given to the gender balance and development of new researchers within the applicant team.' Could the NHMRC provide concrete guidelines for what action should be taken by reviewers after considering gender balance and career stage of the team?

Answer: The main factor for the Capability criterion is whether the team has the appropriate people with the relevant skills, experience, collaborations, and infrastructure to conduct the proposed research. Other aspects, such as gender balance, are of secondary importance compared to the team's ability to achieve the research objectives. There is a bit of a commonsense approach there as well. For instance, you cannot expect gender balance in a single CI application. We have a section in the Peer Review Guidelines that helps you to understand your own biases as well. You should be aware of your biases, whether they are conscious or unconscious, and whether they are related to gender, ethnicity, institution, or research discipline. We also provide some guidance from the Declaration on Research Assessment and an implicit association test. These can help you to identify your biases and to ensure that you assess each application fairly.

[Additional commentary by NHMRC: Section 4.3.6.1 Mitigating Bias in Peer Review from the Ideas Grants 2025 Peer Review Guidelines on GrantConnect (GO7599) is being referred to here. NHMRC also recommends the San Francisco Declaration on Research Assessment (DoRA) guidance on Rethinking Research Assessment.]

Question 21: PSP packages

Question: This may not be directly relevant to the discussion here, but why is it that the PSP packages fall short of actual salaries by a long way? Is there something in the works to address this?

Answer: Your responsibility as the assessor is to determine whether the proposed PSP level is appropriate to carry out the work described. While it is acknowledged that a gap exists between the PSP rates and actual salaries and that, under the Funding Agreement, the institution is committed to covering this difference, your focus as a peer reviewer should remain on the role of the PSP. Consider whether the duties and designation of the PSP are justified for the scope of work proposed. Discussion around the gap between PSP rates and real-world salaries continues within NHMRC and its committees, but your assessment should centre on the position’s suitability for the project.

Question 22: Career stages

Question: What are the proportions of reviewers from different career stages?

Answer: We currently do not have the data available on this issue. Peer reviewers are recruited according to the Guiding principles for peer reviewer nomination and appointments.

Question 23: Capability of junior researchers

Question: Can you please clarify capability, especially for more junior researchers? I noted last year in the reviews from other panel members that junior researchers were discriminated against despite having all the capacity to do the research; because they had never led a big project, their capability score was very low. This is despite having led small-to-medium projects and having CIs/AIs in their research team and at their location who are clearly mentors and whose role in the project is clearly indicated.

Answer: The Ideas Grants scheme intends to support the involvement of early career researchers. The capability assessment should focus on the ability of the applicants to perform the proposed work, not their career achievements. What matters is whether they have the skills and expertise to carry out the specific methods and techniques required for the project. As a peer reviewer, if you notice comments in other assessments that raise concerns, we encourage you to contact your secretariat during the comment sharing process.

Question 24: Proposals that substitute another funding

Question: How should one assess an Ideas Grants application when it is clear that the proposal is being used as a substitute for a fellowship or larger program funding, for example a very large budget greater than $1.5 million?

Answer: NHMRC’s peer review process is designed to provide a rigorous, fair, transparent and consistent assessment of the merits of each application. Therefore, your role as a reviewer is to assess the grant on its merits against the Ideas Grants criteria. If you identify any eligibility concerns as outlined in the Ideas Grant Guidelines, please raise them with your secretariat.

[Additional commentary by NHMRC: Section 4 Eligibility from the Ideas Grants 2025 Guidelines on GrantConnect (GO7599)]

Question 25: Time reviewing grants

Question: How much time should we spend reviewing each grant?

Answer: The time it takes to review each application varies. In the past, Peer Review Mentors have said they do not set a fixed time limit, but try to review their applications close together rather than spreading them out too much. Previous surveys have indicated that the majority of peer reviewers spend between 2 and 3 hours per application.

Question 26: Moderation of scoring

Question: Peer reviewers give scores in 4 categories, but how does the current process obligate them to ensure they apply the descriptors under each category? How is this process of anonymous peer review with no guidance, justification, moderation or reconciliation improving the quality assurance of NHMRC funding? Is the NHMRC process keeping up with the standard practice of peer review for publication or grant review around the world?

Answer: NHMRC continues to look for ways to improve peer review, based on our own experience and that of other funding agencies internationally. Peer review by independent experts against published criteria is considered the international gold standard for allocating research grants, but there is no universally agreed best process. Design of peer review processes can reflect a range of factors, including the scope and scale of the grant scheme, assessment criteria, availability of expert reviewers, resources (funds and staff), time available to make decisions and other local or historical conditions.

The Ideas Grants scheme offers a range of training opportunities to peer reviewers such as the live peer review forum, access to PRMs for questions and guidance and a wide range of written material. There are also a number of quality assurance checks throughout the peer review process such as outlier checks, internal checks and the comment sharing process. We strongly encourage peer reviewer participation in the comment sharing process to share concerns with NHMRC about Ideas Grants assessments.

PRAC was an expert committee that advised the CEO on many issues, including NHMRC’s current peer review processes. There's more information on PRAC on our website.

[Additional Resource: Please refer to the work of the Peer Review Analysis Committee]