Thank you in advance for your efforts as a member of the Program Committee for KubeCon + CloudNativeCon Europe, taking place virtually May 4–7, 2021.
These are the official CFP Scoring Guidelines and Best Practices to use when reviewing your set of proposals. Please bookmark this page for easy reference. If you have any questions, please email Nanci Lancaster.
Grade the quality of each proposal on a scale of 1 (lowest) to 5 (highest) for content, originality, relevance, and speaker(s):
Reminder: You are required to leave a comment on each proposal you review, detailing the reasoning for your score. If we find that your comments are not constructive (e.g., simply writing “LGTM”), you may not be asked to return as a program committee member in the future.
For each proposal, you will indicate whether you see it ultimately being part of the accepted program by stating “yes” or “no.”
If you come across a proposal that does not seem to fit the topic you are reviewing, use the optional drop-down menu to indicate which topic you think fits it best. Please still grade this proposal as you would any other within your review set.
Review Process Best Practices
Single-Blind vs Double-Blind Reviews
In a double-blind peer review, the identities of both the authors and the reviewers are kept hidden. For KubeCon + CloudNativeCon, we conduct single-blind reviews, in which only the reviewers’ identities are hidden from the speakers. We do not hide speaker information from the program committee because we strive for diverse perspectives across experience levels, individuals, genders, and companies, and this is difficult to achieve with nearly 2,000 proposals if all information regarding the speakers is hidden.
To ensure the program committee doesn’t end up approving all talks from the same company, or marking someone down simply for typos in their description rather than evaluating their experience in the subject, it is important for the committee to know more about the speakers behind each talk.
Examples: If all panelists on a proposed panel are men, or all are from the same company, the proposal should be marked down because it lacks diverse perspectives. If a reviewer realizes they have approved 50 talks from the same company, they have an opportunity to go back and review those talks under a tighter microscope to help with company diversity.
The final talk selection will be made by the conference co-chairs, Stephen Augustus and Constance Caramanolis, taking into account the recommendations from all track chairs, the need to balance diversity, and the rule that each speaker may give at most one non-panel talk and participate in at most one panel.
Program Committee Benefits
Each member may choose between two different tiers to determine the number of proposals they are assigned and benefits received:
| Tier | Proposals Reviewed | Benefits |
|---|---|---|
| Tier 1 | 50–100 | 50% off registration |
| Tier 2 | 101–200+ | 100% off registration, $200 Amazon gift card |
Track Chairs are chosen from the program committee and are granted 100% off the cost of registration and a $400 Amazon gift card. The Track Chairs’ main responsibility is to review the program committee’s scores and form a shortlist of the top 30% of proposals to accept, taking into consideration speaker, company, and gender diversity as well as experience-level balance within their track. These shortlists are then sent to the co-chairs, who make the final selections.
If you require assistance reviewing proposals, or have questions about the review process or any of the best practices we have suggested, please contact Nanci Lancaster.