Outstanding Questions

  1. What categories are missing from this evaluation process?
  2. Which of the TRL criteria are no longer necessary (if any)?
  3. What criteria are missing from the existing categories?
  4. What categories are superfluous (if any)?
  5. What is the difference between good code practices and this idea of readiness levels?
  6. Given #5, what do we consider to be basic good code practices that form a baseline?
  7. What types of software can be evaluated within this kind of framework?
  8. How do the criteria vary across those software types?
  9. But, even in that variation, can a set of core categories and criteria be identified?
  10. How can “maintenance-only” or “unmaintained” be captured in readiness levels or in an evaluation category?
  11. Are there criteria for research software objects specifically that should be included in any evaluation process?
  12. How can these core criteria be used to develop scaffolds for educational purposes?
  13. Is there a minimum viable evaluation product?
  14. If we can create a progression beyond simply good code practices, can it be structured in a way that is meaningful from an implementation perspective?
  15. Which criteria are important to consider but non-obvious in terms of external or explicit signaling? What can an evaluator glean from a repository or other public materials, and what may be missing from those materials even when the criterion is being met on some restricted-access platform?
  16. Likewise, can we identify intangible criteria: which questions that are important for software quality, progression, or reuse are difficult for an evaluator to answer from the kinds of materials we make available, publicly or not?
  17. When is it appropriate to consider code metrics themselves (understanding their inherent limitations), e.g., cyclomatic complexity (McCabe’s metric) and similar measures? (See the first sketch after this list.)
  18. Or, rather than focusing on the difficulties noted in #17, what can we say about the difference between evaluating for existence (i.e., the README file exists) and evaluating for “quality” (i.e., the README exists but was auto-generated and provides little to no additional context)? What would a potential linter cover, and is that enough to provide good information? (See the README-check sketch after this list.)
  19. Let’s clarify “reuse” in this context. For evaluation, and especially for the progressions, do we mean (a) reuse in other code efforts, as in a code module; (b) reusability in the Open Science sense of sustainability and reproducibility; or (c) all of the above? How does that choice affect the criteria and the reusability progression here?
  20. Are the TRL criteria clear about when a question refers to the project information versus the software product proper? Does the distinction hold for web applications, i.e., does “Support HTTPS” refer to the project website describing the software or to the web application developed by the project?
  21. What does interoperability mean in this context and how does that change based on the type of software being evaluated?
  22. Regarding the RRL criteria, do those descriptions provide enough guidance and detail for an evaluation process as-is? What would the evaluation criteria for these topics and levels look like?
  23. Again, considering the RRL, do its topics and criteria align with the TRL criteria? Should they? Should a TRL evaluation process and an RRL evaluation process be treated as separate activities?
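
As a concrete anchor for #17, the sketch below computes a rough version of McCabe’s cyclomatic complexity from a Python syntax tree. The set of node types counted as decisions is a simplifying assumption made for illustration; established tools such as radon or the mccabe plugin used by flake8 implement the metric more completely.

```python
import ast

# Node types treated as adding one decision point each. This is a
# simplified take on McCabe's metric (CC = decisions + 1), not the
# full definition used by dedicated tools.
_DECISION_NODES = (
    ast.If, ast.For, ast.While, ast.ExceptHandler,
    ast.IfExp, ast.comprehension,
)

def cyclomatic_complexity(source: str) -> int:
    """Rough cyclomatic complexity of a Python source string."""
    tree = ast.parse(source)
    decisions = 0
    for node in ast.walk(tree):
        if isinstance(node, _DECISION_NODES):
            decisions += 1
        elif isinstance(node, ast.BoolOp):
            # Each additional and/or operand adds a branch.
            decisions += len(node.values) - 1
    return decisions + 1

if __name__ == "__main__":
    sample = """
def classify(x):
    if x < 0:
        return "negative"
    elif x == 0:
        return "zero"
    return "positive"
"""
    print(cyclomatic_complexity(sample))  # prints 3 for this sample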
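```

For #18, an existence-versus-quality check might look like the following sketch. The word-count threshold and the expected section names are placeholder assumptions, not criteria taken from the TRL or RRL documents; the point is to show how shallow an automated “linter” signal is compared with a human judgment of README quality.

```python
from pathlib import Path

# Placeholder heuristics for illustration only; a real rubric would
# define these thresholds and sections explicitly.
MIN_WORDS = 100
EXPECTED_SECTIONS = ("install", "usage", "license", "contribut")

def check_readme(repo: Path) -> dict:
    """Existence check plus a crude quality heuristic for a README."""
    candidates = [p for p in repo.iterdir()
                  if p.is_file() and p.name.lower().startswith("readme")]
    if not candidates:
        return {"exists": False}
    text = candidates[0].read_text(errors="ignore")
    lower = text.lower()
    return {
        "exists": True,
        "word_count_ok": len(text.split()) >= MIN_WORDS,
        "sections_found": [s for s in EXPECTED_SECTIONS if s in lower],
    }

if __name__ == "__main__":
    print(check_readme(Path(".")))
```

A check like this can confirm that a README exists and mentions expected topics, but it cannot tell whether the content was auto-generated or genuinely explains the software, which is the gap question #18 is pointing at.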