CK has an integrated cross-platform package manager, implemented with Python scripts, a JSON API and JSON meta-descriptions, that automatically rebuilds the software environment required to run a given research workflow on a user's machine.[18]
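The idea of driving environment setup from JSON meta-descriptions can be illustrated with a small sketch. The field names (`deps`, `tags`, `version_min`) and the resolver function below are illustrative assumptions, not CK's actual schema or API:

```python
import json

# Hypothetical CK-style meta-description for a package.
# Field names are illustrative assumptions, not CK's real schema.
META = json.loads("""
{
  "soft_name": "example-dnn-library",
  "deps": {
    "compiler": {"tags": "compiler,gcc", "version_min": "7.0"},
    "python":   {"tags": "python",       "version_min": "3.6"}
  }
}
""")

def resolve_deps(meta, installed):
    """Return (name, tags) pairs for dependencies declared in `meta`
    that are not yet present on this machine, so a package manager
    could install them and rebuild the environment automatically."""
    missing = []
    for name, spec in meta.get("deps", {}).items():
        if name not in installed:
            missing.append((name, spec["tags"]))
    return missing

# Example: a machine that already has Python but lacks a compiler.
print(resolve_deps(META, installed={"python"}))
```

Keeping the dependency description in machine-readable JSON, rather than in free-form installation notes, is what lets the same workflow be replayed on machines with different compilers, libraries and data sets.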
CK enables reproducibility of experimental results through community involvement, similar to Wikipedia and the physics community. Whenever a new workflow with all of its components is shared via GitHub, anyone can try it on a different machine, in a different environment, and with slightly different choices (compilers, libraries, data sets). Whenever unexpected or wrong behavior is encountered, the community explains it, fixes the components and shares them back, as described in [19].
Fursin, Grigori (29 March 2021). "Collective Knowledge: organizing research projects as a database of reusable components and portable workflows with common APIs". Philosophical Transactions of the Royal Society A. arXiv:2011.01149. doi:10.1098/rsta.2020.0211.
Reusable CK components and actions to automate common research tasks. https://cKnowledge.io/actions
Live paper with reproducible experiments to enable collaborative research into multi-objective autotuning and machine learning techniques. https://cknowledge.io/report/rpi3-crowd-tuning-2017-interactive
Online repository with reproduced results. https://cKnowledge.io
Index of reproduced papers. https://cKnowledge.io/reproduced-papers
Plowman, Ed; Fursin, Grigori. "Know Your Workloads: Design more efficient systems!". ARM TechCon'16 presentation. https://github.com/ctuning/ck/wiki/Demo-ARM-TechCon'16
Artifact Evaluation for systems and machine learning conferences. http://cTuning.org/ae
ACM TechTalk about reproducing 150 research papers and testing them in the real world. https://learning.acm.org/techtalks/reproducibility
EU TETRACOM project to combine CK and CLSmith (PDF). Archived from the original on 2017-03-05; retrieved 2016-09-15. https://web.archive.org/web/20170305003204/http://es.iet.unipi.it/tetracom/content/uploads/Posters/TTP35.pdf
Artifact Evaluation reproduction for "Software Prefetching for Indirect Memory Accesses" (CGO 2017) using CK. 16 October 2022. https://github.com/SamAinsworth/reproduce-cgo2017-paper
GitHub development website for CK-powered Caffe. 11 October 2022. https://github.com/dividiti/ck-caffe
Open-source Android application to let the community participate in collaborative benchmarking and optimization of various DNN libraries and models. http://cknowledge.org/android-apps.html
"Reproducing quantum results from Nature – how hard could it be?". https://www.linkedin.com/pulse/reproducing-quantum-results-from-nature-how-hard-could-lickorish
MLPerf crowd-benchmarking. https://cknowledge.io/c/solution/demo-obj-detection-coco-tf-cpu-benchmark-linux-portable-workflows
MLPerf inference benchmark automation guide. 17 October 2022. https://github.com/mlcommons/ck/tree/master/ck/docs/mlperf-automation
List of shared CK packages. https://cKnowledge.io/packages