How it Works
Credential Registry Infrastructure
User Community Support
Web Applications Hosting and Source Code
In collaboration with the National Training and Simulation Association (NTSA), the Advanced Distributed Learning (ADL) Initiative is excited to announce iFest 2017. iFest provides unique opportunities for military, government, industry, and academia professionals to share the latest in distributed learning innovations. This year’s theme emphasizes learning analytics and associated topics such as technological interoperability (e.g., xAPI), implementation, privacy, and security. Learn more: http://www.ndia.org/events/2017/7/31/adl-ifest
Among the program offerings is a panel on how competency standards are key to advanced distributed learning. ADL is represented on the TAC, and several of the panelists are fellow TAC members.
Competencies are pervasive in education, training, and workforce development. They are used to describe learning objectives, a learner’s current state of progress, the relevance of learning materials, job requirements, and so on. Standardized digital competency data are needed by smart apps such as personal assistants, adaptive learning environments, and intelligent tutoring systems as well as for competency-based credentialing, staffing, and talent management applications. In this session, distinguished panelists representing leading standards development organizations will discuss their perspectives and how their standards relate to personalized learning, learning analytics, lifelong transcripts, and workforce transition. After the panel discussion, Dr. Jim Belanich of the Institute for Defense Analyses will facilitate a discussion with the attendees about competencies in the DoD.
Panelists (and their affiliations):
- Joshua Marks, IMS CASE Project
- Eric Shepherd, HR Open Standards
- Michael Sessa, PESC
- Rosalyn P. Scott, MedBiquitous
- Robby Robson, CASS Project and IEEE Standards Association
- Jeanne Kitchens, Credential Engine
- Discussant: Jim Belanich, Institute for Defense Analyses
- Moderator: Avron Barr, IDA and IEEE LTSC
The comment period for the June 30th, 2017 CTDL release has officially started. Please review the change history and provide comments and feedback ahead of the official release on June 30th. These changes include some important new updates, such as ceterms:ConditionManifest and ceterms:CredentialPerson.
Change History: https://credreg.net/ctdl/release
Here you can see all of the changes for the forthcoming CTDL release. Follow the link for each item to view it in the pending metadata viewer, and post comments and feedback in the linked GitHub issues:
To view the entire pending CTDL schema: http://credreg.net/ctdl/pending
Note that only new classes and properties have a status of pending there; changes to existing items can be found in the change history.
Thank you on behalf of the Credential Engine Technical Team.
Thank you for participating in this month's TAC meeting. Meeting notes and follow-up information can be found on the Meeting Minutes and Resources page.
About Credential Registry
Credential Registry allows users to see what various credentials represent in terms of competencies, transfer value, assessment rigor, third-party approval status, and much more.
The open and voluntary registry will include all kinds of credentials, from education degrees and certificates to industry certifications, occupational licenses, and micro-credentials. Each credential will describe its name, type, level, competencies, assessments, accreditation, labor market value, and so on.
The goals are transparency and clarity, and to help align credentials with the needs of students, job seekers, workers, and employers.
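As an illustration, a credential's Registry description might look like the following CTDL-style record. This is a minimal sketch only: the property names come from the ceterms vocabulary, but the values and the choice of fields are illustrative assumptions, not a complete or authoritative record; see the CTDL schema at http://credreg.net/ctdl for the actual terms.

```python
import json

# Illustrative only: a minimal CTDL-style credential description.
# Property names follow the ceterms vocabulary; the values and the
# set of fields shown here are assumptions for illustration.
credential = {
    "@type": "ceterms:Certification",
    "ceterms:name": "Example Welding Certification",
    "ceterms:description": "Demonstrates core welding competencies.",
    "ceterms:credentialStatusType": "Active",
}

print(json.dumps(credential, indent=2))
```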
Get Started Now
The Credential Engine project’s technical team (TT) provides credentialing pilot site partners with a set of services beneficial to each organization’s short- and long-term technology planning.
Through each phase of the Credential Engine project, the TT provides assistance, including non-technical and technical materials for administrators and technical staff.
Throughout the process, partners provide the information and access needed to co-engineer the Credential Registry, Credential Directory, and other potential apps. Pilot site partners are also discovering the benefits of linked data and providing iterative feedback on the process and outcomes. Learn More
Developers can leverage the Credential Registry API to build applications that read or publish as much or as little information about credentials as they need.
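A minimal sketch of how such an application might talk to the Registry over HTTP. The host name, endpoint paths, and authorization scheme below are assumptions for illustration only; consult the Credential Registry API documentation for the actual routes and authentication requirements.

```python
import json
import urllib.request

# Hypothetical host for illustration; the real Registry host and
# routes are documented with the Credential Registry API.
REGISTRY_BASE = "https://example-registry.org"

def build_fetch_request(ctid: str) -> urllib.request.Request:
    """Build a GET request for a single credential record by its ID."""
    url = f"{REGISTRY_BASE}/resources/{ctid}"
    return urllib.request.Request(url, headers={"Accept": "application/json"})

def build_publish_request(envelope: dict, api_key: str) -> urllib.request.Request:
    """Build a POST request publishing a credential record."""
    body = json.dumps(envelope).encode("utf-8")
    return urllib.request.Request(
        f"{REGISTRY_BASE}/resources",
        data=body,
        headers={
            "Content-Type": "application/json",
            # Auth scheme assumed; the real API may differ.
            "Authorization": f"Bearer {api_key}",
        },
        method="POST",
    )

req = build_fetch_request("ce-0000-example")
```

An app that only reads credentials needs nothing more than the fetch request; publishing requires credentials issued to the organization.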
The Credential Engine project's developers are using the Dublin Core Application Profiles process to create systems that communicate virtually all aspects of credentials. Learn More
Technical Advisory Committee
The Technical Advisory Committee (TAC) promotes collaboration across, and harmonization of, standardization initiatives that are developing data models, vocabularies, and schemas for credentials, competency frameworks, and related competency information (such as criticality ratings and assessment data) typically captured in a wide variety of systems.
The goal is to identify, document, and openly share solutions that support comparability of credentials and competencies across industries and sectors, human resource systems, education, and government systems. Learn More
Easy to Use
The code base, data schema, and API endpoints are easy to use and easy to learn. The code base is also extensible, making new features easier to add over time.
Source code, specs, and docs are all open. The system is designed to ensure metadata stays open, as well.
No server or API is 100% reliable, so the Credential Registry will distribute its metadata. It backs up data to archive.org, an organization dedicated to ensuring data never disappears.
This approach is based on research into metadata distribution by the US Departments of Defense and Education, among many other organizations.
The Credential Registry is designed to scale both horizontally, by allowing communities to form independently, and vertically, to handle high demand.
Security is very hard to get right. We have designed an open metadata distribution system so that tight security is not necessary. We use cryptographic signatures within the open data itself to ensure that organization identities cannot be impersonated.
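The idea can be illustrated with a toy digital-signature sketch: an organization signs its metadata with a private key, and anyone can verify the signature with the matching public key, so tampered or impersonated records fail verification. The textbook RSA key below is deliberately tiny and insecure; real deployments use vetted cryptographic libraries and standard signature formats.

```python
import hashlib

# Toy textbook-RSA signature over metadata (NOT secure: the key is
# tiny and unpadded; shown only to illustrate the signing concept).
N, E, D = 3233, 17, 2753  # demo key pair: n = 61 * 53, public e, private d

def sign(metadata: bytes) -> int:
    """Sign a metadata record with the private exponent D."""
    digest = int.from_bytes(hashlib.sha256(metadata).digest(), "big") % N
    return pow(digest, D, N)

def verify(metadata: bytes, signature: int) -> bool:
    """Verify a signature using only the public key (N, E)."""
    digest = int.from_bytes(hashlib.sha256(metadata).digest(), "big") % N
    return pow(signature, E, N) == digest

record = b'{"@type": "ceterms:Credential", "name": "Example"}'
sig = sign(record)
assert verify(record, sig)  # signature checks out against the public key
```

Because only the signing organization holds the private exponent, no one else can produce a signature that verifies against its public key; this is what makes identities in the open data non-impersonable.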