Question:
What is ReadyGo's philosophy on course progress?
Answer:
Our philosophy is that a course reports both a score and a completion status, and that these should be handled independently of each other. However, a large number of customers want a score above 90% to equal completion, even though tying the two together throws away one of the very few pieces of reported data available. Each LMS pack has a different philosophy: some tie completion to the score, some mark completion when the student reaches the exit page, and some base completion on visiting a certain number of pages. This way, course creators can choose a pack that meets their needs.
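The different pack philosophies can be sketched as small JavaScript functions. These function names are illustrative only (they are not from ReadyGo's actual packs); they just show that completion can either be derived from the score or reported independently:

```javascript
// Philosophy 1: tie completion to the score (e.g. score > 90% = completed).
function completionFromScore(scorePercent) {
  return scorePercent > 90 ? "completed" : "incomplete";
}

// Philosophy 2: report completion independently of the score,
// e.g. when the student reaches the exit page.
function completionFromExitPage(reachedExitPage) {
  return reachedExitPage ? "completed" : "incomplete";
}

console.log(completionFromScore(95));      // "completed"
console.log(completionFromScore(80));      // "incomplete"
console.log(completionFromExitPage(true)); // "completed"
```

A student who reads every page but scores 85% would be "incomplete" under the first philosophy and "completed" under the second, which is exactly the data that gets thrown away when the two are tied together.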
Thursday, June 4, 2009
Wednesday, June 3, 2009
Can my authoring tool support more than one behavior?
Question:
How can I verify that my authoring tool supports more than one behavior?
Answer:
Most authoring tools simply have one SCORM behavior and one AICC behavior. If the author wants a different behavior, they have to break the course down so that each page is a SCO, and then pay for the programming effort. In ReadyGo WCB, all the behavior is programmed in JavaScript. Each LMS interface has a file that lists the component pieces used during course generation: there are components for the "index.htm" page, the main/first page, the bullet pages, the test pages, the sidebar, the services/menu bar, and the exit page. Each of these pages plays its own role in the lifecycle of a course. The listing is in a .des file, whose contents are described in a file called "desfile.txt" in the LMS folder. The .des file contains the names of the files used in the different places throughout a course (e.g. when a test loads, within the index page, etc.). A file called "descriptions.txt" in the "lms" folder contains a summary of every LMS pack.
What does a SCORM LMS actually do?
I was just asked some great questions on how LMSs actually work with a course. I thought others might be interested in this conversation:
Question:
I have worked with a number of LMSs. Why is it that all the work is done on the course side? The LMS must do something so that the course can send data. What does the course do, and what does the LMS do? Is the LMS the data receiver, which then processes the data in a given way (and how flexible is that)? Does the LMS create the reports?
The course sends the data. I need an example here; take, for example, the completion status:
Does an LMS have a data field that is set to "completed"/"not completed" depending on what the course sends? How many data fields are there for a course to be SCORM-compatible? Which fields? What does an LMS need to do to be SCORM-compatible?
Start a course, notice the exit, write a status, store the test results...?
Answer:
For most LMSs, what the LMS has to provide when it launches the course is a user ID, the "suspend_data" (2,048 characters that the course set the previous time it was taken), the user name, the previous time on course, and a few other "core" fields. That is what is provided to a good SCORM course. There is no logic that the LMS is supposed to perform (with SCORM 1.2 and earlier): an LMS only provides data storage and retrieval. With AICC, since the course couldn't really get at the content sent by the LMS, the requirement for data retrieval was minimal.
For SCORM 1.2 conformance, all that a course needs to do is send LMSInitialize and LMSFinish. This makes a course SCORM "conformant," which is why you will see tools out there that can turn a Word document into a SCORM-conformant "course." Of course, instructionally this has no value beyond tracking that someone has taken the course, and a simple web statistics tracker could tell you that (you can see which users, based on their IP addresses, have seen which pages of content).
For an LMS to be SCORM 1.2 conformant, it must handle LMSInitialize(), LMSCommit(), LMSSetValue(), LMSGetValue(), and LMSFinish(). The important work is done by LMSSetValue, which specifies a variable name and a value. To be 1.2 conformant, an LMS must support 10 "core" values (student_id, student_name, lesson_location, credit, lesson_status, entry, score, total_time, exit, session_time), though most 1.2 LMSs have traditionally supported only about five of these. There are some other groups of values, called "objectives" and "interactions," where a course author can record the score for individual questions. In SCORM 2004, support for these is required. ReadyGo courses have used them in SCORM 1.2, but only a handful of LMSs track them, and even fewer can report on them.
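A minimal sketch of that call sequence, run against a stub API object, looks like this. In a real course the "API" object is located by searching up the frame hierarchy to the LMS window; the stub here is an assumption for illustration and simply stores values in memory so the sequence of SCORM 1.2 calls can be seen end to end:

```javascript
// Stub standing in for the API object an LMS would expose to the course.
// Real SCORM 1.2 implementations return the strings "true"/"false".
var API = {
  data: {},
  LMSInitialize: function (param) { return "true"; },
  LMSSetValue: function (name, value) { this.data[name] = value; return "true"; },
  LMSGetValue: function (name) { return this.data[name] || ""; },
  LMSCommit: function (param) { return "true"; },
  LMSFinish: function (param) { return "true"; }
};

// Typical session: initialize, set a few "core" values, commit, finish.
API.LMSInitialize("");
API.LMSSetValue("cmi.core.lesson_status", "completed");
API.LMSSetValue("cmi.core.score.raw", "92");
API.LMSCommit("");
API.LMSFinish("");

console.log(API.LMSGetValue("cmi.core.lesson_status")); // "completed"
```

Note that the stub performs no logic at all, only storage and retrieval, which is exactly the division of labor described above: the course decides what to report, and the LMS just records it.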
For continuity of behavior from one session to another, therefore, the course generally has to put the data it might need into the "suspend_data" field and parse it back when it loads. With the better LMSs, the course could ask for a listing of the objectives and interactions previously provided, along with the previous lesson status and score. The difficulty comes if you have 20 questions in 4 tests within one SCO (unit): it is hard to retrieve this type of detail from most LMSs, since they support only the "core" set. So, with WCB, we use the suspend_data field to store and retrieve previous scores set by the course. Any logic for certificate generation, then, has to be handled by the course itself.
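As an illustrative sketch (this is not ReadyGo's actual suspend_data format, just a made-up encoding), a course can pack several per-test scores into the single 2,048-character suspend_data string and parse them back on the next launch:

```javascript
// Pack per-test scores into one string, e.g. {test1: 80, test2: 95}
// becomes "test1=80;test2=95", small enough to fit in suspend_data's
// 2048-character limit for any realistic number of tests.
function packScores(scores) {
  return Object.keys(scores)
    .map(function (key) { return key + "=" + scores[key]; })
    .join(";");
}

// Parse the string back into an object when the course reloads.
function parseScores(suspendData) {
  var scores = {};
  if (!suspendData) return scores;
  suspendData.split(";").forEach(function (pair) {
    var parts = pair.split("=");
    scores[parts[0]] = parseInt(parts[1], 10);
  });
  return scores;
}

// Session 1: the course would send this via LMSSetValue("cmi.suspend_data", ...).
var packed = packScores({ test1: 80, test2: 95 });

// Session 2: the course reads suspend_data back and restores its state.
var restored = parseScores(packed);
console.log(restored.test2); // 95
```

With the scores restored, the course itself can then apply whatever logic it needs, such as deciding whether the student has earned a certificate.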
Because of the difficulties in retrieving data, there is a perception that a SCO should be designed to contain only one question. That is why you will see many authoring tools that set up every test question, and usually every page of content, as a separate SCO. The golden rule with SCORM is that one SCO cannot send the user to another SCO, so after each SCO (or "page" in most cases), the course has to exit from the LMS, and the LMS has to launch the next SCO. In some cases this can take noticeable time, since the LMS has to retrieve the user's previous data and package it so that it can be obtained by the SCO; in a good LMS, this may be a lot of data. So the end user suffers a delay between each SCO. If the SCO is designed as a chapter or an entire course, the delay will not be noticeable. However, if each page is a SCO, the course-taking experience could be painful.
Tuesday, May 26, 2009
What works when creating Web-Based Training
When creating Web-Based Training, the best approach is to look at what works best on the Web. I am a fan of allowing the student to navigate anywhere in the course with just 3 clicks. This means providing navigation menus at all times, with access at least to all the start-of-chapter pages or to a course-wide table-of-contents. What is so powerful about web pages is that they give easy access to whatever content you want to allow. If you give the student control over their navigation, they can adapt the training to their immediate needs.
The biggest frustration I have heard (and experienced) is from not having control over my time when I am doing "self-paced" training. That is, if I have to visit every page, wait for the narration to complete, and/or follow the course author's idea of how they'd like me to go through the content, I get very frustrated, and tune out of the course. If the course allows me to quickly find the areas where I need more information and allows me to take the necessary tests for certification, I enjoy the course much more. Beyond that, I don't expect to retain 100% of the content in the course. But I can bookmark the course. So, if I can get back to the content for a just-in-time refresher, the course will be really useful to me.
A good authoring tool will make it trivial to include the necessary navigation (without having to add it manually on every page). If done properly, it will be unobtrusive but very useful. Beyond that, number every page with an outline number (chapter number.page number) and/or a location number (page x of y).
Tuesday, March 17, 2009
Including Surveys with Courses
The most difficult part of online surveys is that they are so optional. If the survey is part of a required course, response rates can be raised as long as the end-users know that their survey responses are also required, and that there will be consequences for not answering (or rewards for answering).
One trick to get assessment data from courses with tests is to mix the survey questions in with the required test page(s). That is, if some of the questions on a test are survey questions (there is no "correct" answer), but the other questions are graded, there will be a significantly higher response rate to those questions. If your test/survey software only provides one question at a time, this feature can be bypassed by the end-users. Software like the ReadyGo Server Side Testing module can track pages with combinations of test and survey questions, and therefore get higher response rates.
Another trick is to make survey responses a requirement to be able to take the final exam in a required course. Incentives such as a raffle for survey participants provide a more positive form of invitation to take the survey.
Thursday, March 12, 2009
Assuring security when giving tests
The requirements of online testing and absolute security/verification are generally incompatible. As noted, the user and their assistant simply need to be at the terminal at the same time in order to cheat the system. If absolute verification that the individual is responsible for every answer is required, the only guaranteed solution is proctored testing with periodic checks using a government-issued photo-identification card.
If the user is trying to cheat the on-line system, there are many ways they can bypass all the security measures proposed by the various vendors:
1. The registered user and their assistant can both be at the terminal/computer taking the test. Any biometric or password-based verification can be completed by the real user, while at the same time their assistant provides them the answers.
2. The registered user can share their password with their assistant, and thus any password-based system can be fooled.
3. A camera taking pictures of the end-user's environment will not pick up the Bluetooth headset through which they are being fed the answers.
Instead of trying to increase security, the approaches we have found to work well include:
1. Give the required questions multiple times throughout the course rather than just at the end. For example, after a page in the middle of a chapter, give a short quiz with the required question. Feed the answer to the student if they got it wrong. At the end of the chapter, repeat the question. Finally, in the final exam, give the question. Make all tests required, but only the last one counts. If, by the third attempt, the user doesn't answer correctly, you have a bigger problem with that employee, and alternative remediation will be necessary.
2. Require the user/employee to sign and send in an affidavit that their responses were their own work. This provides legal/regulatory compliance that the organization has done their due diligence.
Most end-users (especially those who cheat) will opt for the easier alternative. If the tests are comprehensive but easier (and less annoying) than the effort to cheat, they will generally choose to just take the test. We have seen that a 45-minute PowerPoint presentation followed by a required test is a really bad way to present content: users walk away until the automated part is finished, and then come back to just take the test. Instead, make it so that the user can navigate anywhere in the course. If they fail an exam, send them back to the content so that they have to review the material.
If you make a reasonable effort, you should be able to satisfy the regulatory requirements. If requirements cannot be met this way, consider proctored examinations in a controlled environment.
Thursday, February 5, 2009
ReadyGo's web site
We just updated our web site. You should take a look. We used ReadyGo WCB/ReadyGo Mobile to create the site. The benefit of using ReadyGo's tools is that the site is ADA/Section 508 and W3C compliant for blind readers and works on all mobile devices that have a browser. The new look of the site also shows what you can do with ReadyGo's templates.
Visit the site at: http://www.readygo.com