AutoRob

AutoRob

Introduction to Autonomous Robotics

Michigan EECS 367



Robot Operating Systems


Michigan ROB 511



Fall 2020

Flipped Classroom Hybrid COVID-19 Edition



#ScholarStrike

In observance of the #ScholarStrike, the course Interactive Session on Wednesday September 9 was dedicated to a short presentation from Professor Jenkins ("That Ain't Right: AI Mistakes and Black Lives") and a discussion of the implications of Robotics and AI on society. This Inside Higher Ed article provides a helpful description of the larger context around this decision.



Fall 2020 Course Format

The AutoRob course will have a flipped classroom hybrid format this Fall semester. At their discretion, a student will be able to complete the course remotely in its entirety. Course meetings, quizzes, and office hours will be held in-person only as needed and with consideration of the public health situation. All lectures will be pre-recorded and available online through this course website. The course staff will be available to students through regularly scheduled all-class interactive sessions twice per week, a laboratory section once per week, and small interactive study pods led by a member of the course staff, as well as office hours that are scheduled as needed.

IMPORTANT
Students enrolled in EECS 367 or ROB 511 should complete the AutoRob Student Workflow Survey as soon as possible, in preparation for the first week of classes.


Introduction

AutoRob at the University of Michigan (EECS 367, ROB 511) is an introduction to autonomous robotics for building robot operating systems and applications that perform mobile manipulation tasks. AutoRob covers foundational topics in the modeling and control of autonomous robots and their instantiation as algorithms and computational systems. AutoRob has two sections: an undergraduate section offered as Introduction to Autonomous Robotics (EECS 367) and a graduate section offered as Robot Operating Systems (ROB 511) with expanded advanced material.

The AutoRob course can be thought of as the foundation to build "brains for robots." That is, given a robot as a machine with sensing, actuation, and computation, how do we build computational models, algorithms, programming environments, and applications that allow the robot to function autonomously? Such computation involves functions for robots to perceive the world, make decisions towards achieving a given objective, and transform actions into motor commands. These functions are essential for modern robotics, especially for mobile manipulators such as the pictured Fetch robot.

AutoRob focuses on the computational issues of modeling and control for autonomous robots with an emphasis on manipulation and mobility. Successful completion of AutoRob will result in the student having implemented code modules for "mobile pick-and-place". That is, given a robot and perception (or "full observation") of the robot's environment, the resulting code modules can enable the robot to pick up an object at an arbitrary location and place the object in a new location.

AutoRob projects ground course concepts through implementation in JavaScript/HTML5 supported by the KinEval code stencil (snapshot below from Mozilla Firefox). These projects will cover graph search path planning (A* algorithm), basic physical simulation (Lagrangian dynamics, numerical integrators), proportional-integral-derivative (PID) control, forward kinematics (3D geometric matrix transforms, matrix stack composition of transforms, axis-angle rotation by quaternions), inverse kinematics (gradient descent optimization, geometric Jacobian), and motion planning (simple collision detection, sampling-based motion planning). Additional topics that could be covered include potential field navigation, Cyclic Coordinate Descent, Newton-Euler dynamics, task and mission planning, Bayesian filtering, and Monte Carlo localization.

Concepts to build your own robot operating system

AutoRob aims to provide a general conceptual framework that enables students to build their own robot operating systems. Like computing operating systems, the fundamental goal of a robot operating system is to bridge the gap between robot hardware and application programs to control the robot purposefully. In other words, robot operating systems hide the gory details of managing robot devices, software processes, and (especially) low-level sensorimotor routines. This abstraction provides a platform for robot applications to run seamlessly across a wide variety of robots with different physical and electronic configurations.

The area of robot operating systems has emerged from pioneering work in robot middleware systems over the last 20 years. Lightweight Communications and Marshalling (LCM), Yet Another Robot Platform (YARP), the MOOS (Mission Oriented Operating Suite), JAUS-based systems, Player/Stage, and the well-branded Robot Operating System (ROS) are examples of leading robot middleware systems. Topics covered in AutoRob will help you understand the insides of all of these robot middleware systems, make them better, and develop the robot operating systems of the future. Brief vocational coverage of how to use and develop robot controllers with ROS and/or LCM will be included in this offering of AutoRob.

KinEval code stencil and programming framework

AutoRob projects will use the KinEval code stencil that roughly follows conventions and structures from the Robot Operating System (ROS) and Robot Web Tools (RWT) software frameworks, as widely used across robotics. These conventions include the URDF kinematic modeling format, ROS topic structure, and the rosbridge protocol for JSON-based messaging. KinEval uses threejs for in-browser 3D rendering. Projects also make use of the Numeric Javascript external library for select matrix routines, although other math support libraries are being explored. Auxiliary code examples and stencils will often use the jsfiddle development environment.
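For a concrete taste of the rosbridge conventions KinEval follows, the sketch below opens a WebSocket connection to a rosbridge server and subscribes to a topic using the protocol's JSON "subscribe" operation. The server URL and topic name here are illustrative assumptions, not part of any course project.

    // minimal sketch of rosbridge-style JSON messaging over a WebSocket;
    // the URL and topic below are assumptions for illustration
    var ws = new WebSocket("ws://localhost:9090");  // rosbridge's default port

    ws.onopen = function () {
        // subscribe to the robot's joint states with a rosbridge "subscribe" op
        ws.send(JSON.stringify({
            op: "subscribe",
            topic: "/joint_states",
            type: "sensor_msgs/JointState"
        }));
    };

    ws.onmessage = function (event) {
        var msg = JSON.parse(event.data);  // each message arrives as a JSON string
        console.log(msg.topic, msg.msg);   // published data is in the "msg" field
    };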

You will use an actual robot (at least once)!

While AutoRob projects will be mostly in simulation, KinEval allows for your code to work with any robot that supports the rosbridge protocol, which includes any robot running ROS. Given a URDF description, the code you produce for AutoRob will allow you to view and control the motion of any mobile manipulation robot with rigid links. Your code will also be able to access the sensors and other software services of the robot for your continued work as a roboticist.

For Fall 2020, the course staff is exploring ways to enable students to access physical robots at least once during the course in a manner that is in compliance with safe research operations and public health guidelines.

Related Courses

AutoRob is a computing-friendly pathway into robotics. The course aims to provide broad exposure to autonomous robotics, but it does not cover the whole of robotics. The scope of AutoRob is robot modeling and control. AutoRob is well-suited as preparation for a Major Design Experience, such as in EECS 467 (Autonomous Robotics Laboratory). In complement to AutoRob, EECS 467 and ROB 550 (Robotics Systems Laboratory) provide more extensive hands-on experience with a small set of real robotic platforms. In contrast, AutoRob emphasizes the creation of a general modeling and control software stack in simulation, with interfaces to work with a diversity of real robots.

AutoRob is a computation-focused alternative to ME 567/ROB 510 (Robot Kinematics and Dynamics). ME 567 is a more in-depth mathematical analysis of dynamics and control with extensive use of Denavit-Hartenberg parameters for kinematics. AutoRob has a greater emphasis on algorithms for autonomous path and motion planning along with use of quaternions and matrix stacks for kinematics. In AutoRob, coding is believing.

AutoRob is well suited to complement courses that provide more depth in algorithmic robotics (EECS 498) and motion planning (EECS 598).

AutoRob is well suited to complement courses that provide more depth in embedded systems (EECS 367) and advanced topics in embedded system design (EECS 473).

AutoRob can be taken in parallel with ROB 502 (Programming for Robotics). AutoRob makes some accommodations for students with less programming experience than provided in common data structures courses, such as EECS 280. However, for students new to computer programming, it is recommended to take AutoRob after ROB 502, EECS 280, or EECS 402.

AutoRob is also a complement to courses covering computational perception (EECS 568 Mobile Robotics, EECS 442 Computer Vision), robot building (ME 552 Mechatronics, EECS 498 Hands-on Robotics), robot simulation (ME 543 Analytical and Computational Dynamics), controls systems (EECS 460 Control Systems Analysis and Design, EECS 461 Embedded Control Systems), and artificial intelligence (EECS 492 Introduction to Artificial Intelligence), as well as general graduate courses in robotics (ROB 501 Math for Robotics, ROB 550 Robotic Systems Laboratory).

Commitment to equal opportunity

We ask that all students treat each other with respect. As indicated in the General Standards of Conduct for Engineering Students, this course is committed to a policy of equal opportunity for all persons and does not discriminate on the basis of race, color, national origin, age, marital status, sex, sexual orientation, gender identity, gender expression, disability, religion, height, weight, or veteran status. Please feel free to contact the course staff with any problem, concern, or suggestion. The University of Michigan Statement of Student Rights and Responsibilities provides greater detail about expected behavior and conflict resolution in our community of scholarship.

Accommodations for Students with Disabilities

If you believe an accommodation is needed for a disability, please let the course instructor know at your earliest convenience. Some aspects of this course, including the assignments, the in-class activities, and the way the course is usually taught, may be modified to facilitate your participation and progress. As soon as you make us aware of your needs, the course staff can work with the Services for Students with Disabilities (SSD, 734-763-3000) office to help us determine appropriate academic accommodations. SSD typically recommends accommodations through a Verified Individualized Services and Accommodations (VISA) form. Any information you provide is private and confidential and will be treated as such. For special accommodations for any academic evaluation (exam, quiz, project), the course staff will need to receive the necessary paperwork issued from the SSD office by September 20, 2020.

Student mental health and well being

The University of Michigan is committed to advancing the mental health and wellbeing of its students. If you or someone you know is feeling overwhelmed, depressed, and/or in need of support, services are available. For help, please contact one of the many resources offered by the University that are committed to helping students through challenging situations, including: U-M Psychiatric Emergency (734-996-4747, 24-hour), Counseling and Psychological Services (CAPS, 734-764-8312, 24-hour), and the C.A.R.E. Center on North Campus. You may also consult University Health Service (UHS, 734-764-8320) as well as its services for alcohol or drug concerns. There is also a more comprehensive listing of mental health resources available on and off campus.

Remote Accessibility during COVID-19

The AutoRob Course Staff is committed to ensuring all students enrolled in the course have access to all course materials, regardless of their location. If special accommodations are needed for remote access of course information, please contact a member of the course staff via email or the discussion server. For locations where it is permitted, the University of Michigan Virtual Private Network is available for use by members of the university.

Course Staff

Faculty Instructor

Chad Jenkins
ocj addrsign umich
GitHub: ohseejay
Bitbucket: ohseejay
Office: Beyster 3644
Office Hours: Monday 2:30-5pm, Wednesday 2-3:15pm
Office hours will be virtual-only unless students are notified otherwise.

Graduate Student Instructors

Zheming Zhou
zhezhou addrsign umich
GitHub: zhezhou1993
Bitbucket: zhezhou
Office Hours: Tuesday 1-2pm

Elizabeth Goeddel
mamantov addrsign umich
GitHub: emgoeddel
Bitbucket: emgoeddel
Office Hours: Tuesday 5:30-6:30pm

Xiaotong Chen
cxt addrsign umich
GitHub: cxt98
Bitbucket: cxt98
Office Hours: Thursday 4-5pm

Course Meetings

The AutoRob course will have a flipped classroom hybrid format this Fall semester. At their discretion, a student will be able to complete the course remotely in its entirety. Course meetings, quizzes, and office hours will be held in-person only as needed and with consideration of the public health situation. All lectures will be pre-recorded and available online through this course website. The course staff will be available to students through regularly scheduled all-class interactive sessions twice per week, a laboratory section once per week, and office hours with small "study pods" that are scheduled as needed.

Course Lectures will be recorded and available on this site as listed in the course schedule.

Course Interactive Sessions will be dedicated to addressing general questions and comments regarding course concepts in relation to lectures and projects, quiz synchronization, and course administrivia.
Monday and Wednesday 1:30-2:00 MMT (Michigan Mean Time)
(remote location to be determined)

Laboratory Sections (EECS 367, optional for ROB 511) will provide guidance through the workflow of course projects.
Friday 2:30-3:20 MMT (Michigan Mean Time)
(remote location to be determined)

Interactive Study Pods

Small pods of students will be assigned to regularly meet together with a member of the course staff once per week. Pod assignments will be constant for the duration of the semester so that students may meet remotely with the same group of peers at a consistent time throughout the class. Course staff will make pod assignments with students' preferred time zones in mind.
(time and location to be assigned)

Students within a study pod are expected to actively engage and discuss with other members of the pod regarding course and project concepts, interactive debugging (without sharing code directly), and other topics of relevance to the course. Collaborative discussion and assistance is highly encouraged, within the bounds of the course collaboration policy.

Students are expected to be prepared for their study pod meetings with their most important question(s) and an optional status update for the course staff member joining the meeting, and students should also be prepared to demonstrate the current state of their project implementation during pod meetings.

Discussion servers

Microsoft Teams

The AutoRob Microsoft team will be used for course-related discussions and announcements. Microsoft Teams is a cloud-hosted online discussion and collaboration system with functionality that resembles Internet Relay Chat (IRC). Microsoft Teams clients are available for most modern operating systems as well as through the web. Microsoft Teams is FERPA compliant.

Students enrolled in AutoRob will receive an invitation to the AutoRob MS team. Upon accepting this invitation, you should be automatically subscribed to 9 channels:

Each student pod will also have a dedicated discussion channel.

Actively engaging in course discussions is a great way to become a better roboticist.

Discord

The AutoRob at Michigan Discord Server is optionally available for informal discussion for participants both at Michigan and beyond. As Discord is not necessarily FERPA compliant, there can be no discussion on this server regarding grades, formal student records, or any student who is not a member of the server.

Prerequisites

This course has recommended prerequisites of "Linear Algebra" and "Data Structures and Algorithms", or permission from the instructor.

Programming proficiency: EECS 281 (Data Structures and Algorithms), EECS 402 (Programming for Scientists and Engineers), ROB 502 (Programming for Robotics), or equivalent proficiency in data structures and algorithms should provide an adequate programming background for the projects in this course. Interested students should consult with the course instructor if they have not taken EECS 281, EECS 402, ROB 502, or their equivalent, but have some other notable programming experience.

Mathematical proficiency: Math 214, 217, 417, 419 or proficiency in linear algebra should provide an adequate mathematical background for the projects in this course. Interested students should consult with the course instructor if they have not taken one of the listed courses or their equivalent, but have some other strong background with linear algebra.

Recommended optional proficiency: Differential equations, Computer graphics, Computer vision, Artificial intelligence

The instructor will do their best to cover the necessary prerequisite material, but complete coverage cannot be guaranteed. Linear algebra will be used extensively in relation to 3D geometric transforms and systems of linear equations. Computer graphics is helpful for under-the-hood understanding of threejs. Computer vision and AI share common concepts with this course. Differential equations are used to cover modeling of motion dynamics and inverse kinematics, but are not explicitly required.

Textbook

AutoRob is compatible with both the Spong et al. and Corke textbooks (listed below), although only one of these books is needed. Depending on individual styles of learning, one textbook may be preferable over the other. Spong et al. is the listed required textbook for AutoRob and is supplemented with additional handouts. The Corke textbook provides broader coverage with an emphasis on intuitive explanation. A pointer to the Lynch and Park textbook is provided for an alternative perspective on robot kinematics that goes deeper into spatial transforms in exponential coordinates. Lynch and Park also provides some discussion and context for using ROS. This semester, AutoRob will not officially support the Lynch and Park book, but will make every effort to work with students interested in using this text.

Robot Modeling and Control
Mark W. Spong, Seth Hutchinson, and M. Vidyasagar
Wiley, 2005
Available at Amazon

Alternate textbooks

Robotics, Vision and Control: Fundamental Algorithms in MATLAB
Peter Corke
Springer, 2011

Modern Robotics: Mechanics, Planning, and Control
Kevin M. Lynch, Frank C. Park
Cambridge University Press, 2017

Optional texts

JavaScript: The Good Parts
Douglas Crockford
O'Reilly Media / Yahoo Press, 2008

Principles of Robot Motion
Howie Choset, Kevin M. Lynch, Seth Hutchinson, George A. Kantor, Wolfram Burgard, Lydia E. Kavraki, and Sebastian Thrun
MIT Press, 2005

Projects and Grading

The AutoRob course will assign 7 projects (6 programming, 1 oral) and 5 quizzes. Each project has been decomposed into a collection of features, each of which is worth a specified number of points. AutoRob project features are graded as "checked" (completed) or "due" (incomplete). Prior to its due date, the grading status of each feature will be in the "pending" state. In terms of workload, each project is expected to take approximately 4 hours of work on average (as a rough estimate). Each quiz will consist of 4 short questions that will be within the scope of previously graded projects. In other words, each quiz question should be readily answerable given knowledge from correctly completing projects on time.

Individual final grades are assigned based on the sum of points earned from coursework (detailed in subsections below). The timing and due dates for course projects and quizzes will be announced on an ongoing basis. The official due date of a project is listed with its project description, such as for Assignment 1: Path Planning. Due dates listed in the course schedule are tentative. All project work must be checked by the final grading deadline to receive credit.

EECS 367: Introduction to Autonomous Robotics

In the undergraduate section, each fully completed project is weighted as 12 points and each correctly answered quiz question is weighted as 1 point. Based on this sum of points from coursework, an overall grade for the course is earned as follows: An "A" grade in the course is earned if graded coursework sums to 95 points or above; a "B" grade in the course is earned if graded coursework sums to 83 points or above; a "C" grade in the course is earned if graded coursework sums to 73 points or above. The instructor reserves the option to assign appropriate course grades with plus or minus modifiers.

ROB 511: Robot Operating Systems

In the graduate section, each fully completed project is weighted as 18 points, and each correctly answered quiz question is weighted as 1 point. Students in this section have the opportunity to earn 4 additional points through an advanced extension of a course project. Examples of advanced extensions include implementation of an LU solver for linear systems of equations, inverse kinematics by Cyclic Coordinate Descent, one additional motion planning algorithm, point cloud segmentation, and a review of a current research publication in robotics. Advanced extensions are due by the course final grading deadline and do not need to be completed by the deadlines of each assignment.

Based on the sum of points earned from coursework, an overall grade for the course is earned as follows: An "A" grade in the course is earned if graded coursework sums to 138 points or above; a "B" grade in the course is earned if graded coursework sums to 120 points or above; a "C" grade in the course is earned if graded coursework sums to 105 points or above. The instructor reserves the option to assign appropriate course grades with plus or minus modifiers.

Project Rubrics (tentative and subject to change)

The following project features are planned for AutoRob this semester. Students enrolled in the graduate section will complete all features. Students in the undergraduate section are not expected to implement features for the graduate section.

Points  Sections  Feature

Assignment 1: 2D Path Planning
4       All       Heap implementation
8       All       A-star search
2       Grad      BFS
2       Grad      DFS
2       Grad      Greedy best-first

Assignment 2: Pendularm
4       All       Euler integrator
4       All       Velocity Verlet integrator
4       All       PID control
1       Grad      Verlet integrator
2       Grad      RK4 integrator
3       Grad      Double pendulum

Assignment 3: Forward Kinematics
2       All       Core matrix routines
8       All       FK transforms
2       All       Joint selection/rendering
2       Grad      Base offset transform
4       Grad      New robot definition

Assignment 4: Dance Controller
6       All       Quaternion joint rotation
2       All       Interactive base control
2       All       Pose setpoint controller
2       All       Dance FSM
2       Grad      Joint limits
2       Grad      Prismatic joints
2       Grad      Fetch rosbridge interface

Assignment 5: Inverse Kinematics
6       All       Manipulator Jacobian
3       All       Gradient descent with Jacobian transpose
3       All       Jacobian pseudoinverse
6       Grad      Euler angle conversion

Assignment 6: Motion Planning
4       All       Collision detection
2       All       2D RRT-Connect
6       All       Configuration space RRT-Connect
6       Grad      2D RRT-Star

Participation
4       All       Participation in assigned pod and course discussion

Optional
4       Grad      Selected Advanced Extensions

Project Submission and Regrading

Git repositories will be used for project implementation, version control, and submission for grading. The implementation of your project is submitted through an update to the master branch of your designated repository. Updates to the master branch must be committed and pushed prior to the due date for each assignment for any consideration of full credit. Your implementation will be checked out and executed by the course staff. Through your repository, you will be notified by the course staff whether your implementation of assignment features is sufficient to receive credit.

Continuous Integration Project Grading

For the Fall 2020 semester, AutoRob will begin its use of "continuous integration grading" for student project implementations. The "CI grader" will automatically pull code from your repository, run tests for all assignments that are due by the current time, and push the results of grading back to your repository. Please remember to not break the functionality of project features that are already working in your code. The CI grader will run at regularly scheduled intervals each day. The CI grader is a new aspect of the AutoRob course, introduced as an innovation for scaling the course. Thus, grades automatically generated by the CI grader will be considered tentative and reviewable by the course staff. Your feedback, understanding, and help to improve the CI grader will be greatly appreciated by the course staff.

Late Policy

Do not submit assignments late. The course staff reserves the right to not grade late submissions. The course instructor reserves the right to decline late submissions and/or adjust partial credit on regraded assignments.

If granted by the course instructor, late submissions can be graded for partial credit, with the following guidelines. Submissions pushed within two weeks past the project deadline will be graded for 80% credit. Submissions pushed within four weeks of the project deadline will be graded for 60% credit. Submissions pushed at any time before the semester project submission deadline (December 8, 2020) will be considered for 50% credit. As a reminder, the course instructor reserves the right to decline late submissions and/or adjust partial credit on regraded assignments.

Regrading Policy

The regrading policy allows for submission and regrading of projects up through the final grading of projects, which will be December 8 for the Fall 2020 Semester. This regrading policy will grant full credit for project submissions pushed to your repository before the corresponding project deadline. If a feature of a graded project is returned as not completed (or "DUE"), your code can be updated for consideration at 80% credit. This code update must be pushed to your repository within two weeks from when the originally graded project was returned. Regrades of projects updated beyond this two week window can receive at most 60% credit. The course staff will allow one regrade for each grading iteration.

Completed Features Policy

All checked features must continue to function properly in your repository up through the final grading deadline (December 8, 2020). Checked features that do not function properly for subsequent projects will be treated as a new submission and subject to the regrading policy.

Final Grading

All grading will be finalized on December 8, 2020. Regrading of specific assignments can be done upon request during office hours. No regrading will be done after grades are finalized.

Repositories

You are expected to provide a private git repository for your work in this course with the course instructor added as a read/write collaborator. If needed, the course staff can assist in the setup of an online git repository through providers such as GitHub or Bitbucket. All Michigan Engineering students have access to an account on the internal EECS gitlab server at no additional cost.

There are many different tutorials for learning how to use git repositories. For those new to version control, we realize git has a significant startup overhead and learning curve, but it is definitely worth the effort. The first laboratory discussion in AutoRob will be dedicated to installing and using git. The AutoRob course site also has its own basic quick start tutorial. The EECS gitlab server has a basic quick start tutorial. The Pro Git book provides an in-depth introduction to git and version control. As different people often learn through different styles, the Git Magic tutorial has also proved quite useful when a different perspective is needed. git: the simple guide has often been a great and accessible quick start resource.

We expect students to use these repositories for collaborative development as well as project submission. It is the responsibility of each student group to ensure their repository adheres to the Collaboration Policy and submission standards for each assignment. Submission standards and examples will be described for each assignment as needed.

IMPORTANT: Do not modify the directory structure in the KinEval stencil. For example, the file "home.html" should appear in the top level of your repository. Repositories that do not follow this directory structure will not be graded.

Code Maintenance Policy and Branching

This section outlines expectations for maintenance of source code repositories used by students for submission of their work in this course. Repositories that do not maintain these standards will not be graded at the discretion of the course staff.

Code submitted for projects in this course must reside in the master branch of your repository. The directory structure provided in the KinEval stencil must not be modified. For example, the file "home.html" should appear in the top level directory of your repository.

The master branch must always maintain a working (or stable) version of your code for this course. Code in the master branch can be analyzed at any time with respect to any assignment whose due date has passed. Improperly functioning code on the master branch can affect the grading of an assignment (even after the assignment due date) up to the assignment of final grades.

The master branch must always be in compliance with the Michigan Honor Code and Michigan Honor License, as described below in the course Collaboration Policy. To be considered for grading, a commit of code to your master branch must be signed with your name and the instructor name at the bottom of the file named LICENSE with an unmodified version of the Michigan Honor License. Without a properly asserted license file, a code commit to your repository will be considered an incomplete submission and will be ineligible for grading.

If advanced extension features have been implemented and are ready for grading, such features must be listed in the file "advanced_extensions.html" in the top level directory of the master branch. Advanced extension features not listed in this file may not be graded at the discretion of the course staff.

Branching

Students are encouraged to update their repository often with the help of branching. Branching spawns a copy of the code in your master branch into a new branch for development, and merging then integrates these changes back into master once they are complete. For example, you can create an Assignment-2 branch for your work on the second project while it is under development and any changes may be experimental, which will keep your master branch stable for grading. Once you are confident in your implementation of the second project, you can merge your Assignment-2 branch back into the master branch. The master branch at this point will have working stable versions of the first and second projects, both of which will be eligible for grading. Similarly, an Assignment-3 branch can be created for the next project as you develop it, and then the Assignment-3 branch can be merged into the master branch when ready for grading. This configuration allows your work to be continually updated and built upon such that versions are tracked and grading interruptions are minimized.
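For those newer to git, a typical branch-and-merge sequence for the example above might look like the following command sequence (the committed file name is illustrative):

    git checkout -b Assignment-2            # create and switch to a development branch
    # ...edit and test your code, committing as you go...
    git add project_pendularm/pendularm1.html
    git commit -m "Assignment 2 in progress"
    git checkout master                     # return to the stable branch
    git merge Assignment-2                  # integrate the completed work into master
    git push origin master                  # publish the stable branch for grading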

Collaboration Policy

This collaboration policy covers all course material and assignments unless otherwise stated. All submitted assignments for this course must adhere to the Michigan Honor License (the 3-Clause BSD License plus two academic integrity clauses).

Course material, concepts, and documentation may be discussed with anyone. Discussion during quizzes is not allowed with anyone other than a member of the course staff. Assignments may be discussed with other students at the conceptual level. Discussions may make use of a whiteboard or paper. Discussions with others (or people outside of your assigned project group) cannot include writing or debugging code on a computer or collaborative analysis of source code. You may take notes away from these discussions, provided these notes do not include any source code.

The code for your implementation may not be shown to anyone outside of your assigned project group, whether by granting access to repositories or through careless lack of protection. For example, you do not need to hide the screen of your computer from anyone, but you should not attempt to show anyone your code. When you are done using any robot device such that another group may use it, you must remove all code you have put onto the device. You may not share your code with others outside of your group. At any time, you may show others the implemented program running on a device or simulator, but you may not discuss specific debugging details about your code while doing so.

This policy applies to collaboration during the current semester and any past or future instantiations of this course. Although course concepts are intended for general use, your implementation for this course must remain private after the completion of the course. It is expressly prohibited to share any code previously written and graded for this course with students currently enrolled in this course. Similarly, it is expressly prohibited for any students currently enrolled in this course to refer to any code previously written and graded for this course.

IMPORTANT: To acknowledge compliance with this collaboration policy, append your name to the file "LICENSE" in the main directory of your repository with the following text. This appending action is your attestation of your compliance with the Michigan Honor License and the Michigan Honor Code statement:

 
"I have neither given nor received unauthorized aid on this course project implementation, nor have I concealed any violations of the Honor Code."  

This attestation of the honor code will be considered updated with the current date and time of each commit to your repository. Repository commits that do not include this attestation of the honor code will not be graded at the discretion of the course instructor.

Should you fail to abide by this collaboration policy, you will receive no credit for this course. The University of Michigan reserves the right to pursue any means necessary to ensure compliance. This includes, but is not limited to, prosecution through The College of Engineering Honor Council, which can result in your suspension or expulsion from the University of Michigan. Please refer to the Engineering Honor Council for additional information.

Course Schedule (tentative and subject to change)

Preview slides from lectures during the Fall 2019 offering of AutoRob are provided. These preview slides will be replaced with recorded lectures for Fall 2020 as the videos become available.

Date
Topic
Reading
Project
Aug 31 Initialization: "So, where is my robot?", "What is a Robot OS?", Course administration and logistics
[Session recording (UM only)]
Spong Ch.1
Corke Ch.1
Setup git repository
Out: Path Planning
What is a robot?: Brief history and definitions for robotics
Sep 2 Path Planning: Navigation as graph search, DFS, BFS, Dijkstra shortest paths, A-star, Greedy best first, Priority queues and binary heaps
[Lecture Video]
Wikipedia
Sep 4 Lab Session: Git-ing started with git, JavaScript, and KinEval
[Session recording (UM only)]
Week 2
Sep 7 No course meeting - Labor Day
Sep 9 #ScholarStrike Presentation
That Ain't Right: AI Mistakes and Black Lives
(Zoom Version)
False AI arrest of Robert Williams
BlackInComputing.org
BlackInEngineering.org
Sep 10 JavaScript and AutoRob workflow: Project workflow with git, JS/HTML5 tutorial, Document Object Model, Version Control, LaTeX math mode, Licensing, Michigan Honor License
[Lecture Video]
Crockford,
HTML Sandbox,
hello.html,
JavaScript by Example,
hello_anim,
hello_anim_text
Sep 11 367 Lab: KinEval Path Planning code overview
[Session recording (UM only)]
Week 3
Sep 14 Dynamical Simulation: Simple pendulum, Lagrangian equation(s) of motion, Initial value problem, Explicit integrators: Euler, Verlet, and Runge-Kutta 4, Double pendulum
[Lecture Video]
[errata]
[Session recording (UM only)]
Spong Ch.7 | Corke Ch.9
Euler's Method
Verlet Integration,
Runge-Kutta;
Witkin&Baraff 1998: Dynamics
Witkin&Baraff 1998: Integrators
Sep 16 Quiz 1 Due: Path Planning
Out: Pendularm
Sep 18 367 Lab: pendularm1.html code overview
[Session recording (UM only)]
Week 4
Sep 21 Motion Control: Cartesian vs. generalized coordinates, open-loop vs. closed-loop control, PID control; Rigid body dynamics
[Lecture Video]
[Session recording (UM only)]
Spong 6.3,
Vondrak+ 2012

Astrom Ch. 6
Sep 23 Linear Algebra Refresher: Systems of linear equations, vector spaces and operations, least squares approximations
Spong A-B
Corke D
Sep 25 367 Lab: Pendularm Support
[Session recording (UM only)]
Week 5
Sep 28 Forward Kinematics: Kinematic chains, URDF, homogeneous transforms, matrix stack traversal, D-H convention
[Lecture Video]
Spong 2, 3.1, 3.2
Corke 7.1-2
Sep 30 Quiz 2 Due: Pendularm
Out: Forward Kinematics
Oct 2 367 Lab: KinEval and urdf.js code overview
[Session recording (UM only)]
Week 6
Oct 5 Axis-angle Rotation and Quaternions: Motors, Euler angles, gimbal lock, Rodrigues rotation, rotation in complex spaces, Dual quaternions and screw coordinates
[Lecture Video]
Handout 1, 2
Daniilidis 1999
Corke 2.2-3
Oct 7 Reactive Controllers: Reactive and Deliberative Decision Making, Finite State Machines, Subsumption Architecture
Brooks 1986, Mataric 1992, Platt+ 2004, Cunningham+ 2015
Oct 9 367 Lab: Quaternions in KinEval
Extended office hours
Week 7
Oct 12 Robot Middleware: Hardware Abstraction, ROS, LCM, Publish-subscribe messaging, rosbridge, Client-server messaging
Quigley+ 2009, Huang+ 2010, Toris+ 2015
Oct 14 Inverse Kinematics 1 - Closed-form: Joint vs. Endeffector control, Planar 2-link arm, Closed form solutions

[Lecture recording (UM only)]
Spong 3.3
Corke 7.3
Due: Forward Kinematics
Out: Dance Contest
IK robot game
Oct 16 367 Lab: KinEval pose parameters and HTML5 audio
[Session recording (UM only)]
Week 8
Oct 19 No course meeting - Fall Study Break
Oct 21 Inverse Kinematics 2 - Non-linear Optimization: Gradient descent, Manipulator Jacobian, Jacobian transpose and pseudoinverse, Cyclic Coordinate Descent
[Lecture recording (UM only)]
Spong 4, Wang&Chen 1991, Buss 2009, Beeson+ 2015
Corke 8
Oct 23 367 Lab: rosbridge/FK: connect your code to a real robot
Extended office hours
Week 9
Oct 26 Bug Algorithms: Reaction vs. Deliberation revisited, Bug[0-2], Tangent Bug
Lumelsky+ 1986, Kamon+ 1996
Corke 5
Oct 28 Quiz 3
Oct 30 367 Lab: KinEval IK control flow and parameters
[Session recording (UM only)]
Due: Forward Kinematics, Dance Contest
Out: Inverse Kinematics
Week 10
Nov 2 Configuration Spaces: Curse of dimensionality, Configuration space vs. Workspace, Minkowski planning, Costmaps, Holonomicity
Spong 5
Corke 4, 5
Nov 3 Election Day
Nov 4 ROS/catkin tutorial session
Example tutorial result
Nov 6 367 Lab: ROS tutorial
[Session recording (UM only)]
Tutorial code,
ROS tutorials
Week 11
Nov 9 Sampling-based Planning: Probabilistic roadmaps, RRT-based motion planning
Kavraki+ 1996, Kuffner+ 2000, McMahon+ 2018
Potential fields: Gradient descent revisited, local search, downhill simplex, Wavefront planning
Khatib 1986, Jarvis 1993, Zelinsky 1992
Nov 11 Quiz 4
Due: Inverse Kinematics
Out: Motion Planning
Collision Detection: 3D Triangle-Triangle Testing, Oriented Bounding Boxes, Axis-Aligned Bounding Boxes, Separating Axis Theorem
Gottschalk+ 1996, Moller 1997
Veterans Day
Nov 13 367 Lab: search_canvas.html revisited for 2D RRT
Extended office hours
Week 12
Nov 16 Collision Detection: 3D Triangle-Triangle Testing, Oriented Bounding Boxes, Axis-Aligned Bounding Boxes, Separating Axis Theorem
Robotics Pathways Speaker: Dr. Kimberly Hambuchen
[Session recording (UM only)]
Nov 18 Quiz 4 Due: Inverse Kinematics
Out: Motion Planning,
Best Use of Robotics
Robotics Pathways Speaker: Shiva Ghose
[Session recording (UM only)]
Nov 20 367 Lab: KinEval RRT stencil and AABB collision detection
[Session recording (UM only)]
Week 13
Nov 23-27 Course meeting cancelled - Thanksgiving Recess
Week 14
Nov 30 Robotics Pathways Speaker: Professor Maja Mataric
Dec 2 Quiz 5 Due: Motion Planning
Selected topics for further study
3D Point Cloud Segmentation: Point cloud segmentation, Principal Components Analysis, Connected components
PCA, Rusu+ 2008, ten Pas+ 2014
Task Planning Overview: Decision making revisited, declarative programming, axiomatic state, planning operators
Fikes+ 1971, Laird+ 1987, Trafton+ 2013, Zeng+ 2018
Localization and Mapping Overview: Bayes rule, Bayesian filtering, Monte Carlo Localization, Factor Graphs, SGD-SLAM, Scene estimation
Dellaert+ 1999, Olson+ 2006, Sui+ 2017
Dec 4 367 Lab: Extended office hours
Due: Best Use of Robotics
Week 15
Dec 7 Quiz 5 Due: Motion Planning
Best Use of Robotics Highlights Screening
Dec 11 Grading finalized

Slides from this course borrow from and are indebted to many sources from around the web. These sources include a number of excellent robotics courses at various universities.

Assignment 1: Path Planning

Due 11:59pm, Wednesday, September 16, 2020

The objective of the first assignment is to implement a collision-free path planner in JavaScript/HTML5. Path planning is used to allow robots to autonomously navigate in environments from previously constructed maps. A path planner essentially finds a set of waypoints (or setpoints) for the robot to traverse and reach its goal location without collision. As covered in other courses (EECS 467, ROB 550, or EECS 568), such maps can be estimated through methods for simultaneous localization and mapping. Below is an example from EECS 467 where a robot performs autonomous navigation while simultaneously building an occupancy grid map:

For this assignment, you will implement the planning part of autonomous navigation as an A-star graph search algorithm. Unlike in the above video, where the map is built as the robot explores, you will be given a complete map of the robot's world to run A-star on. A-star infers the shortest path from a start to a goal location in an arbitrary 2D world with a known map (or collision geometry). This A-star implementation will consider locations in a uniformly spaced, 4-connected grid. A-star requires an admissible heuristic, which can be the Euclidean distance to the goal in your implementation. You will implement a heap data structure as a priority queue for visiting locations in the search grid.

If properly implemented, the A-star algorithm should produce the following path (or path of similar length) using the provided code stencil:

Features Overview

This assignment requires the following features to be implemented in the corresponding files in your repository:

Points distributions for these features can be found in the project rubric section. More details about each of these features and the implementation process are given below.

Cloning the Stencil Repository

The first step for completing this project (and all projects for AutoRob) is to clone the KinEval stencil repository. The appended git quick start below is provided for those unfamiliar with git to perform this clone operation, as well as to commit and push updates for project submission. IMPORTANT: the stencil repository should be cloned and not forked.

Throughout the KinEval code stencil, there are markers with the string "STENCIL" for code that needs to be completed for course projects. For this assignment, you will write code where indicated by the "STENCIL" marker in "tutorial_heapsort/heap.js" and "project_pathplan/graph_search.js".

Heap Sort Tutorial

The recommended starting point for this assignment is to complete the heap sort implementation in the "tutorial_heapsort" subdirectory of the stencil repository. In this directory, a code stencil in JavaScript/HTML5 is provided in two files: "heapsort.html" and "heap.js". Comments are provided throughout these files to describe the structure of JavaScript/HTML5 and its programmatic features.

If you are new to JavaScript/HTML5, there are other tutorial-by-example files in the "tutorial_js" directory. Any of these files can be run by simply opening them in a web browser. Note that these are examples only, and there are no assignment requirements in the "tutorial_js" files.

Opening "heapsort.html" will show the result of running the incomplete heap sort implementation provided by the code stencil:

To complete the heap sort, implement the heap operations in "heap.js" at the locations marked "STENCIL". In addition, the inclusion of "heap.js" in the execution of the heap sort will require modification of "heapsort.html".
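As a rough sketch of the kind of heap operation involved, a min-heap insert pushes a new element onto the end of the array and swaps it upward until the heap property holds; the function name and array-based interface below are illustrative, so follow whatever interface the stencil expects:

    // illustrative min-heap insert over a plain JavaScript array;
    // the stencil's required function names and signatures may differ
    function minheap_insert(heap, new_element) {
        heap.push(new_element);
        var i = heap.length - 1;
        while (i > 0) {
            var parent = Math.floor((i - 1) / 2);
            if (heap[parent] <= heap[i]) break;  // heap property holds, stop
            var tmp = heap[parent];              // otherwise swap with parent
            heap[parent] = heap[i];
            heap[i] = tmp;
            i = parent;
        }
    }

Extraction works in reverse: remove the root (the minimum), move the last element to the root, and swap it downward until the heap property is restored.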

A successful heap sort implementation will show the following result for a randomly generated set of numbers:

Graph Search Stencil

For the path planning implementation, a JavaScript/HTML5 code stencil has been provided in the "project_pathplan" subdirectory. The main HTML file, "search_canvas.html", includes JavaScript code from "draw.js", "infrastructure.js", and "graph_search.js". Of these files, students must only edit "graph_search.js", although you may want to examine the other files to understand the available helper functions. Opening "search_canvas.html" in a browser should display an empty 2D world in an HTML5 canvas element.

There are five planning scenes that have been provided within this code stencil: "empty", "misc", "narrow1", "narrow2", and "three_sections". The choice of planning_scene can be specified through the URL given to the browser, as described in the usage notes in the file. For example, the URL "search_canvas.html?planning_scene=narrow2" will bring up the "narrow2" planning world shown above. Other execution parameters, such as start and goal location, can also be specified through the document URL. A description of these parameters is provided in "search_canvas.html".

This code stencil is implemented to perform graph search iterations interactively in the browser. The core of the search implementation is performed by the function iterateGraphSearch(), which performs a graph search iteration for a single location in the A-star execution. The browser implementation cannot use a while loop over search iterations, as in the common A-star implementation. Such a while loop would keep control within the search function and cause the browser to become non-responsive. Instead, iterateGraphSearch() gives control back to the main animate() function, which is responsible for updating the display and user interaction.

Within the code stencil, you will complete the functions initSearchGraph() and iterateGraphSearch() as well as add functions for heap operations. Locations in "graph_search.js" where code should be added are labeled with the "STENCIL" string.

The initSearchGraph() function creates a 2D array over graph cells to be searched. Each element of this array contains various properties computed by the search algorithm for a particular graph cell. Remember, a graph cell represents a square region of space in the 2D planning scene. The size of each cell is specified by the "eps" parameter, as the length of the square sides. initSearchGraph() must determine the start node for accessing the planning graph from the start pose of the robot, specified as a 2D vector in parameter "q_init". The visit queue is initialized with this start node.

The iterateGraphSearch() function should perform a search iteration towards the goal pose of the robot, specified as a 2D vector in parameter "q_goal". The search must find a goal node that allows for departure from the planning graph without collision. iterateGraphSearch() makes use of three provided helper functions. testCollision([x, y]) returns a boolean of whether a given 2D location, as a two-element vector [x, y], is in collision with the planning scene. draw_2D_configuration([x, y], type) draws a square at a given location in the planning world to indicate that the location has been visited by the search (type = "visited") or is currently in the planning queue (type = "queued"). Once the search is complete, drawHighlightedPathGraph(l) will render the path produced by the search algorithm between location l and the start location. The global variable search_iterate should be set to false when the search is complete to end the animation loop.
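Putting these pieces together, one A-star iteration might be sketched as follows. Helper names beyond those described above (the min-heap routines, grid lookup, and distance function) are illustrative assumptions, not the stencil's exact interface:

    // condensed sketch of one A-star iteration on a 4-connected grid, assuming
    // a min-heap visit queue keyed on node.priority and node distances
    // initialized to Infinity in initSearchGraph()
    function iterateGraphSearch() {
        var cur = minheap_extract(visit_queue);      // node with lowest f = g + h
        cur.visited = true;
        draw_2D_configuration([cur.x, cur.y], "visited");

        // simplified goal test; the search must actually find a node allowing
        // collision-free departure from the graph to q_goal
        if (euclidean_distance([cur.x, cur.y], q_goal) < eps) {
            drawHighlightedPathGraph(cur);           // trace path via parent links
            search_iterate = false;                  // ends the animate() loop
            return;
        }

        var offsets = [[0, 1], [0, -1], [1, 0], [-1, 0]];  // 4-connected neighbors
        for (var k = 0; k < offsets.length; k++) {
            var n = grid_neighbor(cur, offsets[k]);  // illustrative grid lookup
            if (n == null || n.visited || testCollision([n.x, n.y])) continue;
            var g_new = cur.distance + eps;          // uniform step cost
            if (g_new < n.distance) {
                n.distance = g_new;                  // cost from the start node
                n.priority = g_new + euclidean_distance([n.x, n.y], q_goal);
                n.parent = cur;                      // for path reconstruction
                minheap_insert(visit_queue, n);
                draw_2D_configuration([n.x, n.y], "queued");
            }
        }
    }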

Graduate Section Requirement

In addition to the A-star algorithm, students in the graduate section of AutoRob must also implement path planning by Depth-first search, Breadth-first search, and Greedy best-first search. An additional report is required in "report.html" (you will need to create this file) in the "project_pathplan" directory. This report must: 1) show results from executing every search algorithm with every planning world for various start and goal configurations and 2) synthesize these results into coherent findings about these experiments.

For effective communication, it is recommended to think of "report.html" like a short research paper: motivate the problem, set the value proposition for solving the problem, describe how your methods can address the problem, and show results that demonstrate how well these methods realize the value proposition. Visuals are highly recommended to complement this description. The best research papers can be read in three ways: once in text, once in figures, and once in equations. It is also incredibly important to remember that writing in research is about generalizable understanding of the problem more than a specific technical accomplishment.

Advanced Extensions

Advanced extensions can be submitted anytime before the final grading is complete. Concepts for several of these extensions will not be covered until later in the semester. Any new path planning algorithm must be implemented within its own ".js" file under the "project_pathplan" directory, and invoked through a parameter given through the URL. For example, the Bug0 algorithm must be invoked by adding the argument "&search_alg=Bug0" to the URL. Thus, a valid invocation of Bug0 for the Narrow2 world could use the URL:

"file:///project_pathplan/source_search_canvas.html?planning_scene=narrow2?search_alg="Bug0"

The same format must be used to invoke any other algorithm (such as Bug1, Bug2, TangentBug, Wavefront, etc.). Note that you will need to update the animate loop in draw.js to include new planning algorithms and update the main HTML file to include your new scripts, along with implementing the algorithms in their own files.

Of the 4 possible advanced extension points, two additional points for this assignment can be earned by implementing the "Bug0", "Bug1", "Bug2", and "TangentBug" navigation algorithms. The implementation of these bug algorithms must be contained within the file "bug.js" under the "project_pathplan" directory.

Of the 4 possible advanced extension points, two additional points for this assignment can be earned by implementing navigation by "Potential" fields and navigation using the "Wavefront" algorithm. The implementation of these potential-based navigation algorithms must be contained within the file "field_wave.js" under the "project_pathplan" directory.

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by implementing a navigation algorithm using a probabilistic roadmap ("PRM"). This roadmap algorithm implementation must be contained within the file "prm.js" under the "project_pathplan" directory.

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by implementing costmap functionality using morphological operators. Based on the computed costmap, the navigation routine would provide path cost in addition to path length for a successful search. The implementation of this costmap must be contained within the file "costmap.js" under the "project_pathplan" directory.

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by implementing a priority queue through an AVL tree or a red-black tree. The implementation of this priority queue must be contained within the file "balanced_tree.js" under the "project_pathplan" directory.

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by adapting the search canvas to plan between any locations in the map "bbb2ndfloormap.png" (provided in the stencil repository) when the "planning_scene" parameter is invoked as "BeysterFloor2".

Project Submission

For turning in your assignment, ensure your completed project code has been committed and pushed to the master branch of your repository.

To ensure proper submission of your assignments, please do the following:

If you are paying attention, you should also add a directory to your repository called "me". This "me" directory should include a simple webpage in the file "me.html". The "me.html" file should have a title with your name, an h1 tag with your name, an img tag that includes a picture of you from the file "me.png", a body with a brief introduction about you, and a script tag that prints the result of Array(16).join("wat"-1)+" Batman!" to the console.
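A minimal "me.html" satisfying this description might look like the following, with the placeholder name, picture, and introduction replaced by your own:

    <!-- minimal "me.html" sketch; replace placeholders with your own details -->
    <html>
      <head>
        <title>Your Name</title>
      </head>
      <body>
        <h1>Your Name</h1>
        <img src="me.png">
        <p>A brief introduction about you goes here.</p>
        <script>
          console.log(Array(16).join("wat"-1)+" Batman!");
        </script>
      </body>
    </html>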

Assignment 2: Pendularm

Due 11:59pm, Wednesday, September 30, 2020

Physical simulation is widely used across robotics to test robot controllers. Testing in simulation has many benefits, such as avoiding the risk of damaging a (likely expensive) robot and faster development of controllers. Simulation also allows for consideration of environments not readily available for testing, such as interplanetary exploration (as in the example below for the NASA Space Robotics Challenge). We will now model and control our first robot, the Pendularm, to achieve an arbitrary desired setpoint state.

As an introduction to building your own robot simulator, your task is to implement a physical dynamics and servo controller for a simple 1 degree-of-freedom robot system. This system is a 1-DOF robot arm: a frictionless simple pendulum with a rigid massless rod and an idealized motor. A visualization of the Pendularm system is shown below. Students in the graduate section will extend this system into a 2-link 2-DOF robot arm, as an actuated double pendulum.

Features Overview

This assignment requires the following features to be implemented in the corresponding files in your repository:

Points distributions for these features can be found in the project rubric section. More details about each of these features and the implementation process are given below.

Implementation Instructions

The code stencil for the Pendularm assignment is available within the "project_pendularm" subdirectory of KinEval.

For physical simulation, you will implement several numerical integrators for a pendulum with parameters specified in the code stencil. The numerical integrator will advance the state of the pendulum (angle and velocity) in time given the current acceleration, which your pendulum_acceleration function should compute using the pendulum equation of motion. Your code should update the angle and velocity in the pendulum object (pendulum.angle and pendulum.angle_dot) for the visualization to access. If implemented successfully, this ideal pendulum should oscillate about the vertical (where the angle is zero) with an amplitude that preserves the initial height of the pendulum bob.
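As a sketch of how these pieces fit together, assuming the pendulum object fields described above and a control torque stored in pendulum.control (exact names in the stencil may differ), a forward Euler step might look like:

    // equation of motion for a frictionless pendulum with an idealized motor:
    //   theta_ddot = -(g / L) * sin(theta) + tau / (m * L^2)
    function pendulum_acceleration(pendulum, gravity) {
        return -(gravity / pendulum.length) * Math.sin(pendulum.angle)
               + pendulum.control / (pendulum.mass * Math.pow(pendulum.length, 2));
    }

    // forward Euler: advance angle and velocity by one timestep dt
    function euler_step(pendulum, gravity, dt) {
        var accel = pendulum_acceleration(pendulum, gravity);
        pendulum.angle = pendulum.angle + pendulum.angle_dot * dt;
        pendulum.angle_dot = pendulum.angle_dot + accel * dt;
    }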

Students enrolled in the undergraduate section will implement numerical integrators for:

For motion control, students in both the undergraduate and graduate sections will implement a proportional-integral-derivative controller to control the system's motor to a desired angle. This PID controller should output control forces integrated into the system's dynamics. You will need to tune the gains of the PID controller for stable and timely motion to the desired angle for a pendulum with parameters: length=2.0, mass=2.0, gravity=9.81. These default values are also provided directly in the init() function.
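A PID update for this controller might be sketched as below; the gain and error fields on the pendulum object are illustrative names, and the gains themselves must be tuned:

    // illustrative PID update returning a control torque for the motor
    function pid_control(pendulum, dt) {
        var error = pendulum.desired - pendulum.angle;    // proportional term
        pendulum.error_integral += error * dt;            // accumulated integral term
        var error_derivative = (error - pendulum.error_previous) / dt;
        pendulum.error_previous = error;
        return pendulum.servo.p_gain * error
             + pendulum.servo.i_gain * pendulum.error_integral
             + pendulum.servo.d_gain * error_derivative;  // derivative term
    }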

For user input, you should be able to:

Graduate Section Requirement

Students enrolled in the graduate section will implement numerical integrators for:

to simulate and control a single pendulum (in "update_pendulum_state.js"). Then, students in the graduate section will implement one of the above integrators for a double pendulum (in "update_pendulum_state2.js"). Any of the integrators may work as your choice for the double pendulum implementation, although the Runge-Kutta integrator is recommended. The double pendulum is allowed to have a smaller timestep than the single pendulum, within reasonable limits. A working visualization for the double pendularm will look similar to the following:
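For reference, a generic fourth-order Runge-Kutta step over a state vector (for the single pendulum, [angle, angle_dot]) can be written as below; the deriv callback and vector helpers are assumptions of this sketch:

    // one RK4 step: state and deriv(state) are arrays, e.g. [theta, theta_dot]
    // and [theta_dot, theta_ddot] for the single pendulum
    function rk4_step(state, deriv, dt) {
        var k1 = deriv(state);
        var k2 = deriv(vec_add(state, vec_scale(k1, dt / 2)));
        var k3 = deriv(vec_add(state, vec_scale(k2, dt / 2)));
        var k4 = deriv(vec_add(state, vec_scale(k3, dt)));
        // weighted average of the four slope estimates: (k1 + 2k2 + 2k3 + k4)/6
        var slope = vec_add(vec_add(k1, vec_scale(vec_add(k2, k3), 2)), k4);
        return vec_add(state, vec_scale(slope, dt / 6));
    }

    // small vector helpers assumed by the sketch
    function vec_add(a, b) { return a.map(function (v, i) { return v + b[i]; }); }
    function vec_scale(a, s) { return a.map(function (v) { return v * s; }); }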

Advanced Extensions

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by generating a random desired setpoint state and using PID control to move your Pendularm to this setpoint. This code must randomly generate a new desired setpoint and resume PID control once the current setpoint is achieved. A setpoint is considered achieved if the current state matches the desired state to within 0.01 radians for 2 seconds. The number of setpoints achieved in 60 seconds must be maintained and reported in the user interface. The invocation of this setpoint trial must be enabled by the user pressing the "t" key in the user interface.

Of the 4 possible advanced extension points, two additional points for this assignment can be earned by implementing a simulation of a planar cart pole system. This cartpole system should have joint limits on its prismatic joint and no motor forces applied to the rotational joint. This cart pole implementation should be contained within the file "cartpole.html" under the "project_pendularm" directory.

Of the 4 possible advanced extension points, two additional points for this assignment can be earned by implementing a single pendulum simulator in maximal coordinates with a spring constraint enforced by Gauss-Seidel optimization. This maximal coordinate pendulum implementation should be contained within the file "pendularm1_maximal.html" under the "project_pendularm" directory. An additional point can be earned by extending this implementation to a cloth simulator in the file "cloth_pointmass.html".

Project Submission

For turning in your assignment, push your updated code to the master branch in your repository.

Assignment 3: Forward Kinematics

Due 11:59pm, Friday, October 30, 2020 (extended from Wednesday, October 14)

Forward kinematics (FK) forms the core of our ability to purposefully control the motion of a robot arm. FK will provide us with a general formulation for controlling any robot arm to reach a desired configuration and execute a desired trajectory. Specifically, FK allows us to predict the spatial layout of the robot in our 3D world given a configuration of its joints. For the purposes of grasping and dexterous tasks, FK gives us the critical ability to predict the location of the robot's gripper (also known as its "endeffector"). As shown in our IROS 2017 video below, such manipulation assumes a robot has already perceived its environment as a scene estimate of objects and their positions and orientations. Given this scene estimate, a robot controller uses FK to evaluate and execute viable endeffector trajectories for grasping and manipulating an object.

In this assignment, you will render the forward kinematics of an arbitrary robot, given an arbitrary kinematic specification. A collection of specifications for various robots is provided in the "robots" subdirectory of the KinEval code stencil. These robots include the Rethink Robotics' Baxter and Sawyer robots, the Fetch mobile manipulator, and a variety of example test robots, as shown in the "Desired Results" section below. To render the robot properly, you will compute matrix coordinate frame transforms for each link and joint of the robot based on the parameters of its hierarchy of joint configurations. The computation of the matrix transform for each joint and link will allow KinEval's rendering support routines to properly display the full robot. We will assume the joints will remain in their zero position, saving joint motion for the next assignment.

Features Overview

This assignment requires the following features to be implemented in the corresponding files in your repository:

Points distributions for these features can be found in the project rubric section. More details about each of these features and the implementation process are given below.

Just Starting Mode

While previous assignments were implemented within self-contained subsections of the kineval_stencil repository, with this project, you will start working with the KinEval part of the stencil repository that also supports all future projects in the course. This KinEval stencil allows for developing the core of a modeling and control computation stack (forward kinematics, inverse kinematics, and motion planning) in a modular fashion.

If you open "home.html" in this repository, you should see the disconnected pieces of a robot bouncing up and down in the default environment. This initial mode is the "starting point" state of the stencil to help build familiarity with JavaScript/HTML5 and KinEval.

Your (optional) first task is to make the bouncing robot in starting point mode responsive to keyboard commands. Specifically, the robot pieces will move upward, stop/start jittering, move closer together, and move further apart (although more is encouraged). To do this, you will modify "kineval/kineval_startingpoint.js" at the sections marked with "STENCIL". These sections also include code examples meant to be a quick (and very rough) introduction to JavaScript and homogeneous transforms for translation, assuming programming competency in another language.

Brief KinEval Stencil Overview

Within the KinEval stencil, the functions my_animate() and my_init() in "home.html" are the principal access points into the animation system. my_animate() is particularly important as it will direct the invocation of functions we develop throughout the AutoRob course. my_animate() and my_init() are called by the primary process that maintains the animation loop: kineval.animate() and kineval.init() within "kineval/kineval.js".

IMPORTANT: "kineval/kineval.js", kineval.animate(), kineval.init(), and any of the given robot descriptions should not be modified.

For Just Starting Mode, my_animate() will call startingPlaceholderAnimate() and startingPlaceholderInit(), defined in "kineval/kineval_startingpoint.js". startingPlaceholderInit() contains JavaScript tutorial-by-example code that initializes variables for this project. startingPlaceholderAnimate() contains keyboard handlers and code to update the positioning of each body of the robot. By modifying the proper variables at the locations labeled "STENCIL", this code will update the transformation matrix for each geometry of the robot (stored in the ".xform" attribute) as a translation in the robot's world. The ".xform" transform for each robot geometry is then used by kineval.robotDraw() to have the browser render the robot parts in the appropriate locations.

Forward Kinematics Files

Assuming proper completion of Just Starting Mode, you are now ready for implementation of robot forward kinematics. The following files are included (within script tags) in "home.html". You will modify these files for implementing FK:

Core Matrix Routines

A good place to start with your FK implementation is writing and testing the core matrix routines. "kineval/kineval_matrix.js" contains function stencils for all required linear algebra routines. You will need to uncomment and fill in all the functions provided in this file except matrix_pseudoinverse and matrix_invert_affine. You do not need to implement the pseudoinverse calculation for this assignment; you should leave the matrix_pseudoinverse function commented out. You can implement the affine inverse function, but it will not be used in or tested for this assignment.

It is good practice to test these functions before continuing with your FK implementation. Consider writing a collection of tests using example matrix and vector calculations from the lecture slides or other sources.
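For example, a few quick checks might look like the following; the function names follow the stencils in "kineval/kineval_matrix.js", so adjust them if your implementations differ:

    // translating the origin by (1,2,3) should yield [[1],[2],[3],[1]]
    var T = generate_translation_matrix(1,2,3);
    console.log(matrix_multiply(T, [[0],[0],[0],[1]]));
    // multiplying by the identity should leave T unchanged
    console.log(matrix_multiply(T, generate_identity()));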

Robot Examples

Each file in the "robots" subdirectory contains code to create a robot data object. This data object is initialized with the kinematic description of a robot (as well as some meta information and rendering geometries). The kinematic description defines a hierarchical configuration of the robot's links and joints. This description is a subset of the Unified Robot Description Format (URDF) converted into JSON format. The basic features of URDF are described in this tutorial.
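As a rough illustration of this format, a toy description might look like the sketch below; the field names mirror the provided robot files, while the robot itself is made up:

    robot = {
        name: "toybot",
        base: "base_link",                       // root of the kinematic tree
        origin: { xyz: [0,0,0], rpy: [0,0,0] },  // base pose in the world
        links: { base_link: {}, arm_link: {} },
        joints: {
            shoulder: {
                parent: "base_link", child: "arm_link",
                origin: { xyz: [0,0.5,0], rpy: [0,0,0] },  // local transform
                axis: [0,0,1]                              // rotation axis
            }
        }
    };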

IMPORTANT (seriously): The given robot description files should NOT be modified. Code that requires modified robot description files will fail tests used for grading. You are welcomed and encouraged to create new robot description files for additional testing.

The selection of different robot descriptions can occur directly in the URL for "home.html". As a default, "home.html" in the KinEval stencil assumes the "mr2" robot description in "robots/robot_mr2.js". Another robot description file can be selected by adding a robot parameter to the URL. This parameter follows a question mark and sets the robot file to a given location relative to "home.html". For example, a URL with "home.html?robot=robots/robot_urdf_example.js" will use the URDF example description. Note that to see the selected robot model in your visualization, you will need to turn off Just Starting Mode and have your FK methods implemented; see the Invoking Forward Kinematics section for more details.

Initialization of Kinematic Hierarchy

In addition to the various existing initialization functions, you should extend the robot object to complete the kinematic hierarchy to specify the parent and children joints for each link. This modification should be made in the kineval.initRobotJoints() function in "kineval/kineval_robot_init_joints.js". The children array of a link should be defined for all links except the leaves of the kinematic tree, in which case the ".children" property should be left undefined. For the KinEval user controls to work properly, the children array should be named the ".children" property of the link.

Note: KinEval refers to links and joints as strings, not pointers, within the robot object. robot.joints (as well as robot.links) is an array of data objects that are indexed by strings. Each of these objects stores relevant fields of information about the joint, such as its transform (".xform"), parent (".parent") and child (".child") in the kinematic hierarchy, local transform information (".origin"), etc. As such, robot.joints['JointX'] refers to an object for a joint. In contrast, robot.joints['JointX'].child refers to a string ('LinkX'), that can then be used to reference a link object (as robot.links['LinkX']). Similarly, robot.links['LinkX'].parent refers to a joint as a string 'JointX' that can then then be used to reference a joint object in the robot.joints array.

Invoking Forward Kinematics

The function kineval.robotForwardKinematics() in "kineval/kineval_forward_kinematics.js" will be the main point of invocation for your FK implementation. This function will need to call kineval.buildFKTransforms(), which is a function you will add to this file. kineval.buildFKTransforms() will update matrix transforms for the frame of each link and joint with respect to the global world coordinates. The computed transform for each frame of the robot needs to be stored in the ".xform" field of each link or joint. For a given link named 'LinkX', this xform field can be accessed as robot.links['LinkX'].xform. For a given joint named 'JointX', this xform field can be accessed as robot.joints['JointX'].xform. Once kineval.robotForwardKinematics() completes, the updated transforms for each frame are used by the function kineval.robotDraw() in the support code to render the robot.

A matrix stack recursion can be used to compute these global frames, starting from the base of the robot (specified as a string in robot.base). This recursion should use the provided local translation and rotation parameters of each joint in relation to its parent link in its traversal of the hierarchy. For a given joint 'JointX', these translation and rotation parameters are stored in the robot object as robot.joints['JointX'].origin.xyz and robot.joints['JointX'].origin.rpy, respectively. The current global translation and rotation for the base of the robot (robot.base) in the world coordinate frame is stored in robot.origin.xyz and robot.origin.rpy, respectively.
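One possible shape for this recursion is sketched below; the traversal helpers and the rpy rotation helper are illustrative names (the stencil leaves their design to you), and joint angles are ignored since joints remain at zero for this assignment:

    kineval.buildFKTransforms = function buildFKTransforms() {
        // base frame in the world from robot.origin (translation, then rpy rotation)
        var T = matrix_multiply(
            generate_translation_matrix(robot.origin.xyz[0], robot.origin.xyz[1], robot.origin.xyz[2]),
            generate_rotation_matrix_rpy(robot.origin.rpy));  // e.g., Rz*Ry*Rx
        traverseFKLink(robot.links[robot.base], T);
    }

    function traverseFKLink(link, T) {
        link.xform = T;  // link frame equals the frame passed down
        if (typeof link.children === 'undefined') return;  // leaf of the tree
        for (var i = 0; i < link.children.length; i++)
            traverseFKJoint(robot.joints[link.children[i]], T);
    }

    function traverseFKJoint(joint, T) {
        // compose this joint's local origin transform onto the incoming frame
        var T_local = matrix_multiply(
            generate_translation_matrix(joint.origin.xyz[0], joint.origin.xyz[1], joint.origin.xyz[2]),
            generate_rotation_matrix_rpy(joint.origin.rpy));
        joint.xform = matrix_multiply(T, T_local);
        traverseFKLink(robot.links[joint.child], joint.xform);
    }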

To run your FK routine, you must toggle out of starting point mode. This toggle can be done interactively within the GUI menu or by setting kineval.params.just_starting to false. The code below in "home.html" controls starting point mode invocation, where a single line can be uncommented to use FK mode by default:

 
// starting point mode is true by default
//   set to false once starting forward kinematics project
//kineval.params.just_starting = false;

if (kineval.params.just_starting == true) {
    startingPlaceholderAnimate();
    kineval.robotDraw();
    return;
}

Note: The stencil in "kineval/kineval_forward_kinematics.js" states that the user interface requires "robot_heading" and "robot_lateral", but these are for Assignment 4. You do not need these variables for this assignment.

Desired Results

The "robots/robot_mr2.js" example should produce the following:

If implemented properly, the "robots/robot_urdf_example.js" example should produce the following rendering:

The "robots/robot_crawler.js" example should produce the following (shown with joint axes highlighted):

Interactive Hierarchy Traversal

Additionally, a correct implementation will be able to interactively traverse the kinematic hierarchy by changing the active joint. The active joint has focus for user control, which will be used in the next assignment. For now, we are using the active joint to ensure your kinematic hierarchy is correct. You should be able to move up and down the kinematic hierarchy with the "k" and "j" keys, respectively. You can also move between the children of a link using the "h" and "l" keys.

Orienting Joint Rendering Cylinders

The cylinders used as rendering geometries for joints are not aligned with joint axes by default. The support code in KinEval will properly orient joint rendering cylinders. To use this functionality, simply ensure that the vector_cross() function is correctly implemented in "kineval/kineval_matrix.js". vector_cross() will be automatically detected and used to properly orient each joint rendering cylinder.

Undergraduate Advanced Extension

Students in the AutoRob Undergraduate Section can earn one additional point by creating a robot description for the RexArm 4-DOF robot arm, which can be used later in EECS 467 (Autonomous Robotics Laboratory). Rexarm link geometries are provided in STL format. RoBob Ross is an example of a RexArm project from 467 in Winter 2017. Below is a snapshot of a RexArm in KinEval created by mattdr:

Graduate Section Requirement

Students in the AutoRob Graduate Section must: 1) implement the assignment as described above to work with all given examples, which includes the Fetch, Baxter, and Sawyer robot descriptions, and 2) create a new robot description that works with KinEval.

The files "robots/fetch/fetch.urdf.js", "robots/baxter/baxter.urdf.js", and "robots/sawyer/sawyer.urdf.js" contain the robot data object for the Fetch, Baxter, and Sawyer kinematic descriptions. The Fetch robot JavaScript file is converted from the Fetch URDF description for ROS. A similar process was also done for the Baxter URDF description.

ROS uses a different default coordinate system than threejs, which needs to be taken into account in the FK computation for these three robots. ROS assumes that the Z, X, and Y axes correspond to the up, forward, and side directions, respectively. In contrast, threejs assumes that the Y, Z, and X axes correspond to the up, forward, and side directions. The variable robot.links_geom_imported will be set to true when geometries have been imported from ROS and set to false when geometries are defined completely within the robot description file. You will need to extend your FK implementation to compensate for the coordinate frame difference when this variable is set to true.
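One way to account for this difference is to premultiply the base frame by a fixed change-of-coordinates rotation whenever robot.links_geom_imported is true; the sketch below maps ROS axes (X forward, Y side, Z up) onto threejs axes (Z forward, X side, Y up), though an equivalent composition of elementary rotations also works:

    // permutation rotation taking ROS coordinates to threejs coordinates
    var ros_to_threejs = [
        [0, 1, 0, 0],   // threejs x (side)    <- ROS y
        [0, 0, 1, 0],   // threejs y (up)      <- ROS z
        [1, 0, 0, 0],   // threejs z (forward) <- ROS x
        [0, 0, 0, 1]
    ];
    if (robot.links_geom_imported)
        T_base = matrix_multiply(ros_to_threejs, T_base);  // T_base as in your FK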

A proper implementation for fetch.urdf.js description should produce the following (shown with joint axes highlighted):

The "robots/sawyer/sawyer.urdf.js" example should produce the following:

Your newly created robot description should be placed in the "robots" directory with a filename with your username in the format "robot_uniqueid.js" if no external geometries are used for this robot (similar to the MR2 or Crawler robots). If external geometries are imported (similar to the Fetch and Baxter), the robot description should be in a new subdirectory with the robot's name. The robot's name should also be used to name the URDF file, such as "robots/newrobotname/newrobotname.urdf.js". It is requested that geometries for a new robot go into this directory within a "meshes" subdirectory, such as "robots/newrobotname/meshes". Guidance can be provided during office hours about creating or converting URDF-based robot description files to KinEval-compliant JavaScript and importing Collada, STL, and Wavefront OBJ geometry files.

Students are highly encouraged to port URDF descriptions of real world robot platforms into their code. Such examples of real world robot systems include the Kinova Movo, NASA Valkyrie and Robonaut 2, Boston Dynamics Atlas, Universal Robots UR10, and Willow Garage PR2.

The following KinEval-compatiable robot descriptions were created by students in past offerings of the AutoRob course. These descriptions are available for your use:

Advanced Extensions

Of the 4 possible advanced extension points, two additional points for this assignment can be earned by generating a proper Denavit-Hartenberg table for the kinematics of the Fetch robot. This table should be placed in the "robots/fetch" directory in the file "fetchDH.txt".

Of the 4 possible advanced extension points, three additional points for this assignment can be earned by implementing LU decomposition (with pivoting) routines for matrix inversion and solving linear systems. These functions should be named "matrix_inverse" and "linear_solve" and placed within the file containing your matrix routines.

Of the 4 possible advanced extension points, three additional points for this assignment can be earned by implementing rigid body transformations as dual quaternions (Kenwright 2012), in addition to the products of exponentials method described in class. Use of dual quaternion transformations must be selectable from the KinEval user interface.

Project Submission

For turning in your assignment, push your updated code to the master branch in your repository.

Assignment 4: Robot FSM Dance Contest

Due 11:59pm, Friday, October 30, 2020

Executing choreographed motion is the most common use of current robots. Robot choreography is predominantly expressed as a sequence of setpoints (or desired states) for the robot to achieve in its motion execution. This form of robot control can be found among a variety of scenarios, such as robot dancing (video below), GPS navigation of autonomous drones, and automated manufacturing. General to these robot choreography scenarios is a given setpoint controller (such as our PID controller from Pendularm) and a sequence controller (which we will now create).

For this assignment, you will build your own robot choreography system. This choreography system will enable a robot to execute a dance routine by adding motor rotation to its joints and creating a Finite State Machine (FSM) controller over pose setpoints. Your FK implementation will be extended to consider angular rotation about each joint axis using quaternions for axis-angle rotation. The positioning of each joint with respect to a given pose setpoint will be controlled by a simple P servo implementation (based on the Pendularm assignment). You will implement an FSM controller to update the current pose setpoint based on the robot's current state and predetermined sequence of setpoints. For a single robot, you will choreograph a dance for the robot by creating an FSM with your design of pose setpoints and an execution sequence.

This controller for the "mr2" example robot was a poor attempt at robot Saturday Night Fever (please do better):

This updated dance controller for the Fetch robot is a bit better, but still very far from optimal:

Features Overview

This assignment requires the following features to be implemented in the corresponding files in your repository:

Points distributions for these features can be found in the project rubric section. More details about each of these features and the implementation process are given below.

Joint Axis Rotation and Interactive Joint Control

Going beyond the joint properties you worked with in Assignment 3, each joint of the robot now needs several additional properties for joint rotation and control. These joint properties for the current angle rotation (".angle"), applied control (".control"), and servo parameters (".servo") have already been created within the function kineval.initRobotJoints(). The joint's angle will be used to calculate a rotation about the joint's (normal) axis of rotation vector, specified in the ".axis" field. To complete an implementation of 3D rotation due to joint movement, you will need to first implement basic quaternion functions in "kineval/kineval_quaternion.js" then extend your FK implementation in "kineval/kineval_forward_kinematics.js" to account for the additional rotations.
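The quaternion helpers can be quite compact, as in the sketch below for a quaternion q = a + b*i + c*j + d*k (the object layout here is illustrative; match whatever representation your stencil suggests):

    function quaternion_from_axisangle(axis, angle) {
        var s = Math.sin(angle/2);  // assumes axis is a unit vector
        return { a: Math.cos(angle/2), b: s*axis[0], c: s*axis[1], d: s*axis[2] };
    }

    function quaternion_normalize(q) {
        var n = Math.sqrt(q.a*q.a + q.b*q.b + q.c*q.c + q.d*q.d);
        return { a: q.a/n, b: q.b/n, c: q.c/n, d: q.d/n };
    }

    // homogeneous rotation matrix of a unit quaternion
    function quaternion_to_rotation_matrix(q) {
        return [
            [1-2*(q.c*q.c+q.d*q.d), 2*(q.b*q.c-q.a*q.d),   2*(q.b*q.d+q.a*q.c),   0],
            [2*(q.b*q.c+q.a*q.d),   1-2*(q.b*q.b+q.d*q.d), 2*(q.c*q.d-q.a*q.b),   0],
            [2*(q.b*q.d-q.a*q.c),   2*(q.c*q.d+q.a*q.b),   1-2*(q.b*q.b+q.c*q.c), 0],
            [0, 0, 0, 1]
        ];
    }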

If joint axis rotation is implemented correctly, you should be able to use the 'u' and 'i' keys to move the currently active joint. These keys respectively decrement and increment the ".control" field of the active joint. Through the function kineval.applyControls(), this control value effectively adds an angular displacement to the joint angle.

Interactive Base Movement Controls

The user interface also enables controlling the global position and orientation of the robot base. In addition to joint updates, the system update function kineval.applyControls() also updates the base state (in robot.origin) with respect to its controls (specified in robot.controls). With the support function kineval.handleUserInput(), the 'wasd' keys are used to move the robot on the ground plane, with the 'q' and 'e' keys for lateral base movement. In order for these keys to behave properly, you will need to add code to update variables that store the heading and lateral directions of the robot base: robot_heading and robot_lateral. These vectors need to be computed within your FK implementation in "kineval/kineval_forward_kinematics.js" and stored as global variables. They express the directions of the robot base's z-axis and x-axis in the global frame, respectively. Each of these variables should be a homogeneous 3D vector stored as a 2D array.

If robot_heading and robot_lateral are implemented properly, the robot should now be interactively controllable in the ground plane using the keys described in the previous paragraph.
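These vectors can be computed by transforming unit points along the base z- and x-axes through the base's world transform, as in the sketch below (which assumes your FK stores the base frame in the base link's .xform):

    // direction points one unit along the base z-axis (heading) and
    // x-axis (lateral), expressed in the world frame
    var T_base = robot.links[robot.base].xform;
    robot_heading = matrix_multiply(T_base, [[0],[0],[1],[1]]);
    robot_lateral = matrix_multiply(T_base, [[1],[0],[0],[1]]);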

Pose Setpoint Controller

Once joint axis rotation is implemented, you will implement a proportional setpoint controller for the robot joints in function kineval.robotArmControllerSetpoint() within "kineval/kineval_servo_control.js". The desired angle for a joint 'JointX' is stored in kineval.params.setpoint_target['JointX'] as a scalar by the FSM controller or keyboard input. The setpoint controller should take this desired angle, the joint's current angle (".angle"), and servo gains (specified in the ".servo" object) to set the control (".control") for each joint. All of these joint object properties are initialized in the function kineval.initRobotJoints() in "kineval/kineval_robot_init_joints.js". Note that the "servo.d_gain" is not used in this assignment; it is for advanced extensions.
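A minimal proportional servo over all joints might look like the following sketch, using the fields described above; note that only the proportional gain is used:

    kineval.robotArmControllerSetpoint = function robotArmControllerSetpoint() {
        for (var x in robot.joints) {
            var error = kineval.params.setpoint_target[x] - robot.joints[x].angle;
            robot.joints[x].control = robot.joints[x].servo.p_gain * error;
        }
    }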

Once you have implemented the control function described above, you can enable the controller by either holding down the 'o' key or selecting 'persist_pd' from the UI. With the controller enabled, the robot will attempt to reach the current setpoint. One setpoint is provided with the stencil code: the zero pose, where all joint angles are zero. Pressing the '0' key sets the current setpoint to the zero setpoint.

Besides the zero setpoint, up to 9 other arbitrary pose setpoints can be stored by KinEval (in kineval.setpoints) for pose control. You can edit kineval.setpoints in your code for testing and/or for the FSM controller (see below), but the current robot pose can also be interactively stored into the setpoint list by pressing "Shift+number_key" (e.g., "Shift+1" would store the current robot pose as setpoint 1). You can then select any of the stored setpoints to be the current control target by pressing one of the non-zero number keys [1-9] that corresponds to a previously-stored setpoint. At any time, the currently stored setpoints can be output to the console as JavaScript code using the JSON.stringify function for the setpoint object: "JSON.stringify(kineval.setpoints);". Once you have found the setpoints needed to implement your desired dance, this setpoint array can be included in your code as part of your dance controller.

Since you will implement your setpoint controller before your FSM controller, a "clock movement" FSM controller has been provided for additional testing as the function setpointClockMovement() in "kineval/kineval_servo_control.js". This function can be invoked by holding down the 'c' key or from the UI. This controller goes well with this song.

FSM Controller

Once your pose setpoint controller is working, an FSM controller should be implemented in the function kineval.setpointDanceSequence() in "kineval/kineval_servo_control.js". The reference implementation switches between the pose setpoints in kineval.setpoints based on two additional pieces of data: an array of indices (kineval.params.dance_sequence_index) and the current pose index (kineval.params.dance_pose_index). kineval.params.dance_sequence_index will tell your FSM the order in which the setpoints in kineval.setpoints should be selected to be the control target. Note that using this convention allows you to easily select the same setpoint multiple times to produce repetition in your dance. kineval.params.dance_pose_index is used to keep track of the current index within the dance pose sequence.
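A sketch of this FSM update is below; the 0.1 radian threshold for considering a pose reached is an illustrative choice, not a stencil requirement:

    kineval.setpointDanceSequence = function setpointDanceSequence() {
        var pose = kineval.setpoints[kineval.params.dance_sequence_index[kineval.params.dance_pose_index]];
        // check whether every joint is near the current pose setpoint
        var reached = true;
        for (var x in robot.joints)
            if (Math.abs(pose[x] - robot.joints[x].angle) > 0.1) reached = false;
        // advance (and wrap) the sequence index once the pose is reached
        if (reached)
            kineval.params.dance_pose_index =
                (kineval.params.dance_pose_index + 1) % kineval.params.dance_sequence_index.length;
        // set the control target to the current pose in the sequence
        kineval.params.setpoint_target =
            kineval.setpoints[kineval.params.dance_sequence_index[kineval.params.dance_pose_index]];
    }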

If this recommended variable convention is not used, the following line in "kineval/kineval_userinput.js" will require modification:


if (kineval.params.update_pd_dance)
    textbar.innerHTML += "executing dance routine, pose " + kineval.params.dance_pose_index + " of " + kineval.params.dance_sequence_index.length;

To complete your dance controller, choreograph a dance by initializing kineval.setpoints with the poses for your dance and kineval.params.dance_sequence_index with the pose ordering. You should initialize these data structures within the my_init() function in "home.html". Once you have the poses and sequence for your dance initialized, when you select both "persist_pd" and "update_pd_dance" in the UI, you should see the robot move through the setpoints of your dance.
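For example, initialization in my_init() could look like the snippet below, where the joint names and angles are placeholders for your own robot and choreography:

    kineval.setpoints = [
        { shoulder: 0.0, elbow: 0.0 },   // setpoint 0: zero pose
        { shoulder: 1.2, elbow: -0.6 },  // setpoint 1
        { shoulder: -0.8, elbow: 0.4 }   // setpoint 2
    ];
    // alternate between poses 1 and 2, visiting each twice
    kineval.params.dance_sequence_index = [1, 2, 1, 2];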

Graduate Section Requirements

Students in the graduate section of AutoRob must implement the assignment as described above for the Fetch and Baxter robots with two additional requirements: 1) proper implementation of all joint types in the robot descriptions and 2) proper enforcement of joint limits for the robot descriptions.

The urdf.js files for these robots, included in the provided code stencil, contain joints with various types that correspond to different types of motion:

Joints are considered continuous by default, so joints with undefined motion types must be treated as continuous joints. The graduate section features for this assignment will be complete when your implementation correctly handles the direction of motion (rotation or translation) and the limits of all of the above joint types.

Note: Rosbridge feature cancelled due to COVID-19. Your code can interface with any robot (or simulated robot) running rosbridge/ROS using the function kineval.rosbridge() in "kineval/kineval_rosbridge.js". This code requires that the rosbridge_server package is running in a ROS run-time environment and listening on a websocket port, such as for ws://fetch7:9090. If your FK implementation is working properly, the model of your robot in the browser will update along with the motion of the robot based on the topic subscription and callback. This functionality works seamlessly between real and simulated robots. Although this will not be done for this class, to control the robot arm, a rosbridge publisher must be written to update the ROS topic "/arm_controller/follow_joint_trajectory/goal" with a message of type "control_msgs/FollowJointTrajectoryActionGoal".

Machines running rosbridge, ROS, and Gazebo for the Fetch will be available during special sessions of the class. Students are encouraged to install and run the Fetch simulator on their own machines based on this tutorial.

Advanced Extensions

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by adding the capability of displaying laser scans from a real or simulated Fetch robot.

Of the 4 possible advanced extension points, four additional points for this assignment can be earned by adding the capability of displaying 3D point clouds from a real or simulated Fetch robot and computing surface normals about each point.

Of the 4 possible advanced extension points, four additional points for this assignment can be earned by implementing dynamical simulation through the recursive Newton-Euler algorithm (Spong Ch. 7). This dynamical simulation update should be implemented as the function kineval.updateDynamicsNewtonEuler() in the file "kineval/kineval_controls.js". In "home.html", the call to kineval.updateDynamicsNewtonEuler() should replace the purely kinematic update call in kineval.applyControls().

Of the 4 possible advanced extension points, four additional points for this assignment can be earned by developing and implementing a dynamical simulation of a biped hopper. Permission from the course staff must be granted before attempting this advanced extension.

Project Submission

For turning in your assignment, push your updated code to the master branch in your repository.

Assignment 5: Inverse Kinematics

Due 11:59pm, Wednesday, November 18, 2020 (extended from Wednesday, November 11)

Although effective, robot choreography in configuration space is super tedious and inefficient. This difficulty is primarily due to posing each joint of the robot at each setpoint. Further, changing one joint often requires updating several other joints due to the nature of kinematic dependencies. Inverse kinematics (IK) offers a much easier and more efficient alternative. With IK implemented, we only need to pose the endeffector in a common workspace, and the states of the joints in configuration space are automatically inferred. IK is also important when we care about the "tool tip" of an instrument being used by a robot. One such example is a robot using a marker to draw a picture, as in the PR2 Portrait Bot Project below:

For this assignment, you will now control your robot to reach to a given point in space through inverse kinematics for position control of the robot endeffector. Inverse kinematics will be implemented through gradient descent optimization with both the Jacobian Transpose and Jacobian Pseudoinverse methods, although only one will be invoked at run-time.

As shown in the video below, if successful, your robot will be able to continually place its endeffector (indicated by the blue cube) exactly on the reachable target location (indicated by the green cube), regardless of the robot's specific configuration:

Features Overview

This assignment requires the following features to be implemented in the corresponding files in your repository:

Points distributions for these features can be found in the project rubric section. More details about each of these features and the implementation process are given below.

Matrix Pseudoinverse Function

You will need to implement one additional matrix helper function in "kineval/kineval_matrix.js" for this assignment: matrix_pseudoinverse. This method will be necessary for the pseudoinverse version of gradient descent (see below). For this helper function, you are allowed to use a library function for matrix inversion, which can be invoked by using the provided routine numeric.inv(mat), available through numericjs.
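For a wide Jacobian (more columns than rows), the right pseudoinverse can be computed as J+ = J^T (J J^T)^-1, as in the sketch below; a complete implementation should also handle the tall case, J+ = (J^T J)^-1 J^T:

    function matrix_pseudoinverse(mat) {
        var matT = matrix_transpose(mat);  // from your matrix routines
        // right pseudoinverse for the wide case
        return matrix_multiply(matT, numeric.inv(matrix_multiply(mat, matT)));
    }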

Core IK Function

The core of this assignment is to complete the kineval.iterateIK() function in the file kineval/kineval_inverse_kinematics.js. This function is invoked within the function kineval.inverseKinematics() with three arguments:

From these arguments and the current robot configuration, the kineval.iterateIK() function will compute controls for each joint. Upon update of the joints, these controls will move the configuration and endeffector of the robot closer to the target.

Important: Students enrolled in the undergraduate section (EECS 367) will implement inverse kinematics for only the position, not the orientation, of the endeffector.

kineval.iterateIK() should also respect global parameters for using the Jacobian pseudoinverse (through boolean parameter kineval.params.ik_pseudoinverse) and step length of the IK iteration (through real-valued parameter kineval.params.ik_steplength). Note that these parameters can be changed through the user interface (under Inverse Kinematics). KinEval also maintains the current endeffector target information in the kineval.params.ik_target parameter.

IK iterations can be invoked through the user interface (Inverse Kinematics->persist_ik) or by holding down the 'p' key. Further, the 'r'/'f' keys will move the target location up/down. You can also move the robot relative to the target using the robot base controls. When performing IK iterations, the endeffector and its target pose will be rendered as cube geometries in blue and green, respectively.

For your code to work with the CI grader, you will need to set three global variables in kineval.iterateIK(): robot.dx, robot.jacobian, and robot.dq. There is a comment in "kineval/kineval_inverse_kinematics.js" that specifies what each of these variables should hold. Please note that robot.dx and robot.jacobian should both have six rows, even if you are doing position-only IK.
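The overall shape of an iteration is sketched below; the argument and helper names are illustrative (compare against your stencil), and only the position error is filled in, as for the undergraduate requirement:

    kineval.iterateIK = function iterateIK(target, endeffector_joint, endeffector_local) {
        // endeffector world position: joint frame times local offset
        var p = matrix_multiply(robot.joints[endeffector_joint].xform, endeffector_local);
        // 6x1 error vector: position error on top, orientation rows zero
        robot.dx = [[target.position[0][0] - p[0][0]],
                    [target.position[1][0] - p[1][0]],
                    [target.position[2][0] - p[2][0]],
                    [0],[0],[0]];
        // 6xN geometric Jacobian over the chain from base to endeffector
        robot.jacobian = compute_jacobian(endeffector_joint, p);  // hypothetical helper
        // joint update by transpose or pseudoinverse gradient descent
        robot.dq = kineval.params.ik_pseudoinverse
            ? matrix_multiply(matrix_pseudoinverse(robot.jacobian), robot.dx)
            : matrix_multiply(matrix_transpose(robot.jacobian), robot.dx);
        // ... scale by kineval.params.ik_steplength and apply as joint controls
    }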

In implementing this IK routine, please also remember the following:

IK Random Trial

All students in the AutoRob course are expected to run their IK controller with the random trial feature in the KinEval stencil. The IK random trial is executed through the function kineval.randomizeIKtrial() in the file "kineval/kineval_inverse_kinematics.js". This function is incomplete in the provided stencil. Code for this function to properly run the random trial will be made available in the assignment 5 discussion channel. Once you have copied the necessary code into this function, you will be able to test your code on random trials by first selecting persist_ik (under Inverse Kinematics) then selecting execute (under Inverse Kinematics->IK Random Trial) from the user interface.

Undergraduate Advanced Extension

Students in the AutoRob Undergraduate Section can earn one additional point by implementing a closed-form inverse kinematics solution for the RexArm 4-DOF robot arm, which can be used later projects in EECS 467 (Autonomous Robotics Laboratory).

Graduate Section Requirement

Students enrolled in the graduate section of AutoRob will implement inverse kinematics for both the position and orientation of the endeffector, namely for the Fetch robot. The default IK behavior will be position-only endeffector control. Both endeffector position and orientation should be controlled when the boolean parameter kineval.params.ik_orientation_included is set to true, which can be done through the user interface (Inverse Kinematics->ik_orientation_included).

In order to handle the orientation of the endeffector in your IK implementation, you will need to calculate the orientation part of the error term, which will require you to implement a conversion from a rotation matrix to Euler angles. You may find an online reference to inform your implementation of this conversion (please cite it in a comment in your code) or develop your own approach to the conversion calculation. Completing this conversion is a necessary step for including orientation in your IK implementation, and it also fulfills the "Euler angle conversion" feature.
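For instance, a roll-pitch-yaw (ZYX) extraction away from the cos(pitch) = 0 singularity can be sketched as below; this is one of several Euler conventions, so use whichever matches your FK composition:

    // R is a rotation matrix (3x3 or homogeneous 4x4, row-major)
    function rotation_matrix_to_rpy(R) {
        var pitch = Math.atan2(-R[2][0], Math.sqrt(R[0][0]*R[0][0] + R[1][0]*R[1][0]));
        var roll  = Math.atan2(R[2][1], R[2][2]);
        var yaw   = Math.atan2(R[1][0], R[0][0]);
        return [roll, pitch, yaw];
    }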

Advanced Extensions

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by reaching to 100 targets in a random trial within 60 seconds. A video of this execution must be provided to demonstrate this achievement. This video file should be in the repository root directory with the name "IK100in60" and appropriate file extension.

Of the 4 possible advanced extension points, three additional points for this assignment can be earned by implementing the Cyclic Coordinate Descent (CCD) inverse kinematics algorithm by Wang and Chen (1991). This function should be implemented in the file "kineval/kineval_inverse_kinematics.js" as another option within the function kineval.iterateIK().

Of the 4 possible advanced extension points, three additional points for this assignment can be earned by implementing downhill simplex optimization to perform inverse kinematics. This function should be implemented in the file "kineval/kineval_inverse_kinematics.js" as another option within the function kineval.iterateIK().

Of the 4 possible advanced extension points, four additional points for this assignment can be earned by implementing resolved-rate inverse kinematics with null space constraints to respect joint limits. This function should be implemented in the file "kineval/kineval_inverse_kinematics.js" as another option within the function kineval.iterateIK().

Of the 4 possible advanced extension points, four additional points for this assignment can be earned by extending your IK controller to use potential fields to avoid collisions.

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by implementing a search mechanism to automatically find appropriate PID gains for the Pendularm. This implementation should be placed in the file "project_pendularm/pendularm1_gainsearch.html" and allow for arbitrary initial PID gains for the search to be set in the variable "initial_gains".

Project Submission

For turning in your assignment, ensure your completed project code has been committed and pushed to the master branch of your repository.

Assignment 6: Motion Planning

Due 11:59pm, Monday, December 7, 2020

Our last programming project for AutoRob returns to search algorithms for generating navigation setpoints, but now for a high-dimensional robot arm. The A-star graph search algorithm in Assignment 1 is a good fit for path planning when the space to explore is limited to the two degrees-of-freedom of a robot base. However, as the number of degrees-of-freedom of our robot increases, our search complexity grows exponentially towards intractability. For such high-dimensional search problems, exhaustively exploring the majority of the space is not an option. Instead, we now look to sampling-based search algorithms, which introduce randomness into our search process. These sampling-based algorithms trade away the guarantees and optimality of exhaustive graph search for tractable planning in complex environments. The example below shows sampling-based planning moving a rod through a narrow passageway:

and such planning is also used in simple tabletop scenarios:

For this assignment, you will now implement a collision-free motion planner to enable your robot to navigate from a random configuration in the world to its home configuration (or "zero configuration"). This home configuration is where every robot DOF has a zero value. For your planning implementation, the configuration space includes the state of each joint and the global orientation and position of the robot base. Thus, the robot must move to its original state at the origin of the world. A visual explanation of this desired behavior is below:

For both the undergraduate and graduate sections, motion planning will be implemented through the RRT-Connect algorithm (described by Kuffner and LaValle). The graduate section will additionally implement the RRT-Star (alternate paper link via IEEE) motion planner of Karaman et al. (ICRA 2011).

Features Overview

This assignment requires the following features to be implemented in the corresponding files in your repository:

Points distributions for these features can be found in the project rubric section. More details about each of these features and the implementation process are given below.

2D RRT-Connect

To gain familiarity with the RRT-Connect algorithm, you can start this assignment by returning to the 2D world from Assignment 1. If needed, refer back to Assignment 1 for a description of the search canvas environment and its parameters. You can enable RRT-Connect as the search algorithm through the URL parameter search_alg: "search_canvas.html?search_alg=RRT-connect".

You will implement the 2D version of RRT-Connect in project_pathplan/rrt.js by completing the iterateRRTConnect() function. Its signature and desired return values are provided in the code stencil. Note that there are other function stencils provided in this file as well, but your 2D RRT-Connect implementation should involve only iterateRRTConnect() and any helper functions you choose to add. You do not need to implement iterateRRT(), and only students in the graduate section need to implement iterateRRTStar() (see description of graduate section requirements below).
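At a high level, one iteration follows the extend/connect/swap pattern from Kuffner and LaValle, as sketched below; the tree variables and helper functions here are illustrative, not stencil names:

    function iterateRRTConnect() {
        var q_rand = randomConfig();             // uniform sample of the space
        var q_new = extendRRT(T_a, q_rand);      // step tree A from its nearest vertex
        if (q_new !== null) {                    // extension was not trapped
            if (connectRRT(T_b, q_new))          // greedily grow tree B toward q_new
                return "reached";                // trees joined: extract the path
        }
        var tmp = T_a; T_a = T_b; T_b = tmp;     // alternate the roles of the trees
        return "extended";
    }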

A few other details to be aware of when implementing 2D RRT-Connect:

If properly implemented, your RRT-Connect implementation should produce results similar to the image below, although the inherent randomness of the algorithm will mean that the sampled states and final path will be slightly different:

Getting Started in Configuration Space

The core of this assignment is to complete kineval.robotRRTPlannerInit() and robot_rrt_planner_iterate() in kineval/kineval_rrt_connect.js. This file and the collision detection file kineval/kineval_collision.js have already been included in home.html for you:


    <script src="kineval/kineval_rrt_connect.js"></script>
    <script src="kineval/kineval_collision.js"></script>

The code stencil will automatically load a default world. A different world can be specified as an appended parameter within the URL: "home.html?world=worlds/world_name.js". The world file specifies the global objects "robot_boundary", which describes the min and max values of the world along the X, Y, and Z axes, and "robot_obstacles", which contains the locations and radii of sphere obstacles. To ensure the world is rendered in the display and available for collision detection, the geometries of the world are included through the provided call to kineval.initWorldPlanningScene() in kineval/kineval.js.
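The shape of a world file is roughly as follows, where the field names mirror the provided worlds and the values are placeholders:

    // box extents of the world: [min_xyz, max_xyz]
    robot_boundary = [[-5, 0, -5], [5, 5, 5]];
    // sphere obstacles as center location and radius
    robot_obstacles = [
        { location: [[1],[1],[1],[1]], radius: 0.5 },
        { location: [[-2],[0.5],[3],[1]], radius: 1.0 }
    ];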

Collision Detection Setup

In the search canvas world, a collision detection function was provided for you. For RRT-Connect in robot configuration space, you will need to start by completing the collision detection feature yourself. The main collision detection function used by configuration-space RRT-Connect is kineval.robotIsCollision() (in kineval/kineval_collision.js), which detects robot-world collisions with respect to a specified world geometry.

Worlds are specified as a rectangular boundary and sphere obstacles. A collection of worlds are provided in the "worlds/" subdirectory of kineval_stencil. The collision detection system performs two forms of tests: 1) testing of the base position of the robot against the rectangular extents of the world, which is provided by default, and 2) testing of link geometries for a robot configuration against spherical objects, which depends on code you will write.

Collision testing for links in a configuration should be performed by AABB/Sphere tests that require the bounding box of each link's geometry in the coordinates of that link. This bounding box is computed for you by the following code within the loop inside kineval.initRobotLinksGeoms() in kineval.js:


    // For collision detection,
    // set the bounding box of robot link in local link coordinates
    robot.links[x].bbox = new THREE.Box3;
    // setFromObject returns world space bbox
    robot.links[x].bbox = robot.links[x].bbox.setFromObject(robot.links[x].geom);
  

As you write the collision test, you can thus access the AABB for any robot link as robot.links[x].bbox. This object contains two elements, max and min, that contain the maximum and minimum corners of the link's bounding box, specified in the link's local coordinate frame.
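A standard sphere-vs-AABB test clamps the sphere center to the box and compares the clamped distance against the radius; the sketch below assumes the obstacle center has already been transformed into the link's local frame (for example, by the inverse of the link's world transform):

    function sphere_intersects_bbox(center_local, radius, bbox) {
        var d2 = 0;  // squared distance from center to the closest box point
        for (var i = 0; i < 3; i++) {
            var c = center_local[i][0];
            var clamped = Math.min(Math.max(c, bbox.min.getComponent(i)),
                                   bbox.max.getComponent(i));
            d2 += (c - clamped) * (c - clamped);
        }
        return d2 < radius * radius;  // collision if the sphere reaches the box
    }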

Even before your planner is implemented, you can use the collision system interactively with your robot. The provided kineval.robotIsCollision() function is called for you during each iteration from my_animate() in home.html:


    // determine if robot is currently in collision with world
    kineval.robotIsCollision();

Completing Collision Detection

To complete the collision system, you will need to modify the forward kinematics calls in kineval/kineval_collision.js. Specifically, you will need to perform a traversal of the forward kinematics of the robot for an arbitrary robot configuration within the function kineval.poseIsCollision(). kineval.poseIsCollision() takes in a vector in the robot's configuration space and returns either a boolean false for no detected collision or a string with the name of a link that is in collision. As a default, this function performs base collision detection against the extents of the world. For collision detection of each link, this function will make a call to a function that you create, robot_collision_forward_kinematics(), to recursively test for collisions along each link. Your collision FK recursion should use the link collision function traverse_collision_forward_kinematics_link(), which is provided in kineval/kineval_collision.js, along with a joint traversal function that properly positions the link and joint frames for the given configuration.

Some pointers about your collision FK traversal:

If successful up to this point, you should be able to move the robot around the world and see the colliding link display a red wireframe when a collision occurs. There could be many links in collision, but only one will be highlighted, as shown in the following examples:

Implementing and Invoking the Planner

Your motion planner will be implemented in the file kineval/kineval_rrt_connect.js through the functions kineval.robotRRTPlannerInit() and robot_rrt_planner_iterate(). This implementation can be a port of your 2D RRT-Connect, but it will require some updates to work in the configuration space of KinEval robots. The kineval.robotRRTPlannerInit() function should be modified to initialize the RRT trees and other necessary variables. The robot_rrt_planner_iterate() function should be modified to perform a single RRT-Connect iteration based on the current RRT trees.

Basic RRT tree support functions are provided for initialization, adding configuration vertices (which renders "breadcrumb" indicators of base positions explored), and adding graph edges between configuration vertices. The planner iterate function should not use a for loop to perform multiple planning iterations, as this will cause the browser to block and become unresponsive. Instead, the planner will be continually called asynchronously by the code stencil until a motion plan solution is found.

Important: Your planner should be constrained such that the search does not consider configurations where the base is outside the X-Z plane. Specifically, the base should not translate along the Y axis, and should not rotate about the X and Z axes.

Once implemented, your planner will be invoked interactively by first moving the robot to an arbitrary non-colliding configuration in the world and then pressing the "m" key. The "m" key will request the generation of a motion plan. The goal of a motion plan will always be the home configuration, as defined in the introduction to this assignment. While the planner is working, it will not accept new planning requests, although you can still move the robot around while the planner is executing.

Planner Output

The output of your planner will be a motion path in a sequentially ordered array (named kineval.motion_plan[]) of RRT vertices. Each element of this array contains a reference to an RRT vertex with a robot configuration (.vertex), an array of edges (.edges), and a threejs indicator geometry (.geom). Once a viable motion plan is found, this path can be highlighted by changing the color of the RRT vertex "breadcrumb" geom indicators. The color of any configuration breadcrumb indicator in a tree can be modified, such as in the following example for red:


  tree.vertices[i].geom.material.color = {r:1,g:0,b:0};

The user should be able to interactively move the robot through the found plan. Stencil code in user_input() within kineval_userinput.js will enable the "n" and "b" keys to move the robot to the next and previous configuration in the found path, respectively. These user key presses will respectively increment and decrement the parameter kineval.motion_plan_traversal_index such that the robot's current configuration will become:


  kineval.motion_plan[kineval.motion_plan_traversal_index]

Note: we are NOT using robot.joints[...].control to execute the found path of the robot. Although this can be done, the collision system does not currently test for configurations that occur due to the motion between configurations.

The result of your RRT-Connect implementation in configuration space should look similar to this path found in the worlds/world_s.js world:

Testing

Make sure to test all provided robot descriptions from a reasonable set of initial configurations within all of the provided worlds, ensuring that:

Warning: Respect Configuration Space

The planner should produce a collision-free path in configuration space (over all robot DOFs) and not just the movement of the base on the ground plane. If your planner does not work in configuration space, it is sure to fail tests used for grading.

Graduate Section Requirement

In addition to the requirements above, students in the graduate section must also implement the RRT-Star motion planning algorithm for the 2D search canvas. You will need to complete the iterateRRTStar() function stencil in project_pathplan/rrt.js for this feature. Part of this assignment is an exercise in how to conceptualize implementation details of an algorithm from a robotics paper. Because of this, you will need to refer to the linked paper for details on how to implement the RRT-Star algorithm. Note: The course staff will not provide assistance with RRT-Star, so we strongly encourage high-level discussion of the algorithm among students on the assignment channel or in pod meetings.

Advanced Extensions

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by adding the capability of motion planning to an arbitrary robot configuration goal.

Of the 4 possible advanced extension points, two additional points for this assignment can be earned by using the A-star algorithm for base path planning in combination with RRT-Connect for arm motion planning.

Of the 4 possible advanced extension points, one additional point for this assignment can be earned by writing a collision detection system for two arbitrary triangles in 2D using a JavaScript/HTML5 canvas element.

Of the 4 possible advanced extension points, two additional points for this assignment can be earned by writing a collision detection system for two arbitrary triangles in 3D using JavaScript/HTML5 and threejs or a canvas element.

Of the 4 possible advanced extension points, four additional points for this assignment can be earned by implementation of triangle-triangle tests for collision detection between robot and planning scene meshes.

Of the 4 possible advanced extension points, three additional points for this assignment can be earned by implementation of cubic or quintic polynomial interpolation (Spong Ch. 5.5.1 and 5.5.2) across configurations returned in a computed motion plan.

Of the 4 possible advanced extension points, four additional points for this assignment can be earned by implementing an approved research paper describing a motion planning algorithm.

Project Submission

For turning in your assignment, ensure your completed project code has been committed and pushed to the master branch of your repository.

Assignment 7: The best use of robotics?

Video Presentation due 11:59pm, Friday, December 4, 2020

Scenario: An investor is considering giving you 20 million dollars (cold hard USD cash, figuratively). This investor has been impressed by your work with KinEval and other accomplishments while at the University of Michigan. They are convinced you have the technical ability to make a compelling robot technology... but, they are unsure how this technology could produce something useful. Your task is to make a convincing pitch for a robotics project that would yield a high return on investment, as measured by some metric (financial profit, good for society, creation of new knowledge, etc.).

You will get 2 minutes to make a pitch to develop something useful with robots. Consider the instructor and your classmates as the people that need to be convinced. As a guideline, your pitch should address an opportunity (presented by a need or a problem), your planned result (as a system, technology, product, and/or service), and how you will measure successful return on investment. Return on investment can be viewed as financial profit (wrt. venture capital), good for society (wrt. a government program), or creation of new knowledge or capabilities (wrt. a grant for scientific research). Remember, the purpose is to convince and inspire about what is possible, rather than dive into specifics.

The last scheduled class period and a little more (December 7, 1:30-4:30pm) will be dedicated to a screening of student video presentations to pitch ideas on the best use of robotics.

Please post a link to your presentation video on the "#asgn7-best-use" discussion channel before 11:59pm on Friday December 4th. Your first slide must include the title of your presentation, your name, and your Michigan unique name.

The pitch judged to be the most convincing will get first dibs.



Additional Materials

Appendix: Git-ing Started with Git

Using version control effectively is an essential skill for both the AutoRob course and, more generally, contributing to advanced projects in robotics research and development. Git is arguably the most widely used version control system at the current time. Examples of the many robotics projects using Git include: Lightweight Communications and Marshalling, the Robot Operating System, Robot Web Tools, Fetch Robotics, and the Rethink Baxter. To help you use Git effectively, the course staff has added the tutorials below for getting started with Git. This is meant to be a starting guide to using Git version control and the bash command shell. For a more complete list of commands and features of Git, you can refer to the following guides: the Git Pro book or the Basic Git command line reference for Windows users. An interactive tutorial for Git is available at LearnGitBranching.

Installing Git

The AutoRob course assumes Git is used from a command line terminal to work with a Git hosting service, such as GitHub, Bitbucket, or EECS GitLab. Such terminal environments are readily available in Linux and Mac OSX through their respective terminal programs. For MS Windows, we recommend Git Bash, which can be downloaded from the Git for Windows project. Several other viable alternative Git clients exist, such as the GUI-based GitKraken.

Git can be installed on Linux through the package management system used by your Linux distro, likely with one of the following commands:


    # Fedora / RHEL / CentOS
    sudo yum install git-all

    # Debian / Ubuntu
    sudo apt install git

For Mac OSX, Git can be installed on its own using the Git-OSX-Installer or as part of larger set of Xcode build tools.

If you open the "Git Bash" program on Windows or the "Terminal" program on Mac OSX or Linux, you should see a shell environment that looks something like this (screenshot from an older version of Windows Git Bash):

If you have Git installed, you should should be able to enter the "git" command and see the following usage information printed (screenshot from OSX):

Cloning your repository

The most common thing that you will need to do is pull and push files from and to your Git hosting service. Upon opening Git Bash or the terminal, you will need to know the location of both your GitHub/Bitbucket/GitLab repository on the web and your Git workspace on your local computer. The first main step is to clone your remote repository onto your local computer. Towards this end, determine your current directory, assuming you will use this directory to create a workspace. For Linux and OSX, the terminal should start in your home directory, often "/home/username" or "/Users/username". For Git Bash on Windows, the default home directory location could be the Documents folder in your user directory, or the general user folder within "C:\Users".

From your current directory, you can use Bash commands to view and modify the contents of directories and files. You can see a list of the files and folders that can be accessed using ls (list) and change the current folder using the command cd (change directory), as shown below. If the directory contains files in addition to folders, but you would like a list of just the folders, then the command ls -d */ can be used instead of ls. Below is a quick summary of relevant Bash commands (or reference the cheat sheet here):
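As a minimal sketch of these commands in use (the directory name "autorob" below is only illustrative):


    pwd          # print the path of the current working directory
    ls           # list the files and folders in this directory
    ls -d */     # list only the folders
    cd autorob   # change into a folder named "autorob"
    cd ..        # move back up to the parent directory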

Once you have navigated to the directory where you want to create your workspace, you are ready to clone a copy of your remote repository and populate it with files for AutoRob projects. It is assumed that you have already created a repository on your Git hosting service, given the course staff access to this repository, and provided a link to your repository to the course staff. You will need the repository link in the form of "https://github.com/user_name/repository_name.git" if you are using HTTPS (default) or "git@github.com:user_name/repository_name.git" if you are using SSH (only if you have set up your SSH keys). You'll use this link to clone a copy of your remote repository onto your local machine using the git command below. This command will clone the repository contents into a subdirectory labeled with the name of the repository:


    git clone [repository URL link]

This directory should be listed and inspected to ensure it has been cloned with the contents of the repository, matching the remote repository from your Git hosting service. If this is a new repository, it is no problem for this directory to be empty:


    ls [repository_name]

After cloning has finished, you can also check for differences between the files on your computer and the remote repository by running the "git status" command from within your newly-created directory, as shown below. If you receive the message shown in the example below, then there are no differences. If there are differences, git status will list the differing files, highlighted in red.


    $ git status
    On branch master
    Your branch is up-to-date with 'origin/master'.
    nothing to commit, working directory clean

Important: workspace is not the same as repository

You should now have a local copy of your repository. It is critical to note that your local repository is different from your current workspace. Your workspace is not automatically tracked by the version control system and should be considered ephemeral. Any changes made to your workspace must be committed into the local repository to be recognized by the version control system. Further, any changes committed to your local repository must also be pushed remotely to be recognized by your Git hosting service. Thus, any changes made to your workspace can be lost if not committed and pushed, which will be discussed more in later sections.
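As a rough summary of how changes flow between these three locations:


    [workspace] --(git add, git commit)--> [local repository] --(git push)--> [remote repository]

    [workspace and local repository] <----------------(git pull)------------ [remote repository]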

Populating your repository with project stencil code

Use the "git remote" command to add a second remote repostiory to your new local repository:


    # enter your newly-cloned local repository
    cd [repository_name]
    # register the course stencil repository as a second remote named "upstream"
    git remote add upstream https://github.com/autorob/kineval-stencil.git

Now pull the files from the autorob remote with the following command:


    git pull upstream master

Note: The above command may not work if your local repository has preexisting commits in it. ONLY if you are having trouble with the pull command, you can try the following command instead:


    git fetch upstream
    git reset --hard upstream/master

The "reset --hard" command will erase files and history, so be extra careful that you are not overwriting an important repository here!

Testing out the stencil code

Your folder should now be populated with the KinEval files. Open "home.html" in a web browser and ensure you see the starting point page pictured below:

If your browser throws an error when loading "home.html", one potential cause is that this browser disallows loading of local files. In such cases, the browser will typically report a security error in the console. This security issue is avoided by serving the KinEval files from an HTTP server. Such an HTTP server is commonly available within distributions of modern operating systems. Assuming Python is installed on your computer, you can start an HTTP server with the following command from your workspace directory, and then view the page at localhost:8000:


    python -m SimpleHTTPServer    # Python 2
    python3 -m http.server        # Python 3

Alternatively, if you have nodejs installed, you can install and start an HTTP server with the following commands from your workspace directory, and then view the page at localhost:3000:


    npm install simple-server
    node simple-server

Commit and push to update your repository

Whenever you make any significant changes to your repository, these changes should be committed to your local repository and pushed to your remote repository. Such changes can involve adding new files or modifying existing files in your local workspace. For such changes, you will first commit changes from your workspace to your local repository using the "git add" and "git commit" commands:


    git add [FileName]                            # stage the file for the next commit
    git commit -m "message describing changes"    # record the staged changes locally

and then push these changes from your local repository to the synced repository on your Git hosting service:


    git push

By default, this commit will occur on the "master" branch of your repository.
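If you are unsure which branch you are currently on, the standard "git branch" command will list your local branches, with the current branch marked by an asterisk:


    git branch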

Note: the changed files must be located in the correct repository folder on your local computer, and these commands should be performed in the local workspace directory. Below is a more detailed summary of git commands for adding files from your workspace to your repository:
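As a brief sketch of commonly used staging commands (all standard Git):


    git add [FileName]    # stage a single file
    git add .             # stage all changes in the current directory and below
    git add -u            # stage modifications and deletions of already-tracked files
    git status            # review what is staged before committing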

Note: If you are unsure about the options to use with these commands or any other git command, "-h" is your friend. Try the following commands:


    git add -h
    git commit -h
    git push -h

Once you have committed and pushed, your changes have been safely stored and tracked remotely. The local workspace could now be deleted without concern. This local workspace can also be updated with changes to the remote repository by pulling.

Pulling remote changes

Changes can be made to your remote repository, potentially by other collaborators, without being continuously tracked by your local repository. This can lead to potential versioning conflicts when committed changes contradict each other. For the AutoRob course, versioning conflicts should not be a problem because commits to your repository, other than those from the course staff, should be yours alone. That said, one good practice is to ensure your workspace, local repository, and remote repository are synced before making any changes. A brute force method for doing this is to re-clone your repository each time you begin to make changes. Another option is to pull remote changes into your local repository and workspace using the git pull command:


    cd [repository_name]
    git pull

Below is a more detailed summary of git commands for pulling and fetching:
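As a brief sketch (all standard Git commands), git pull is essentially a fetch followed by a merge; the two steps can also be performed separately if you want to inspect remote changes before merging them:


    git fetch                       # download remote changes without modifying your workspace
    git diff master origin/master   # inspect differences between local and remote master
    git merge origin/master         # merge the fetched changes into your current branch
    git pull                        # fetch and merge in a single step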

Branching

Branching is an effective mechanism for work in a repository to be done in parallel, with changes merged at a later point. A branch essentially creates a copy of your work at a particular version. Branches are independently tracked by the version control system and can be merged together when requested (which can result in a "pull request" when you're working on a repository in collaboration with others). The larger story of branching and merging is outside the scope of AutoRob.

The working version of your code, which you submit for grading, is expected to be in the master branch of your repository. When working on a new assignment, it is recommended that you create a branch for this new work. This allows your stable code in the master branch to remain undisturbed while you continue to modify your code. Once your work for this assignment is done, you can then update your master branch by merging in your assignment branch. Stylistically, it is helpful to use a branch name like Assignment-X for your assignment branch for project number X. A sketch of this full cycle is given at the end of this section.

The simplest means for branching in this context is to use the branching feature from the webpage of your remote repository. From GitHub, simply select the master branch from the "Branch: " button and enter the name of the branch to be created. From Bitbucket, select the "Branches" icon from the left-hand toolbar and follow the instructions for branch creation. If successful, you should see a list of branches that can each be inspected for their respective contents. Branches can also be deleted from this interface. You will need to pull from your remote repository after creating any branches from this interface to see them in your local repository.

A branch can also be created from the command line by the following, which will create a copy of the current branch:


    git branch [branch_name]

You can switch between branches with the following command:


    git checkout [branch_name]

as well as clone a specific branch from a repository:


    git clone -b [branch_name] [repository URL link]
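Putting these commands together, a typical assignment cycle might look like the following sketch (the branch name Assignment-X is illustrative):


    git checkout master         # start from your stable master branch
    git branch Assignment-X     # create a branch for the new assignment
    git checkout Assignment-X   # switch to the assignment branch
    # ...edit, add, and commit your work as usual...
    git checkout master         # return to master once the assignment works
    git merge Assignment-X      # merge the finished assignment into master
    git push                    # update the remote repository for grading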

Good luck and happy hacking!