
N° d’ordre : 4249 ANNÉE 2010

THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université Européenne de Bretagne

pour le grade de

DOCTEUR DE L’UNIVERSITÉ DE RENNES 1 Mention : Informatique

Ecole doctorale Matisse

présentée par

Mohammed Abdel-Rahman MAREY préparée à l’INRIA Rennes – Bretagne Atlantique & IRISA.

Equipe d’accueil : LAGADIC

Contributions to control modeling in visual servoing, task redundancy, and joint limits avoidance.

Thèse soutenue à Rennes le 16 Décembre 2010 devant le jury composé de :

Seth HUTCHINSON Professeur, University of Illinois at Urbana-Champaign, USA / rapporteur

Philippe MARTINET Professeur, Institut Français de Mécanique Avancée - Clermont-Ferrand, France / rapporteur

Wisama KHALIL Professeur, Ecole Centrale de Nantes, France / examinateur

Patrick RIVES Directeur de Recherche, INRIA Sophia Antipolis -Méditerranée, France / examinateur

Eric MARCHAND Professeur, Université de Rennes 1, France / examinateur

François CHAUMETTE Directeur de Recherche, INRIA Rennes - Bretagne Atlantique, France / directeur de thèse

To my brother Wael ...


Acknowledgment

The research described in this thesis was carried out from October 2006 to December 2010 at the Institut National de Recherche en Informatique et en Automatique (INRIA), centre Rennes - Bretagne Atlantique, and at the Institut de Recherche en Informatique et Systèmes Aléatoires (IRISA) in Rennes. The thesis was undertaken in the Lagadic group directed by François Chaumette. The four years of my thesis work were financially supported by the Egyptian Government through a scientific mission from the Ministry of Higher Education and Scientific Research, in addition to supplementary support from the Lagadic group during the last year.

First of all, I am deeply indebted to my supervisor François Chaumette, INRIA Senior Researcher, for his encouragement, helpful guidance, great care, valuable advice, continuous and indispensable assistance, and his great help in interpreting the results of this work. I would like to thank him for his confidence in me and his availability throughout my thesis. The numerous discussions and his motivating investment in the research surrounding my thesis are more than greatly appreciated. It is truly my pride to have performed my Ph.D. research under his supervision, and this thesis would not have been possible without it.

I would like to thank the members of the jury for having accepted to assess my thesis:

Seth HUTCHINSON, Professor at the University of Illinois at Urbana-Champaign, USA, for the fruitful discussions that we had, and for the time he spent rigorously reading and analyzing the thesis and writing his report.

Philippe MARTINET, Professor at the Institut Français de Mécanique Avancée - Clermont-Ferrand, France, for his interest in my work and the time he spent writing his report.

Wisama KHALIL, Professor at Ecole Centrale de Nantes, France, for honoring me by accepting to be a member of the jury and for his interest in my work.

Patrick RIVES, Senior Researcher at INRIA Sophia Antipolis-Méditerranée, France, for

accepting to be a member of the jury and for his useful comments.

Eric MARCHAND, Professor at the Université de Rennes 1, France, for presiding over the jury.

I would like to thank again Prof. Seth HUTCHINSON for his valuable and useful comments about this work through the several discussions we had, either when he visited our group in Rennes or at the Beckman Institute in Champaign, during the week I spent in his lab. That week was a great opportunity to have several long and deep discussions with him about my work, and a great chance to learn about the ongoing research in his lab on high-level control systems, artificial intelligence and control-system security from his wonderful team members James Davidson, Sal Candido and the others. My sincere thanks are due to Seth for his detailed review and excellent advice while performing this work.

I would like to thank Prof. Abderrahmane Kheddar, director of the Joint Robotics Laboratory at the National Institute of Advanced Industrial Science and Technology (CNRS-AIST JRL), Tsukuba, Japan, for facilitating my visit to his lab. I would also like to express my deepest thanks to Olivier Stasse for our discussions, for his help, and for his demonstration of the HRP2, and to thank all the researchers, Ph.D. students and master students in the lab for many useful discussions about their work in the field of humanoid robots.

I would like to thank Prof. Farrokh J. Sharifi, director of the Robotics and Manufacturing Automation Laboratory at Ryerson University, Toronto, Ontario, Canada, who visited our lab for six months, during which we had several useful discussions and worked on classical Kalman filters and on a new one for pose estimation in visual servoing.

I would also like to thank our excellent engineer Fabien Spindler for his continuous and great support during the experimental phases, which allowed me to obtain accurate results thanks to accurately calibrated camera and robot systems; Prof. Eric Marchand for valuable discussions about the ViSP library and joint limits avoidance; Dr. Alexandre Krupa for useful discussions and his latest medical robot demonstration; and Prof. El Mustapha Mouaddib for his useful comments about the final PhD presentation.

I would like to thank my friends and colleagues for their friendship and support, including those who shared their office with me, giving me the chance to discuss several issues with them: Fabien Servant, who was the first person I met at the train station in Rennes; Mahindra, the first person I went out with in Rennes; Albert Diosi, for several useful discussions; Anthony Saunier, for his help, support, remarkable kindness and deep understanding; Nicolas Melchior, for his continuous help and of course for his friendship; as well as Andrea Cherubini, Guillaume Fortier and Laurent Coutard. I also thank those who make up the life of Lagadic, including Christophe Collewet, Nicolas Mansard, Claire Dune, Roméo Tatsambon-Fomena, Odile Bourquardez, Rafik Mebarki, Muriel Pressigout, Thi Thanh Hai Tran, Amaury Dame, Céline Teulière, David Folio, Xiang Wang, Olivier Kermorgant, Tao Li, Pierre Martin, Caroline Nadeau, Antoine Petit, Guillaume Caron, Deukhee Lee, Hideaki Uchiyama, Manikandan Bakthavatchalam, Filip Novotny, Romain Tallonneau, Jean Laneurit, Ravo and Emilio.

I would like to thank Claire Dune for her efforts in correcting my French paper; Nicolas Mansard for his advice and encouragement the first time I tried to implement a new control law; Roméo Tatsambon-Fomena for helping me the first time I tried to use the robot; Andrea Cherubini for several useful discussions about mobile robots; and Caroline Nadeau for her nice demonstration of the medical robot. I would also like to thank Ryuta Ozawa, professor at Ritsumeikan University, Japan, for his friendship and several useful discussions during the year he spent in our group.

I would like to thank our assistant Céline Ammoniaux-Gharsalli for handling the administrative work so capably, which saved a lot of time for performing this research, as well as Odile Mazuret and Angélique Jarnoux. I would also like to thank the library staff for making relevant literature easily accessible during my Ph.D. work.

I would also like to thank all my friends in Rennes, especially Jennifer, Sofian, Alberto, Ciara, Patricia, Salome, Alexandro, Duygu, Arnaud, Maher, Bilal, Ghehan, Gawad, Adel, Ahmed and Mohammed Ghraba, Gehad, Mohammed El Agan, Rafik Zouari, Dr. Hany, Samar and Mahmoud, for their support and company during my stay in France. I would like to thank Emma Chaironi, without whom I would not have chosen to come to France for my PhD.

I would also like to thank Prof. Camelia Sobhi, the Egyptian cultural and scientific consul at the Egyptian embassy, for her support, continuous care and encouragement, as well as all the members of the Egyptian cultural office in Paris.

I would also like to thank Prof. Mohammed F. Tolba and Prof. Ashraf Saad, my Master's supervisors at the Faculty of Computer and Information Sciences (FCIS), Ain Shams University, Cairo, for their continuous support and encouragement.

I believe that we are the product of every moment we have lived, and that our mentality is created and formed by the difficulties we have had to deal with. The more difficulties we have faced and succeeded in overcoming, the greater our ability to deal with difficulties that may appear in the future, even if they are not of the same sort we experienced before. We simply gain the logic of thinking about how to avoid difficulties.

In my opinion, the only important point is to have the ability to free our mind from any constraints that may set limits on our way of thinking about the problems considered.

In this view, I would like to think of every moment in my life, and of every person who left a positive mark on my mentality and my life. In fact there are many; I will try to mention those who come to mind first, and I hope that the persons I did not mention will forgive me, because my memory is too poor to remember all of you, so please accept my apology and forgive me.

First, in my primary school, I would like to thank Mr. Mohammed Zakaria, who planted the hope of becoming an engineer in my mind and my heart. I would also like to thank Mr. Mohammed Shedid and Miss Zenab, Awatef, Ragaa and Khayria for caring about me and encouraging me when I was 6 to 12 years old, which allowed me to be one of the best students in Al-Shahed Adnan Al-Madany primary school, thanks to nothing but their efforts and encouragement. As for my friends during this time, I would like to thank Adel Telib, Osama Salah, Mohammed Barakat, Mohammed Yousif, Mohammed Ryad, Wael Hassan and Mohammed Mostafa.

Second, in my intermediate schools, I would like to thank all the teachers who taught me so much and encouraged me to keep working hard: first, Mr. Abd Alhamed in Al-Gahra intermediate school in Kuwait. As for my friends during this time, I would like to thank Ryad Edham and Refaat Yaakoob. Then, at Fathala Basha Barakat intermediate school in Menyet Al-Mourshed, Kafr Al-Sheekh, Egypt, I would like to thank Mr. Khaled Marey and Mrs. Seham Al-Somody for their encouragement. As for my friends during this time, I would like to thank Mohammed Afify, Ola Marey and all my other friends.

In my secondary schools, I would like to thank many teachers, especially Mr. Mohammed Abdal-Rahman in Ismaeil Al-Kabany Secondary School, and Mr. Amr, Mr. Mostafa and all my teachers without any exception in Al-Ahram Secondary School. As for my friends during this time, I would like to thank my best friends Husam Al-Talyawy and Magdy Al-Bagoury.

In the Faculty of Sciences, where I obtained my B.Sc. degree in Pure Mathematics and Computer Science, I would like to thank all the professors, doctors and teaching assistants: Prof. Khayrat, Prof. Nashat, Prof. Hatem Al-Henawy, Prof. Al-zahar, Prof. Bayoumy, Prof. Souwd, Prof. Lobna, Prof. Entesarat, Prof. Al-Herbety and Prof. Ilham Al-Sheref, in addition to all the teaching assistants of that time who have since become great doctors, including Dr. Ahmed, Dr. Yaser, Dr. Hatem, Dr. Christen, Dr. Maged, Dr. Khaled, Dr. Ghada, Dr. Sayed and all the other doctors; and my friends Mohammed Abd el Megeed, Rania Adel, Ahmed Gawesh, Yassen, Ossama Terra, Mohammed Amen, Hala Mosher and all my other friends.

I would also like to thank all the staff members in FCIS, including Prof. Saied Abd Al wahab, Prof. Essam Khalefa, Prof. Hatem Al Ayady, Prof. Saied Al ghonemy, Prof. Ahmed Hamad, Prof. Sayed Fadel, Prof. Ali Al Naem, Prof. Taha el Areef, Prof. Mahmoud Al gamal and Prof. Mohammed Roshdy, as well as all my friends and colleagues.

I would also like to thank those persons who have had a great impact on my personality: first of all, a person I will never forget, Eng. Magdy Ashour, as well as Mr. Abdel Megeed and Eng. Aktham, for all that I learned from them.

I owe my loving thanks to my parents. They have lost a lot due to my research abroad. Without their encouragement and understanding, it would have been impossible for me to finish this work. My special gratitude goes to my sister Walaa and my brothers Tamer and Alaa, and before all to my brother Wael, who passed away one year after I started this work. To him I present this work.

Mohammed Abd Al Rahman Marey


Contents

Table of contents 1

1 Introduction 5

1.1 Motivations . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 7

1.2 Contributions and thesis outline . . . . . . . . . . . . . . . . . . . . . . . 8

I Visual servoing 11

2 State of the art of visual servoing 13

2.1 Visual servoing . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 13

2.2 Camera-robot configurations . . . . . . . . . . . . . . . . . . . . . . . . . 14

2.2.1 Eye-in-hand configuration . . . . . . . . . . . . . . . . . . . . . . 14

2.2.2 Eye-to-hand configuration . . . . . . . . . . . . . . . . . . . . . . 14

2.3 Task functions in visual servoing . . . . . . . . . . . . . . . . . . . . . . . 16

2.3.1 Camera space control . . . . . . . . . . . . . . . . . . . . . . . . . 16

2.3.2 Joint space control . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.4 Selection of visual features . . . . . . . . . . . . . . . . . . . . . . . . . . 17

2.4.1 Geometric features . . . . . . . . . . . . . . . . . . . . . . . . . . 18

2.4.2 Photometric features . . . . . . . . . . . . . . . . . . . . . . . . . 21

2.4.3 Velocity field features . . . . . . . . . . . . . . . . . . . . . . . . 22

2.5 Visual servoing control laws . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.5.1 Classical approaches . . . . . . . . . . . . . . . . . . . . . . . . . 22

2.5.2 Enhanced visual servoing approaches . . . . . . . . . . . . . . . . 25

2.5.3 Hybrid and switching strategies in visual servoing . . . . . . . . . 27

2.5.4 Visual servoing as a minimization problem . . . . . . . . . . . . . 32

2.5.5 Problems in visual servoing . . . . . . . . . . . . . . . . . . . . . 34

2.6 Performance and robustness . . . . . . . . . . . . . . . . . . . . . . . . . 36

2.7 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 37


3 New first order control scheme 41

3.1 Classical image-based control matrix . . . . . . . . . . . . . . . . . . . . 41

3.2 New controller with a behavior parameter . . . . . . . . . . . . . . . . . . 42

3.3 Pseudo-Gas control law . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.3.1 Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43

3.3.2 Stability analysis . . . . . . . . . . . . . . . . . . . . . . . . . . . 44

3.4 Motion along and around the optical axis . . . . . . . . . . . . . . . . . . . 44

3.4.1 Case 1: tz = (Z → Z∗) . . . . . . . . . . . . . . . . . . . . . . . 45

3.4.2 Case 2: rz = 90o and tz = (Z → Z∗) . . . . . . . . . . . . . . . . 47

3.4.3 Case 3: rz = 180o & tz = (Z → Z∗) . . . . . . . . . . . . . . . . 50

3.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 52

3.5.1 Simulation results: Motion along and around camera optical axis . . 52

3.5.2 Experimental results: Singularities . . . . . . . . . . . . . . . . . . 56

3.5.3 Simulation results: Local minima . . . . . . . . . . . . . . . . . . 58

3.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 63

4 New second order control schemes 65

4.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 65

4.1.1 Different singular configurations in IBVS . . . . . . . . . . . . . . 66

4.1.2 Avoiding singularities in IBVS . . . . . . . . . . . . . . . . . . . . 67

4.2 Analysis of classical control schemes . . . . . . . . . . . . . . . . . . . . . 68

4.3 New second order control schemes . . . . . . . . . . . . . . . . . . . . . . 69

4.3.1 Halley’s Method for scalar case . . . . . . . . . . . . . . . . . . . 69

4.3.2 Modeling the new control schemes . . . . . . . . . . . . . . . . . . 70

4.4 Hessian Matrices of an image point . . . . . . . . . . . . . . . . . . . . . . 72

4.4.1 Modeling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 72

4.4.2 Positiveness of Hx and Hy . . . . . . . . . . . . . . . . . . . . . . 73

4.5 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 74

4.6 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 75

II Redundancy formalism 79

5 State of the art in redundancy 81

5.1 Redundancy . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 81

5.2 Projection operators . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 82

5.2.1 Classical projection operator Pe . . . . . . . . . . . . . . . . . . . 82

5.2.2 Bidirectional projection operator Pz . . . . . . . . . . . . . . . . . 82

5.3 Redundancy-based task-priority formalism . . . . . . . . . . . . . . . . . . 84

5.3.1 Inverse-based projection methods (successive, augmented) . . . . . 84

5.3.2 Transpose-based projection methods (successive, augmented) . . . 86

5.3.3 Unified formula for redundancy . . . . . . . . . . . . . . . . . . . 86

5.3.4 Efficient task-priority using the classical operator Pe . . . . . . . . 87


5.3.5 Tasks priority and tasks sequencing . . . . . . . . . . . . . . . . . 90

5.4 Joint limit avoidance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 90

5.4.1 Potential field approach . . . . . . . . . . . . . . . . . . . . . . . . 91

5.4.2 Gradient projection method . . . . . . . . . . . . . . . . . . . . . 91

5.4.3 Redundancy-based iterative approach . . . . . . . . . . . . . . . . 94

5.4.4 Manipulability measure approach . . . . . . . . . . . . . . . . . . 95

5.4.5 Weighted least norm solution approach . . . . . . . . . . . . . . . 96

5.4.6 General weighted least norm solution approach . . . . . . . . . . . 97

5.4.7 Constrained Newton optimization methods . . . . . . . . . . . . . 97

5.4.8 Constrained quadratic optimization method . . . . . . . . . . . . . 98

5.4.9 Improved damped least square approach . . . . . . . . . . . . . . . 98

5.4.10 Fuzzy logic-based approach . . . . . . . . . . . . . . . . . . . . . 99

5.4.11 Neural network-based approach . . . . . . . . . . . . . . . . . . . 99

5.4.12 Comparison between different JLA methods . . . . . . . . . . . . 100

5.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 101

6 New large projection operator for the redundancy framework 105

6.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 105

6.2 New projection operator P‖e‖ . . . . . . . . . . . . . . . . . . . . . . . . . 106

6.2.1 Analytical study of P‖e‖ . . . . . . . . . . . . . . . . . . . . . . . 107

6.2.2 Switching based projection operator . . . . . . . . . . . . . . . . . 108

6.2.3 Stability analysis of qη . . . . . . . . . . . . . . . . . . . . . . . . 109

6.2.4 Studying qη and Pη as e→ 0 (t→∞) . . . . . . . . . . . . . . . 110

6.2.5 Main task function . . . . . . . . . . . . . . . . . . . . . . . . . . 112

6.3 Efficient task-priority using the new operator P‖e‖ . . . . . . . . . . . . . . 113

6.3.1 Two tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

6.3.2 Several tasks . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 114

6.3.3 Redundancy-based articular-task-priority . . . . . . . . . . . . . . 115

6.4 Experimental Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 116

6.4.1 New Projection operator using obj1 . . . . . . . . . . . . . . . . . 117

6.4.2 Comparison between Pλ and Pe, classical and efficient task-priority, using obj2 . . . . . . . . . . . . . . . . . . . . . . . . . . . 122

6.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 126

7 New joint limits avoidance strategy 127

7.1 Introduction . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 127

7.2 New joint limits avoidance strategy . . . . . . . . . . . . . . . . . . . . . . 128

7.2.1 Activation and sign function . . . . . . . . . . . . . . . . . . . . . 128

7.2.2 Tuning function . . . . . . . . . . . . . . . . . . . . . . . . . . . . 129

7.2.3 Adaptive gain function . . . . . . . . . . . . . . . . . . . . . . . . 130

7.2.4 Behavior analysis of qia . . . . . . . . . . . . . . . . . . . . . . . . 130

7.2.5 Additional secondary tasks . . . . . . . . . . . . . . . . . . . . . . 132

7.3 Integrating the new joint limits avoidance with kernel approach . . . . . . . 133


7.3.1 Gain vector a . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 133

7.3.2 Gain vector a and additional secondary tasks . . . . . . . . . . . . 135

7.3.3 Transition switching inner loop to solve the discontinuity problem . 136

7.4 Results . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 137

7.4.1 Case A1: Single joint limits avoidance . . . . . . . . . . . . . . . . 137

7.4.2 Case A2: Additional secondary task sent to revolute joint, classical

task-priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 138

7.4.3 Case A3: Additional secondary task sent to revolute joint, efficient

task-priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 139

7.4.4 Case A4: Additional secondary task sent to prismatic joint, classical

task-priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 140

7.4.5 Case A5: Additional secondary task sent to prismatic joint, efficient

task-priority . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 141

7.4.6 Case B1: Basic kernel approach . . . . . . . . . . . . . . . . . . . 144

7.4.7 Case B2: Basic kernel approach with gamma coefficients . . . . . . 145

7.4.8 Case B3: New joint limits avoidance strategy integrated to kernel

approach . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 146

7.5 Conclusion . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 147

8 General conclusions 149

List of publications 157

Bibliography 158

Chapter 1

Introduction

Much research in robotics aims at involving additional exteroceptive sensors, which contributes strongly to the ever-growing demand for improved autonomy, robustness, flexibility and precision. Good sensing abilities are indeed essential for robots to gain higher flexibility and autonomy. When robots operate in unstructured environments, sensory information is included in the control loop. Many different sensors have been developed over the past years to fit the requirements of different but very specific tasks. The measurements these sensors provide, which reflect variations in some phenomena, are used in the control loop to adapt the task execution to these variations. The most common sensors in robotics are force/torque sensors, laser range finders, ultrasonic sensors and cameras. Among all sensors, vision is the most complementary with respect to the sensory information it provides to the control loop, since visual sensors are a powerful means for a robot to perceive its environment. In particular, visual feedback from a camera, when used in a correct manner, guarantees accurate positioning, robustness to calibration uncertainties, and reactivity to environmental changes.

Visual servoing is a viable method for robot control, based on using visual information extracted from images to close the robot control loop. Various areas are involved in visual servoing, as shown in Fig. 1.1. Visual information obtained from image processing can be used to extract 2D features. It can also be used to estimate pose parameters by employing a pose estimation algorithm from computer vision; the estimated pose is then transformed into 3D features. These 2D and/or 3D features are then used in the control scheme. Optimization techniques, robot dynamics and/or robot kinematics are used in the modeling phase of the control schemes. Advances in the power and availability of image processing have made the required computation possible at frame rate, so that the use of real-time video information for robot guidance is becoming an increasingly attractive proposition and the subject of much research. It enables visual servoing with sufficient accuracy, and many useful tasks can be accomplished. This makes visual servoing, that is, the closed-loop control of motion on the basis of image analysis, more and more recognized as an important tool in modern robot control.

Figure 1.1 – Visual servoing related areas

Two types of digital imaging sensors are usually used in computer vision and can be used for visual servoing. The first encodes light intensity to give intensity images acquired by cameras, while the second encodes shape and distance to give range images acquired by laser or sonar scanners.

With a vision sensor, which provides 2D measurements, the nature of the potential visual features is extremely rich, since it is possible to design visual servoing using both 2D features, such as the coordinates of characteristic points in the image, and 3D features, provided by a localization module operating on the extracted 2D measurements. This wide range of possibilities is the reason behind the major difficulty in visual servoing: building, selecting and designing, as well as possible, the visual data and the control scheme needed to obtain a suitable behavior of the system, based on all the available measurements.
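To make the notion of a control scheme concrete: in the classical image-based approach, studied in detail in chapter 3, the camera velocity is computed as v = −λ L̂⁺(s − s∗), where s gathers the current point coordinates, s∗ the desired ones, and L the interaction matrix of the points. The following Python sketch illustrates this law; the function names, gain value and the assumption of exactly known point depths are illustrative choices, not taken from the thesis.

```python
import numpy as np

def interaction_matrix(x, y, Z):
    """Interaction matrix of one image point (x, y), in normalized
    coordinates, at depth Z: it relates the point velocity (xdot, ydot)
    to the camera spatial velocity (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def ibvs_velocity(points, desired, depths, lam=0.5):
    """Classical image-based law v = -lam * pinv(L) * (s - s*).
    `points` and `desired` are lists of (x, y); `depths` the point depths."""
    L = np.vstack([interaction_matrix(x, y, Z)
                   for (x, y), Z in zip(points, depths)])
    e = (np.asarray(points) - np.asarray(desired)).ravel()
    return -lam * np.linalg.pinv(L) @ e

# At the desired configuration the error, hence the velocity, vanishes.
s_star = [(0.1, 0.1), (-0.1, 0.1), (-0.1, -0.1), (0.1, -0.1)]
v = ibvs_velocity(s_star, s_star, depths=[1.0] * 4)
print(np.allclose(v, 0.0))  # True
```

With four points forming a centered square, a pure scale difference between s and s∗ yields a pure translation along the optical axis, one of the canonical situations (tz = Z → Z∗) analyzed in chapter 3.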

Using vision in robot control makes it possible to solve different problems that can be handled safely from visual sensory data, without any contact with the environment, for example obstacle avoidance when the 3D models of the obstacles are known. However, several issues should be considered when vision sensors are used within a robot system: local or global stability, robustness, suitable trajectories for the robot and for the measurements in the image, maximum decoupling between the visual information and the controlled degrees of freedom, avoiding singularities and local minima of the control (interaction) matrix, keeping the features in the camera field of view, occlusion avoidance, and avoiding the robot joint limits.


1.1 Motivations

Figure 1.2 – Thesis scopes: problem domains and contributions

Visual servoing methods can be employed in almost all robotic systems that perform positioning and interaction tasks (for example space robots, mobile robots, robot arms, humanoid robots, ...; see Fig. 1.2). It involves working with different spaces in order to acquire the input signal, as well as transforming a motion from one space to another in order to execute the required tasks. Visual servoing encounters different sorts of problems, such as local minima, singularities and losing features from the camera field of view, which may be avoided either by changing the feature set or by modeling suitable control schemes. These problems motivate the research on modeling new visual servoing control schemes. Another type of problem, which requires defining avoidance tasks and selecting a mechanism to ensure their execution, is classified as a constrained problem: for example joint limit avoidance, obstacle avoidance, and occlusion avoidance. Avoiding this kind of problem is usually realized with gradient projection methods, which is why it is important to have a gradient projection method that allows these avoidance tasks to be performed better. These sorts of problems motivated us to accomplish the present work.


1.2 Contributions and thesis outline

The thesis is written in two parts after the introductory chapter (see Fig. 1.3). The first part is

about visual servoing control schemes and contains chapters 2, 3 and 4. It starts by reviewing

the state of the art in visual servoing control schemes, then new control schemes are proposed

and their experimental evaluations are presented. The second part is about redundancy and

joint limits avoidance. It starts by presenting the state of the art in redundancy formalisms

and joint limits avoidance strategies in robot control. A new large projection operator and a

new joint limit avoidance strategy are then proposed. This part contains chapters 5, 6 and 7.

Finally, the conclusions of these two parts are collected in chapter 8.

Figure 1.3 – Thesis structure

Part one: Visual servoing (chapters 2 to 4)

Chapter 2: State of the art in visual servoing: This chapter presents the state of the art in visual servoing in the light of the visual features used, the control schemes designed, and the problems that may appear.

Chapter 3: New first order control scheme: In this chapter we analyze different

control schemes by considering the most usual and simple features, which are the Cartesian

coordinates of image points. As for the control schemes, we consider three classical control

laws and we propose in this chapter two new ones. The first new control law follows an


hybrid strategy. It is based on a behavior parameter that can be used to tune the weight of

the current and the desired interaction matrix in the control law. We will see that in some

configurations where all other control schemes fail due to local minima or singularity prob-

lems, this new control law allows the system to converge. The second control law that we

propose is an attempt towards global asymptotic stability (GAS). Unfortunately, even though GAS can be obtained in the space of the chosen task function, we will see that it is not ensured in the

configuration space when redundant image point coordinates are used. This control scheme

is indeed subject to attractive local minima. The analysis of the control laws included in this

chapter is performed with respect to translational motion along and rotational motion around

the optical axis. As we will see, a singularity of the control law proposed in [Malis 04] will

be exhibited thanks to this analysis.

Chapter 4: New second order control schemes: In this chapter, we propose new

control schemes to try to reach a singular configuration. They are based on Halley’s method,

which uses a second order minimization step. After analyzing the behavior of the classi-

cal control schemes in a singular configuration, we present the new control schemes. They

are based on the Hessian matrices of the selected features. We thus determine the Hessian

matrices of the Cartesian coordinates of an image point. Finally, experimental results are

presented.

Part two: Redundancy (chapters 5 to 7)

Chapter 5: State of the art in task redundancy and joint limits constraints: In this chapter, we present the state of the art of the utilization of gradient projection methods (GPM) in robot control to inject constraints and additional tasks into the main task. The state of the art of avoidance frameworks and joint limits avoidance strategies is also presented.

Chapter 6: New large projection operator for the redundancy framework: In this chapter, we propose a new projection operator. Instead of considering all the components of the main task, only the norm of this task is used. Considering the norm of the

errors allows the corresponding projection operator to enlarge the permitted motions, at least

when the errors are still large. As we will see, using this new projection operator leads to a

less constrained problem since the task based on the norm is of rank one at maximum. Our

analytical studies show that this operator has to switch to the classical projection operator

when the norm of the total error approaches zero. A switching strategy for the projection

operator has thus been developed. Finally, an adaptive gain is also proposed to slow down

the convergence of the main task. It allows the secondary tasks to be active for longer, which

may be useful in practice when the secondary tasks really have to be taken into account (for

obstacles or joint limits avoidance for instance). The recursive augmented projection oper-

ator for redundancy-based task-priority is presented and its efficiency is compared with the

classical one.


Chapter 7: New joint limits avoidance strategy: In this chapter, we use the projec-

tion operator proposed in chapter 6 for injecting a joint limits avoidance task into the main

task. We first present a new avoidance technique that uses the gradient projection method.

This avoidance technique is based on three proposed functions: an activation function that

activates the avoidance task and sets its direction; an adaptive gain function that controls

the magnitude of the avoidance task; and a tuning function that ensures the smoothness of

the injection of the avoidance task into the main task. Then, the problem of adding addi-

tional secondary tasks to the main task to be performed simultaneously while ensuring the

joint limits avoidance is solved. These additional tasks could be used for moving the robot

away from the neighborhood of the joint limits, avoiding occlusions and obstacles, performing needed emergency motions, or keeping object features in the camera field of view. Finally,

the new avoidance strategy is integrated with the iterative approach to ensure the smoothness

of injecting the avoidance task into the main task, and a solution to the discontinuity

problem of this approach is proposed by performing an inner switching loop inside the visual

servoing loop.


Part I

Visual servoing


Chapter 2

State of the art of visual servoing

2.1 Visual servoing

Visual servoing is a well known flexible and robust technique to increase the accuracy and

the versatility of a vision-based robotic system [Weiss 87, Feddema 89a, Hutchinson 96,

Chaumette 06, Chaumette 07]. It consists in using vision in the feedback loop of a robotic control system, with fast image processing to provide a reactive behavior. Single or multiple

cameras can be used to track visual information in order to control a robot with respect to a

target [Shirai 73, Weiss 87, Hashimoto 93a, Hutchinson 96, Kragic 02]. The robotic task in visual servoing is specified in terms of image features extracted from a target object and their

use for controlling the robot/camera motion through the scene [Espiau 92, Hashimoto 93a].

The positioning task is expressed as the regulation to zero of a task function. The task func-

tion in visual servoing is defined to control the pose of the robot’s end-effector relative to

either a world coordinate frame or a frame of a manipulated object. Visual features extracted

in real time from the image are used to define a task function that depends on the robot con-

figuration and the time [Samson 91].

Several methods have been developed for controlling the robot using visual information.

Two basic approaches have been proposed namely position-based visual servoing (PBVS)

and image-based visual servoing (IBVS) [Sanderson 80, Weiss 87, Hutchinson 96]. Both

approaches have been used as bases for developing other schemes such as 2 1/2 D visual

servoing [Malis 99]. In position-based visual servoing (PBVS or 3D VS), the image measurements are processed in order to estimate the relative 3D pose between the camera and the

target, which is then used as an error signal for controlling the motion of the robot/camera

system toward its desired goal [Martinet 97] [Wilson 96]. 3D coordinates of points can also

be used [Martinet 96]. In image-based visual servoing (IBVS or 2D VS), the error is directly

computed in terms of features expressed in the image [Weiss 87, Feddema 89b, Espiau 92]. Two


main aspects have a great impact on the behavior of any visual servoing scheme: the selection

of the visual features used as input of the control law and the designed form of the control

scheme. On one hand, the same feature set gives different behavior when employed in dif-

ferent control schemes and on the other hand, the same control law gives different behavior

when it considers different feature sets. The behavior obtained is not always the required

one: selecting a specific feature set or a specific control scheme may lead to some stability

and convergence problems. In the following, a review of visual servoing is presented, focusing on modeling issues.

2.2 Camera-robot configurations

Two main configurations exist to combine camera(s) and robot for visual servoing applica-

tions. The first configuration, called eye-in-hand, consists of camera(s) mounted on the end

effector of the robot. A constant transformation between the camera frame and the end ef-

fector frame is defined and used for transforming the motions from the camera frame into

the end effector frame, the robot frame, or any other reference frame attached to the robot. In

the second configuration, which is called eye-to-hand, one or several cameras are placed in

the workspace to monitor a target, the end effector, or both. This configuration requires computing the transformation between the camera frame and the end-effector frame at each iteration. The former configuration is commonly used in visual servoing since it allows keeping the target(s) in the field of

view (for example when a grasping task requires ensuring the monitoring of both the object and a grasping tool attached to the end effector [Allen 93]). Hybrid configurations can also be constructed by combining eye-to-hand and eye-in-hand configurations [Flandin 00]. Within the work

presented in this thesis, we use the eye-in-hand camera-robot configuration.

2.2.1 Eye-in-hand configuration

In the eye-in-hand configuration (see Fig. 2.1), the camera is attached to the end-effector of

the robot, and visual servoing control schemes are designed such that the velocity vector v or q̇ is defined to control the required motion in the camera space or in the robot’s joint space, respectively. This velocity vector is sent to the robot controller in order to perform the

required motion. Thus camera calibration, robot calibration and the end-effector to camera

calibrations are first needed in order to have the set of camera intrinsic parameters, the robot

Jacobian eJq and the transformation matrix cMe between the camera frame Fc and the end-

effector frame Fe [Tsai 89]. Let us note that visual servoing is robust with respect to calibration errors.

2.2.2 Eye-to-hand configuration

In the eye-to-hand configuration, the camera is fixed in the workspace (see Fig. 2.2). Un-

like the eye-in-hand configuration where the image of the target changes based on both the


Figure 2.1 – Eye-in-hand camera-robot configuration

movement of the target and the movement of the end effector, in the eye-to-hand configuration the image of the target changes only with the movement of the target. The transformation matrix cMb between the camera frame Fc and the base frame of the robot Fb is constant and is computed only once. The relative position between the

robot end effector frame and the camera frame can be computed using transformations be-

tween different frames. More details concerning camera/robot configurations can be found

in [Hutchinson 96] [Kragic 02] [Chaumette 07].

Figure 2.2 – Eye-to-hand camera-robot configuration


2.3 Task functions in visual servoing

Tasks in visual servoing are defined to describe how the robot can interact with its environment using vision sensors. All the positions attained by the robot end effector's tool belong to a configuration space called the task space. In a three-dimensional workspace, when the tasks require performing translational motions, rotational motions, or translational motions combined with rotational motions, the corresponding task space is defined respectively by e ∈ R3 where R3 is the Cartesian space, by e ∈ SO3 where SO3 is the special orthogonal group, or by e ∈ SE3 where SE3 = R3 × SO3 is the special Euclidean group. In order to apply these motions with respect to a specified reference frame

(for example camera frame, end-effector frame, tool frame, manipulated object frame, or

robot frame), a transformation between different frames is required to transform the motion

from one frame to another [Dombre 07][Spong 06]. In order to perform a task, the object

must exhibit visual features which can be extracted from different points of view depending

on the object appearance.

2.3.1 Camera space control

Typically, when s∗ is the desired feature vector, s is the current feature vector, and cp(t) is the relative pose between the camera and the object at instant t, the task function e to be regulated is defined as:

e = s(cp(t)) − s∗     (2.1)

When the control variables are defined in the camera frame, the variation of the visual feature

s related to the relative movements between the camera and the scene is given by:

ṡ = (∂s/∂cp)(dcp/dt) + ∂s/∂t = Ls cv + ∂s/∂t     (2.2)

where ∂s/∂t is the variation of s due to the object's own motion, Ls ∈ Rk×n is the interaction matrix of the visual features, which represents the differential relationship between the features s and the camera frame [Hutchinson 96] [Dombre 07], cv = vc − vo is the relative instantaneous velocity between the camera frame Fc and the object frame Fo expressed in the camera frame Fc, vc is the instantaneous camera velocity and vo is the instantaneous object velocity. When the object is motionless, ∂s/∂t = 0 and we get:

ṡ = Ls v     (2.3)

where v = cv.


2.3.2 Joint space control

For an eye-in-hand system, the relation between ṡ and the velocity of the joint variables q̇ can be obtained as follows [Dombre 07]:

ṡ = (∂s/∂cp)(∂cp/∂ep)(∂ep/∂q) q̇ + ∂s/∂t = Ls cVe eJq q̇ + ∂s/∂t     (2.4)

where eJq is the Jacobian of the robot expressed in the end effector frame Fe and cVe is

the twist transformation matrix between the end effector frame Fe and the camera frame Fc

defined by [Dombre 07]:

cVe = [ cRe   [cte]× cRe ]
      [ 03    cRe        ]     (2.5)

where cRe and cte are the rotation matrix and the translation vector from the end effector frame Fe to the camera frame Fc. The matrix cVe remains constant for eye-in-hand configurations, while it has to be estimated at each iteration for eye-to-hand configurations.

If the object is not moving, ∂s/∂t = 0 and from (2.4) we get:

ṡ = Js q̇     (2.6)

where Js = Ls cVe eJq is the Jacobian of the visual features.
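As a concrete illustration of Eqs. (2.5) and (2.6), the sketch below builds the twist transformation matrix cVe from a rotation cRe and a translation cte, and composes the feature Jacobian Js = Ls cVe eJq. This is a minimal NumPy sketch with function names of our own choosing, not the implementation used in this thesis:

```python
import numpy as np

def skew(t):
    """Skew-symmetric matrix [t]x such that skew(t) @ u equals the cross product t x u."""
    return np.array([[0.0, -t[2], t[1]],
                     [t[2], 0.0, -t[0]],
                     [-t[1], t[0], 0.0]])

def twist_transform(cRe, cte):
    """6x6 twist transformation matrix cVe of Eq. (2.5)."""
    V = np.zeros((6, 6))
    V[:3, :3] = cRe
    V[:3, 3:] = skew(cte) @ cRe   # coupling block [cte]x cRe
    V[3:, 3:] = cRe
    return V

def feature_jacobian(Ls, cRe, cte, eJq):
    """Visual feature Jacobian Js = Ls cVe eJq of Eq. (2.6)."""
    return Ls @ twist_transform(cRe, cte) @ eJq
```

When the camera and end-effector frames coincide (cRe = I3, cte = 0), cVe reduces to the 6×6 identity and Js = Ls eJq.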

2.4 Selection of visual features

Visual features observed by the vision sensor define the inputs to the control scheme. A

feature can be any property that represents a part of the scene and can be extracted from the

image. The vision sensor can be a conventional camera (as usually used in visual servoing), a 2-D ultrasound camera [Mebarki 10], or an omni-directional camera, whose use in robotic applications is motivated by the visibility problems due to the restricted field of view of conventional cameras [OkamotoJr 02] [HadjAbdelkader 07]. Selection of good visual features is a crucial

aspect of visual servoing, as it is necessary for achieving optimal speed, accuracy, and reliability of the image measurements, and it consequently affects the performance and robustness of visual servoing [Feddema 91] [JanabiSharifi 97] [Chaumette 98]. Image measurements are either used directly in the control loop or used for relative pose estimation of a workpiece with

respect to a camera. The number of degrees of freedom (DOF) to be controlled by the em-

ployed control scheme determines the minimum number of independent features required.

Therefore, it is desirable to choose features which will be highly correlated with movements

of the camera. In the following, we classify the image features, which define the input signal to the visual servoing control scheme, into three classes: geometric features, photometric features, and velocity field features.


2.4.1 Geometric features

Geometric features are defined to describe the geometric content of a scene (2D

visual features) or to relate a frame attached to a robot system with a frame attached to

an object in a scene (3D visual features). Both 2D and 3D visual features can be used

simultaneously in a hybrid form as we will see in Section 2.5.3.1.

2D visual features

Visual features can be selected in the 2D image space as point coordinates, parameters representing straight lines or ellipses, regions of interest, or contours [Feddema 91] [Espiau 92]

[JanabiSharifi 97] [Corke 01] [Gans 03c]. These features are defined from image measurements. In the case of image points, Cartesian coordinates are generally used, but it is also possible to use their polar and cylindrical coordinates [Iwatsuki 05]. In all cases, the param-

eters defining the internal camera calibration are required.

Image moments can also be used in visual servoing [Chaumette 04] [Tahri 05]. The improvements they bring with respect to classical visual servoing are significant, since they allow a generic representation able to handle not only simple geometrical primitives but also complex objects with unknown shapes. It is shown in [Tahri 05] that moment

invariants can be used to design a decoupled 2D visual servoing scheme and to minimize the

nonlinearity of the interaction matrix related to the selected visual features.

3D visual features

Visual features can also be selected in 3D Cartesian space such as pose or coordinates of 3D

points [Martinet 96] [Wilson 96] [Deng 03]. Usually, object model and image measurements

are used to compute or to estimate the relative pose between object and camera frames in the

Cartesian space or to reconstruct the 3D coordinates. In [Cervera 03], the 3D coordinates of

the points of the object are used as the feature vector. A priori knowledge about the camera

calibration parameters is required. In PBVS, the orientation in the pose vector can be represented in the total, roll-pitch-yaw, or axis-angle formats [Wang 92], or in the quaternion formulation [Guoqiang 10].

Hybrid visual features

Several combinations between visual feature types can also be considered: for example, a

mixture composed of both kinds of 2D and 3D features is presented in [Malis 99] [Deng 02a]

[Cervera 03] [Marchand 05a], and polar and Cartesian parameterizations of image point coordinates as presented in [Corke 09].

Gaussian mixture features

In [Hafez 08a], an approach that requires neither tracking nor matching has been presented. The feature points extracted from the image are collectively modeled as a Gaussian mixture.


The error function is defined as the distance between the current and desired Gaussian mix-

ture models. As presented in [Hafez 08a], three degrees of freedom can be controlled using

this approach, while six degrees of freedom can be considered under the assumption that a depth distribution is available.

Redundant features

Utilizing redundant features can improve the performance of visual servoing control and

increase the positioning accuracy by improving the corresponding minimum singular value

of the extended image Jacobian [Hashimoto 96]. However, processing a large feature set can

sometimes be computationally infeasible, and therefore the focus must be on the selection of

an information-intensive and reliable set that possesses a minimum number of features.

Automatic feature selection

A methodology for automatic visual feature selection among a set of features is presented

in [Feddema 91] [Papanikolopoulos 93] [JanabiSharifi 97]. This approach selects the most reliable combination from a given feature set according to a weighted criterion function of the robustness, completeness, uniqueness, and computational time complexity of the image features.

Interaction matrix

The analytical form of the interaction matrix depends on the type of camera used (conventional camera, 2D or 3D ultrasound camera, or omni-directional camera) and on the projection model used [Hutchinson 96][TatsambonFomena 09]. The most common geometric model

for usual cameras is the perspective projection model [Horn 86], (see Fig. 2.3). In this

model, the center of projection is considered at the origin of the camera frame Fc and the

image plane is at Z = f , where f is the camera focal length. By considering a 3D point with

coordinates X = (X, Y, Z) in the camera frame and using a perspective projection model

[Forsyth 03], the point X is projected on a 2D point x of coordinates (x, y) in the image

plane such that: [xy

]=

[X/ZY/Z

]=

[(u− cu)/fα(v − cv)/f

](2.7)

where xp = (u, v) is the image point coordinates in pixel unit, c = (cu, cv) is the coordinates

of the principle point, f is the focal length of the camera lens and α is the ratio of pixel

dimension.
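As a small numerical check of the projection model (2.7), the following sketch converts a 3D point to pixel coordinates and back to normalized coordinates. The intrinsic parameter values below are illustrative defaults, not taken from an actual calibration:

```python
def project(X, Y, Z, f=800.0, alpha=1.0, cu=320.0, cv=240.0):
    """Perspective projection of a 3D point (camera frame) to pixel coordinates."""
    x, y = X / Z, Y / Z            # normalized image-plane coordinates
    return f * alpha * x + cu, f * y + cv

def pixel_to_normalized(u, v, f=800.0, alpha=1.0, cu=320.0, cv=240.0):
    """Inverse mapping of Eq. (2.7): x = (u - cu)/(f*alpha), y = (v - cv)/f."""
    return (u - cu) / (f * alpha), (v - cv) / f
```

Round-tripping a point through both functions recovers its normalized coordinates (X/Z, Y/Z).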

Interaction matrix of image feature points (2D): The interaction matrix Ls of an image feature point can be obtained as follows. By taking the time derivative of (2.7) we get:

[ẋ]   [Ẋ/Z − X Ż/Z²]
[ẏ] = [Ẏ/Z − Y Ż/Z²]     (2.8)


Figure 2.3 – Camera model

If the spatial velocity of the camera is given by v = (v, ω) where v = (vx, vy, vz) and

ω = (ωx, ωy, ωz) are the instantaneous linear and angular velocities of the origin of the

camera frame, both expressed in Fc, then the velocity of the 3D point X related to the camera velocity is defined using the fundamental kinematic equation Ẋ = −v − ω × X, such that:

[Ẋ]   [−vx − ωy Z + ωz Y]
[Ẏ] = [−vy − ωz X + ωx Z]     (2.9)
[Ż]   [−vz − ωx Y + ωy X]

By injecting the values of Ẋ, Ẏ and Ż from (2.9) in (2.8) and grouping the terms in v and ω, we get the classical result [Weiss 87]:

[ẋ]   [−1/Z    0     x/Z   xy       −(1 + x²)   y ] [v]
[ẏ] = [  0    −1/Z   y/Z   1 + y²   −xy        −x ] [ω]     (2.10)

which can be written as

ẋ = Lx v     (2.11)

where Lx is the interaction matrix related to x. If there is a set of k feature points x = (x1, . . . , xk), the interaction matrix Lx of the set x is obtained by stacking the matrices Lxi for all xi ∈ x, to get:

     [Lx1]
Lx = [ ⋮ ]     (2.12)
     [Lxk]
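Equations (2.10)–(2.12) translate directly into code. The sketch below is our own NumPy illustration (each point needs its depth Z, assumed known here); it builds the 2×6 interaction matrix of one point and stacks the matrices of a point set:

```python
import numpy as np

def interaction_matrix_point(x, y, Z):
    """2x6 interaction matrix L_x of an image point (Eq. 2.10),
    for the camera velocity v = (vx, vy, vz, wx, wy, wz)."""
    return np.array([
        [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y],
        [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x],
    ])

def interaction_matrix_set(points):
    """Stack the 2x6 matrices of k points into a (2k)x6 matrix (Eq. 2.12);
    `points` is an iterable of (x, y, Z) tuples."""
    return np.vstack([interaction_matrix_point(x, y, Z) for x, y, Z in points])
```

As a sanity check, a point located at the principal point (x = y = 0) is insensitive to a translation along the optical axis, as expected from Eq. (2.10).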

Interaction matrix of the θu parameterization (3D): If the rotation matrix c∗Rc ∈ SO(3) relating the two views of the camera is defined in the θu parameterization, the error function


is given by e = (c∗tc, θu) where c∗tc ∈ R3 is the translation vector. The camera velocity

related to the pose error is given by [Malis 99]:

ė = [ c∗Rc   0        ] v     (2.13)
    [ 0      Lω(u, θ) ]

where

Lω(u, θ) = I3 + (θ/2)[u]× + (1 − sinc(θ)/sinc²(θ/2)) [u]×²     (2.14)

where [u]× is the skew-symmetric matrix associated with u and I3 ∈ R3×3 is the identity matrix.
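Equation (2.14) can be transcribed directly into code. The sketch below is our own illustration, where u is assumed to be a unit axis and θ an angle in radians (the sinc ratio is handled explicitly near θ = 0):

```python
import numpy as np

def skew(u):
    """Skew-symmetric matrix [u]x associated with u."""
    return np.array([[0.0, -u[2], u[1]],
                     [u[2], 0.0, -u[0]],
                     [-u[1], u[0], 0.0]])

def sinc(x):
    """sin(x)/x, with the limit value 1 at x = 0."""
    return 1.0 if abs(x) < 1e-12 else np.sin(x) / x

def L_omega(u, theta):
    """Rotational interaction matrix L_w(u, theta) of Eq. (2.14)."""
    S = skew(u)
    return (np.eye(3) + 0.5 * theta * S
            + (1.0 - sinc(theta) / sinc(theta / 2.0) ** 2) * (S @ S))
```

Since [u]× u = 0, the axis u is an eigenvector of Lω with eigenvalue 1, and Lω reduces to I3 when θ = 0.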

2.4.2 Photometric features

Recently, several works have focused on the utilization of photometric features computed from pixel intensities. Contrary to geometric visual features such as points, straight lines, pose, or homography, photometric features do not rely on complex image processing such as feature extraction, matching, and tracking. Moreover, this approach

is not very sensitive to partial occlusions and to coarse approximations of the depths required

to compute the interaction matrix. This approach is employed by considering the whole image as a feature set from which the control input signal is defined. The input to the controller can belong to an eigenspace or a kernel of the image pixels, or can be defined as the set of all image pixels itself.

Eigenspace-based photometric features

In [Nayar 96], image intensity is not used directly but an eigenspace decomposition is per-

formed first to reduce the dimensionality of image data. Then the control is performed in the

eigenspace. This method requires off-line computation of this eigenspace and the projection

of the image on this subspace for each new frame. No analytical form is available for the interaction matrix; instead, it is learned during an off-line step. The learning process has to be

done for each new object and requires the acquisition of many images of the scene at various

camera positions.

Kernel-based photometric features

Another approach that considers the pixels intensity is based on the use of kernel methods.

It is presented in [Kallem 07]. This method, valid for 4 DOFs, leads to a control law that decouples the translations and the rotation around the optical axis.

Pixel-based photometric features

It is also possible to use the luminance of all the pixels in the image as visual feature set. In

that case, s = I(r) where I is a vector of the same size Mi × Ni as the image [Collewet 08]


[Collewet 09b]. In the same manner, instead of using the luminance I(r), color attributes

C = (R,G,B) can be used as visual features by selecting one of the three components R, G or B [Collewet 09a]. A combination of different components can also be performed.
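In this pixel-based approach, building the feature vector amounts to stacking all image intensities, the task error being simply I − I∗. A minimal sketch of our own, with illustrative image shapes:

```python
import numpy as np

def luminance_feature(image):
    """Stack all pixel intensities of an Mi x Ni image into the feature vector s = I(r)."""
    return np.asarray(image, dtype=float).ravel()

def photometric_error(image, desired_image):
    """Error vector e = I - I* used as the task error in photometric visual servoing."""
    return luminance_feature(image) - luminance_feature(desired_image)
```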

2.4.3 Velocity field features

In [Crétual 01], the visual features are specified by dynamic criteria that are homogeneous to a speed in the image, and the relation between the variations of the velocity field features and the camera velocity is modeled. The camera motions are controlled so that the current velocity field in the image becomes equal to the motion field of the desired configuration. This approach is used for positioning a camera parallel to a plane and for trajectory following. In [Kelly 04],

velocity field control is applied to the visual servoing of a robot manipulator under a fixed camera configuration. In that case, the task to be accomplished by the robot

is coded by means of a smooth desired velocity vector field defined in the image space by

v(x) : R2 → R2, where x ∈ R2 is an image point and v is assumed to be bounded. This

desired velocity field v defines a tangent vector that represents the desired image feature

velocity x at every point of the image space. The velocity field error e is defined as the

difference between the desired velocity field v(x) and the image feature velocity x. Thus,

instead of requiring the image feature to be at a specific location at each instant of time, as imposed in trajectory tracking control, in velocity field control the image feature is guided

towards the desired flow defined by the desired velocity field v(x), i.e., the image feature will match the flow lines of the desired velocity field. The velocity field is also used in

[Kelly 06] for controlling wheeled nonholonomic mobile robots with a fixed camera.
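The velocity field error e = v(x) − ẋ can be sketched as follows; the desired field below is our own illustrative example (a smooth field contracting every image point toward the image center), not the field used in [Kelly 04]:

```python
import numpy as np

def desired_field(x):
    """Illustrative smooth desired velocity field v(x): contraction toward the center."""
    return -0.5 * np.asarray(x, dtype=float)

def velocity_field_error(x, xdot):
    """Velocity field error e = v(x) - xdot between the desired field evaluated
    at the image point x and the measured image feature velocity xdot."""
    return desired_field(x) - np.asarray(xdot, dtype=float)
```

The error vanishes exactly when the measured feature velocity matches the desired flow at the current point, which is the regulation goal of velocity field control.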

2.5 Visual servoing control laws

As for the choice of the control law [Espiau 92] [Wilson 96] [Hutchinson 96] [Chaumette 98]

[Deng 02b] [Malis 04] [Chaumette 06] [Chaumette 07], it affects the behavior of the system.

As described in the previous section, the nature of the potential visual features is extremely

rich. In the control design phase, a number of properties should be considered such as

local and global stability, robust behavior with respect to measurement and modeling errors,

local or global exponential decrease, order of convergence, absence of local minima and

singularities, obtaining suitable robot trajectories, and finally the degree of decoupling between the visual information and the controlled degrees of freedom.

2.5.1 Classical approaches

2.5.1.1 Position-based visual servoing

In position-based visual servoing (PBVS), or 3D visual servoing, the task function is expressed in the Cartesian space. The translation and the rotation of the camera in Cartesian space are explicitly reconstructed using pose estimation [Hutchinson 96] [Wilson 96]

[Martinet 97] [Basri 99] [Taylor 00] [Deng 03]. PBVS is known to have global asymptotic


stability, referring to the ability of the controller to stabilize the pose of the camera from any initial condition, provided the 3D estimation is perfect. The analytical proof is straightforward when the pose estimation is perfect, but cannot be established otherwise. When an accurate 3D estimation is employed, the decoupling of rotation and translation is obtained. Errors in calibration propagate to errors in the 3D world, so ensuring the robustness of PBVS is required [Kyrki 04a].

No mechanism in PBVS ensures that the features remain visible within the camera field of view (FOV) when the translation is defined in the desired fixed end-effector frame [Wilson 96]. In contrast, if the translation is expressed in the camera frame, the trajectory in the image plane is improved under large camera rotations, and the features can be kept in the image plane for small rotations [Deng 03]. Several control schemes have been proposed to overcome the latter problem (for example, the 2 1/2 D visual servoing presented in [Malis 02], or the nonlinear approach using new 3D translation features in the control loop presented in [Martinet 99] [Thuilot 02]). In PBVS, the task function to be regulated is usually defined as the error between the current and desired poses. The pose can be selected as the pose of the camera or of the end effector with respect to the object, or to any other reference frame in the world space. When the pose between the camera and the object is considered, the task function is given by e = c∗Pc. After executing the task, the camera reaches the desired position and the task function is e = c∗Pc∗ = 0.
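As an illustration, regulating a PBVS task function built from the translation error and the θu axis-angle representation of the rotation error can be sketched in a few lines of numpy. This is a minimal sketch under simplifying assumptions: the function names are hypothetical, and the simple decoupled law v = −λe stands in for the exact schemes of the cited papers.

```python
import numpy as np

def rotation_to_thetau(R):
    """Convert a rotation matrix into its theta*u axis-angle vector."""
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if np.isclose(theta, 0.0):
        return np.zeros(3)
    u = np.array([R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return theta * u / (2.0 * np.sin(theta))

def pbvs_velocity(t_err, R_err, lam=0.5):
    """Decoupled PBVS law sketch: v = -lambda * e with e = (t_err, theta*u)."""
    return -lam * np.concatenate([t_err, rotation_to_thetau(R_err)])
```

At convergence, the pose error vanishes and so does the commanded camera velocity.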

Pose estimation

One of the central problems in position-based visual servoing is the determination of the relative position and orientation of the observed object, that is, its pose with respect to the camera. For real-time pose estimation of the object, image measurements are combined with a known CAD description of the object. The pose can be estimated using image points [Haralick 89] [Dementhon 95] [Liu 99] [Ansar 03] [Chesi 09a], using point and line correspondences [Dornaika 99], using point-to-region correspondences [Mcinroy 08], using curves [Safaee-Rad 92], or using other geometrical features as in virtual visual servoing [Marchand 02]. To obtain a more accurate pose estimation, filters are usually applied to estimate the translational and rotational parameters of the pose [Myers 76] [Wang 92] [Ficocelli 01] [Lippiello 04] [Shademan 05] and, recently, [JanabiSharifi 10b].

Partial pose estimation

Camera translation (up to a scale factor) and camera rotation can be estimated through the essential matrix [LonguetHiggins 81] [Faugeras 93] [Hartley 97]. However, when the target is planar, or when the motion performed by the camera between the desired and the current pose is a pure rotation, the essential matrix cannot be estimated and it is more appealing to estimate the partial pose using a homography matrix [Malis 00]. Indeed, if all points belong to a plane π, the partial pose can be extracted from the homography matrix H that relates the homogeneous coordinates of points in the desired and current image planes using the relation:

H = c∗Rc + (c∗tc n∗^T) / d∗    (2.15)

where c∗Rc is the rotation matrix and c∗tc is the translation vector between the two camera frames, n∗ is the unit normal vector of the plane π expressed in the desired camera frame, and d∗ is the distance from the camera origin at the desired position to the plane π.
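To make relation (2.15) concrete, the following numpy sketch builds a plane-induced homography and checks that a 3D point lying on the plane is transferred exactly as by the rigid motion. The frame convention here, with the plane parameters n and d expressed in the same frame as the transferred point, is an illustrative assumption; the function name is hypothetical.

```python
import numpy as np

def homography_from_pose(R, t, n, d):
    """Plane-induced homography H = R + t n^T / d, cf. Eq. (2.15)."""
    return R + np.outer(t, n) / d

# Example: a rotation about the optical axis plus a small translation,
# with the plane z = 2 (n = (0, 0, 1), d = 2) expressed in the first frame.
R = np.array([[np.cos(0.2), -np.sin(0.2), 0.0],
              [np.sin(0.2),  np.cos(0.2), 0.0],
              [0.0, 0.0, 1.0]])
t = np.array([0.1, -0.05, 0.02])
n, d = np.array([0.0, 0.0, 1.0]), 2.0
P = np.array([0.3, -0.4, 2.0])  # a point on the plane: n.P = d
H = homography_from_pose(R, t, n, d)
```

For any point P on the plane, H P equals R P + t, since (n.P)/d = 1.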

2.5.1.2 Image-based visual servoing

IBVS uses features extracted directly from the image as inputs of the control law, without any supplementary estimation step. Unlike PBVS, IBVS does not need to estimate the pose at each iteration, which helps to provide a positioning control that is robust to calibration and modeling errors. For example, lens distortions can be neglected when the set of desired image feature locations is obtained via an off-line teach-by-showing procedure, by moving the robot end-effector to a desired location and storing the camera measurements at this location.

IBVS has several advantages. Firstly, when the set of features is composed of the Cartesian coordinates of image points, these image points are controlled in the image plane to move approximately along straight lines. Therefore, the target can be constrained to remain visible during the execution of a task if both the initial and desired image feature locations are within the camera field of view [Chaumette 98]. Another advantage of IBVS is that the positioning accuracy is insensitive to camera and target modeling errors [Espiau 93] [Hutchinson 96] [Hager 97] [Mezouar 02]. It is essentially a model-free method that does not explicitly require the target model in practical applications when the depths of the feature points are available. IBVS systems are usually preferred to position-based systems since they are usually less sensitive to image noise. However, some knowledge of the transformation between the sensor and the robot frame [Tsai 89] is still required.

IBVS is however subject to some shortcomings. Firstly, the trajectory of the end effector is hard to predict and the robot may reach its joint limits. Secondly, the end-effector translational and rotational motions are not directly controlled, and the coupling that usually exists between these motions makes it difficult to plan a pure rotation or a pure translation (for example, because of the camera retreat problem [Chaumette 98]). Also, since the system is usually highly coupled, the analytical domain of system stability is impossible to determine in the presence of camera modeling errors. Furthermore, usual IBVS is only locally asymptotically stable and may fail when a large displacement has to be realized [Chaumette 98] [Cervera 99], which necessitates a path-planning step to split a large displacement into smaller local movements [Mezouar 02]. Finally, potential failures occur when IBVS is subject to image singularities or local minima [Chaumette 98].


Depth estimation

In IBVS, some information about the depth of the object in the camera frame is usually necessary for the computations required to obtain the interaction matrix. Since the stability region with respect to errors in depth estimation is not very large [Malis 10], the depth must be estimated accurately. For static objects, this estimate can be obtained from the measured current values of the feature points x and y, their image motion ẋ and ẏ, and the camera velocity [Matthies 89] [Marcé 87]. The depth parameters of planar and volumetric parametric primitives such as points, lines, cylinders, spheres, etc. can also be obtained [Chaumette 96]. Another depth estimation method for static points, without the explicit need for image motion estimation, can be found in [DeLuca 08b]. In [Xie 09], a laser pointer is used and the depth estimation is achieved through a triangulation method.

In response to the difficulties appearing in IBVS and PBVS, several methods that do not rely solely on the interaction matrix have been devised to improve the behavior of visual servoing controls.

2.5.2 Enhanced visual servoing approaches

In order to overcome the drawbacks of the classical visual servoing control schemes, different modeling approaches have been considered, such as sliding approaches [Zanne 00], partitioning approaches [Corke 01] [Kyrki 04b] [Pages 06], trajectory planning approaches [Mezouar 02], varying-feature-set approaches as presented in [Comport 06] [GarciaAracil 05] [Mansard 09b], and hybrid and switching approaches, which are discussed in a dedicated section (Section 2.5.3).

Sliding approaches

Sliding mode control theory has been used to design a controller that is robust to bounded parametric estimation errors (uncertainties) [Zanne 00]. Two sources of error are considered: the control input error due to uncertainties in the positioning of the camera relative to the robot end-effector, and the estimation error of the pose reconstruction algorithm used in the control loop. The formulation of the sliding mode controller is based on the quaternion representation of rotations. The controller is designed such that the estimated state precisely tracks the desired trajectory in the presence of uncertainties.

Partitioning approaches

A partitioned approach has been proposed for decoupling the rotation and the translation around and along the z-axis from the rotations and translations about the x-axis and y-axis [Corke 01]. This decoupling is used to solve the problem corresponding to a pure rotation by 180 degrees around the camera optical axis, which leads to a singularity [Chaumette 98]. In [Corke 01], a controller is designed for the motion along and around the optical axis which can avoid the backward motion of the camera:

ṡ = Jxy rxy + Jz rz ⟺ rxy = Jxy^+ (ṡ − Jz rz)    (2.16)

where rxy = (vx, vy, ωx, ωy) and rz = (vz, ωz). By selecting suitable vz and ωz, the partitioned control scheme eliminates the retreat problem. This partitioning approach is also used to keep the feature points in the camera field of view, by establishing a repulsive potential function at the boundary of the image that forces the camera to move in the negative direction of its optical z-axis so as to keep the feature points within the image plane.
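The partitioned computation (2.16) can be sketched in numpy as follows; the randomly generated matrix stands in for the true interaction matrix of four image points, which is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(0)
J = rng.standard_normal((8, 6))   # stand-in interaction matrix (4 image points)
Jxy = J[:, [0, 1, 3, 4]]          # columns of (vx, vy, wx, wy)
Jz = J[:, [2, 5]]                 # columns of (vz, wz)

s_dot = rng.standard_normal(8)    # desired image-feature velocity
r_z = np.array([0.05, 0.1])       # (vz, wz) chosen separately, e.g. to avoid retreat
r_xy = np.linalg.pinv(Jxy) @ (s_dot - Jz @ r_z)   # Eq. (2.16)
```

The pseudo-inverse yields the least-squares solution for the remaining four degrees of freedom once the z-axis motion has been fixed.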

In [Pages 06], a structured light sensor is designed for image-based visual servoing such that the projected pattern on the object is invariant to depth and the corresponding image is symmetric. This approach, similarly to a position-based approach, decouples the rotational part from the translational one. The object plane does not need to be reconstructed using triangulation or a non-linear minimization algorithm. The method requires a coarse calibration of the camera and of the laser locations with respect to the robot end effector. To improve the robustness with respect to misalignments between the camera and the lasers, an image transformation is defined.

Trajectory planning

In [Mezouar 02], rather than using direct point-to-point image-based visual servoing when a large displacement has to be realized, a path planning step based on the potential field approach [Khatib 85] is coupled with a purely image-based control to extend the local robustness and stability of image-based control. The required trajectory of the camera is planned with respect to the stationary desired camera end-effector frame, which yields the shortest straight-line path in the Cartesian space in the absence of constraints. This method takes into account the joint limits problem and the need to keep the features in the field of view of the camera.

In [Chesi 09b], a new framework is presented for handling avoidance constraints in visual servoing using homogeneous forms and linear matrix inequalities. This method makes it possible to consider different kinds of constraints by imposing them as positivity conditions on homogeneous forms in the proposed general parameterization of the trajectory planned from the initial to the desired position. Convex optimization is used in this framework to determine the solution and maximize the performance of the planned path under these constraints.

Origin-shift in cylindrical coordinates

As already said in Section 2.4.1, a formulation of visual servoing based on the cylindrical coordinate system has been proposed in [Iwatsuki 05] to avoid the camera backward motion when the motion from the initial to the desired pose is a pure rotation of 180 degrees around the optical axis. In that paper, a method for deciding the shifted position of the origin by estimating the camera rotational motion is also proposed. Simulations and experimental results have shown the effectiveness of the proposed approach by comparing it with conventional approaches.

Varying-feature-set control schemes

In [Comport 06], a weighted error function ew is used in the control law. It is defined as ew = We, where e = s − sd is the usual error between the current and desired visual features and W = Diag(w1, w2, ..., wk) is a diagonal weighting matrix. The corresponding control scheme is:

v = −λ (WJ)^+ W e    (2.17)

where (WJ)^+ is the Moore-Penrose pseudo-inverse of WJ. Each weight wi ∈ [0, 1] changes smoothly to a value that indicates the level of confidence in the corresponding feature (wi = 1 referring to the highest confidence level). This removes outliers from the feature set s. Errors in feature extraction, tracking and matching are thus rejected at the low-level control, which improves the behavior of the positioning task. In [GarciaAracil 05], to ensure the continuity of the control law when some features leave the camera field of view, i.e., when the number of features changes, a diagonal weighting matrix is also introduced into the definition of the task function.
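The weighted law (2.17) can be sketched as follows; the Jacobian, error, and weights are random stand-ins and illustrative assumptions, not values from any cited experiment.

```python
import numpy as np

rng = np.random.default_rng(1)
J = rng.standard_normal((8, 6))     # stand-in feature Jacobian
e = rng.standard_normal(8)          # error s - s_d
w = np.array([1.0, 1.0, 0.2, 1.0, 1.0, 0.0, 1.0, 1.0])  # per-feature confidence
W = np.diag(w)
lam = 0.5
v = -lam * np.linalg.pinv(W @ J) @ (W @ e)   # Eq. (2.17)
```

A feature whose weight is zero is entirely removed from the least-squares problem, so its measured error has no influence on the computed velocity.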

In [Mansard 09b], a theoretical study of the weighting matrix combined with a classical pseudo-inverse shows that the continuity of the control scheme is not ensured when the activated feature set is non-redundant, except when the activated features are fully decoupled. A control law based on a new inverse operator that keeps the control law continuous regardless of changes in the rank of the Jacobian is presented.

2.5.3 Hybrid and switching strategies in visual servoing

To combine the advantages of both image-based (2D) and position-based (3D) visual servoing, numerous approaches have been proposed to model control schemes based on hybrid and switching strategies. Among the hybrid approaches, in which both 2D and 3D features are used simultaneously in the same control scheme, we find 2 1/2 D visual servoing [Malis 99] [Chaumette 00] [Deguchi 98]. Switching approaches between PBVS and IBVS are used to exploit the strength of each one when the other is in a weak configuration [Gans 03a] [Gans 03b] [Gans 07]. Hybrid and switching approaches are used together by integrating PBVS and IBVS in a switching hybrid scheme [Deng 05] [Hafez 06] [Hafez 07b] [Hafez 08b]. A hybrid approach in feature parameterization is presented in [Corke 09]. Finally, a planning and switching approach using a laser spot is presented in [Xie 09].

2.5.3.1 Hybrid approaches

The 2 1/2 D visual servoing approach developed in [Malis 99] [Chaumette 00] combines visual features expressed in the 2D image space and in the 3D Cartesian space. This hybrid controller tries to take advantage of both approaches by decoupling rotation and translation and by keeping the features in the field of view. The task function is expressed both in the Cartesian space and in the image space. These approaches also make it possible to demonstrate the global asymptotic stability and the robustness of the control law when a minimal number of visual features remain in the camera field of view [Malis 02]. The camera displacement between two views is estimated using the correspondence between several points in the two images, extracting the relative rotation between the two views and the translation up to a scale factor [Malis 00] [Benhimane 07]. This decoupling between the orientation and the translation makes it possible to design two task functions, ev for the translation and eω for the rotation, which improves the system behavior in the 3D space. A very interesting aspect of 2 1/2 D visual servoing is that, thanks to projective reconstruction, the knowledge of the 3D structure of the considered targets is no longer necessary. However, the use of projective reconstruction implies that the corresponding control laws are more sensitive to image noise than classical image-based visual servoing.

In [Kyrki 04b], a solution similar to [Chaumette 00] is presented: a hybrid control scheme between 3D and 2D visual servoing in which the translation is controlled using the estimated 3D translation between the current and desired poses, while the rotation is controlled using a single feature point that is driven towards its desired location.

Another hybrid system between IBVS and PBVS is presented in [Hafez 07b]. Instead of partially combining 2D and 3D visual information, a hybrid objective function to be optimized is defined by concatenating both the 2D and 3D errors. The solution thus minimizes the errors in both the image space and the pose space simultaneously. This is obtained by defining the cost function:

E(s(p)) = (λ2d/2) e2d(p)^T e2d(p) + (λ3d/2) e3d(p)^T e3d(p)    (2.18)

where e2d(p) = s2d(p) − s2d(p∗) is the image error vector, e3d(p) = s3d(p) − s3d(p∗) is the pose error vector, and λ2d and λ3d are positive scalar factors defining the integration weights of the 2D and 3D spaces. Using the classical approach, the control scheme is given by:

v = −λ [L2d(p); L3d(p)]^+ diag(Γ2d, Γ3d) [e2d(p); e3d(p)]    (2.19)

where L2d and L3d are the interaction matrices related to e2d and e3d respectively, and Γ∗ = diag(λ∗), ∗ = 2d, 3d. This cost function has the same global minima as the 2D and 3D approaches. This hybrid control scheme improves the performance in the image and Cartesian spaces by decreasing the probability of losing image features, and enhances the camera trajectory in the Cartesian space.
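A numpy sketch of the stacked hybrid law (2.19), with random stand-ins for the interaction matrices and error vectors (illustrative assumptions; Γ2d and Γ3d are taken as scalar gain matrices):

```python
import numpy as np

rng = np.random.default_rng(2)
L2d = rng.standard_normal((8, 6))    # stand-in 2D interaction matrix (4 points)
L3d = rng.standard_normal((6, 6))    # stand-in 3D interaction matrix
e2d = rng.standard_normal(8)
e3d = rng.standard_normal(6)
lam, lam2d, lam3d = 0.5, 1.0, 0.4

L_stack = np.vstack([L2d, L3d])                      # [L2d; L3d]
weighted_e = np.concatenate([lam2d * e2d, lam3d * e3d])
v = -lam * np.linalg.pinv(L_stack) @ weighted_e      # Eq. (2.19)
```

The pseudo-inverse of the stacked matrix yields the velocity that best reduces the 2D and 3D errors jointly, in the least-squares sense.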

In [Hafez 07a], a boosted visual control scheme that considers the 2D and 3D controls as weak controllers is presented. A linear combination of these two controls produces a strong one with satisfactory performance. The weights used in the combination are computed based on the error function associated with each control.


2.5.3.2 Switching approaches

In [Gans 03a, Gans 03b, Gans 07], switching is performed in order to avoid the problems of losing feature points with PBVS and of reaching robot joint limits with IBVS. In this switching system, a threshold region is defined as a circle in the image space, so that switching to IBVS starts if the features near the image boundaries. Another threshold region is defined in the part of the robot configuration space in which the object is visible, so that switching to PBVS starts when the robot nears its joint limits. Two metric functions are defined to decide when switching should start. Finally, for the position-based and image-based visual servoing schemes given by v3d = −λ3d A3d e3d and v2d = −λ2d A2d e2d (where λ3d and λ2d are positive gains, e3d and e2d are control input signals, and A3d and A2d are the control matrices), a common state space is defined by concatenating the state vectors such that:

[ė3d; ė2d] = λσ Aσ [e3d; e2d],  σ ∈ {3d, 2d}    (2.20)

where

A3d = −[ I6, 0 ; L2d L3d^-1, 0 ],  A2d = −[ 0, L3d L2d^+ ; 0, I6 ]    (2.21)

The hybrid switching system presented in [Gans 03a] [Gans 03b] is defined by a convex combination:

Ac = α A3d + (1 − α) A2d,  α ∈ [0, 1]    (2.22)

where α is a time-based switching parameter, so that the system control is partitioned in time, while in [Gans 07] a state-based switching parameter is used. The stability analysis presented in [Gans 07] shows that, under arbitrary switching within a sufficiently small neighborhood of the origin, a hybrid switching visual servo system is asymptotically stable in the sense of Lyapunov. Moreover, local stability under state-based switching is proved using a technique for establishing the stability of switched control systems [Branicky 97] [Wicks 94].

The switching method proposed in [Gans 07] is used in [Toibero 09] for mobile robot navigation in corridors, to enhance the performance of the navigation system. The switching helps to deal with adverse initial conditions that could be difficult to handle when only PBVS or IBVS is considered. It also helps to handle sudden changes in the position of the features due to disturbances. The stability of the controller is proved by considering multiple Lyapunov functions.

In [Deng 05], a system was designed to resolve the problems of local minima and singularities. This is achieved by switching from IBVS to PBVS as soon as the end-effector motion nears the neighborhood of an image singularity or image local minimum configuration. Image singularities are evaluated by comparing the image motion perceptibility measure with image singularity threshold values [Sharma 97]. As for the local minima, to avoid an exhaustive search for image local minima, they are evaluated only on the planned image trajectories.


In [Xie 09], a switching approach that decouples the camera rotational and translational motions is proposed to apply IBVS to a robotic assembly system composed of a 6-DOF robot with a camera and a laser pointer mounted on the robot end effector. The laser pointer is used in order to decouple the camera rotation and translation. The required motion is considered as a composition of two steps: a rotation step and a translation step. In the first step, an imaginary image feature defined by the laser spot is used to control the rotational motion of the end effector to reach the configuration where the laser spot is projected on the object. In the second step, the interaction matrix constructed from the features of the object and of the laser spot is used to control the end-effector translational motion with respect to the object. This control scheme was used in an eye-in-hand robotic manufacturing system to detect, grasp, and assemble a planar object on a main body.

2.5.3.3 Hybrid switching approaches

An integration between PBVS and IBVS proposed in [Hafez 06] is performed by computing the weighted sum of the velocities obtained from each controller individually. The integrated controller given by this framework is defined by:

v = ω v2d + (1 − ω) v3d    (2.23)

where v2d and v3d are the velocity vectors of the IBVS and PBVS controllers, given by v2d = −λ2d L2d^+ e2d and v3d = −λ3d L3d^-1 e3d, respectively. The weight ω is defined as a function of the constraints in the image and Cartesian spaces and of the energy of the task function. This function is designed such that its value decreases near image local minima and singularities, to switch to PBVS, while its value increases when one of the image features nears the image boundary, to switch to IBVS. A probabilistic general formula can represent this framework by defining v as:

v = Σ_{αi} v(αi) p(αi | x)    (2.24)

where p(αi | x) is the discrete probability of the visual servoing control law parameters αi, i = 1, ..., k, conditioned on the image measurement x, with Σ_{αi} p(αi | x) = 1.

In [Hafez 08b], the same methodology as in [Hafez 06] is used for avoiding joint limits and keeping the features in the camera field of view. The hybrid approach is considered as a boosting scheme that produces a strong algorithm from the two weak algorithms represented by IBVS and PBVS. In that case, the weight ω introduced in (2.23) is given by ω = α2d / (α2d + α3d), where the thresholds on the joint limits and the threshold representing the nearest image border are used to compute the weights α2d and α3d that give the current importance of each algorithm.
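The blending law (2.23) with the boosting weight ω = α2d/(α2d + α3d) can be sketched in a few lines (the function name is hypothetical):

```python
import numpy as np

def blended_velocity(v2d, v3d, alpha2d, alpha3d):
    """Hybrid law (2.23): v = w*v2d + (1-w)*v3d, w = alpha2d/(alpha2d+alpha3d)."""
    w = alpha2d / (alpha2d + alpha3d)
    return w * np.asarray(v2d, dtype=float) + (1.0 - w) * np.asarray(v3d, dtype=float)
```

The weight shifts the commanded velocity continuously between the two controllers, so no discontinuity appears when the dominant controller changes.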


2.5.3.4 Hybrid and/or switching strategies involving other approaches

The switching approaches previously mentioned combine two different visual servoing approaches. We now address cases where switching is performed between visual servoing and other approaches: for example, switching between visual servoing and an artificial potential [Hashimoto 00a], a hybrid of visual and kinematic measurements [Taylor 04], and switching between PBVS and selected rotational or translational controls as in [Chesi 04a].

In order to avoid local minima problems in IBVS, a switching method based on defining an artificial potential va, using interpolation between the initial and the desired images, is proposed in [Hashimoto 00a]. When the visual servoing system nears a neighborhood of a local minimum, switching from the visual servoing velocity v to va is performed until a local maximum is reached; the system then switches back to the original v.

A switching approach between PBVS and selected rotational or translational motions is presented in [Chesi 04a] for keeping the features in the field of view. It consists of performing visual servoing while all points lie inside the image. When at least one point lies on the image boundary, the system switches to one of the other selected controls in order to push that point back inside the image and to guarantee the final convergence. A purely rotational or purely translational control is applied only when the corresponding motion moves the points lying on the image boundary back inside the image. Otherwise, a backward translational motion along the optical axis is applied to send the camera away from the observed object.

To increase the accuracy and robustness of controlling a humanoid robot equipped with a grasping system and working in an unstructured domestic environment, a hybrid approach that combines kinematic and visual measurements is presented in [Taylor 04]. In this approach, using kinematic measurements increases the robustness to visual distractions and occlusions, while visual measurements increase the positioning accuracy. The authors introduced an online estimation of the eye-in-hand transformation to reduce the influence of calibration errors on the camera and kinematic models. An iterated extended Kalman filter (IEKF) was used to fuse the visual and kinematic measurements obtained from the camera-robot system, to overcome the problem of initial pose errors in long-term operations and to decrease the influence of the calibration errors.

A hybrid controller for robust mobile manipulation is developed in [Wang 10] by integrating a classical IBVS controller and a machine-learning approach, reinforcement learning (a Q-learning controller), through a rule-based arbitrator that defines the control behavior policy. This integration autonomously improves its performance while ensuring the visibility of the visual features. This is thought to be the first paper that integrates Q-learning with visual servoing to achieve robust operation. The experimental validation of this approach shows that the hybrid controller possesses self-learning and fast-response capabilities, and provides a balanced performance with respect to robustness and accuracy.


2.5.4 Visual servoing as a minimization problem

Visual servoing can be viewed as a minimization problem that finds a camera displacement vector Δp minimizing a cost function E(s(p)) of the error vector s(p), where p ∈ R3 × SO(3) is the camera pose [Malis 04] [Hafez 07b]. By considering the problem as a nonlinear least squares minimization with E(s(p)) = (1/2) Δs^T Δs, where Δs = s(pi) − s(pd) with pi and pd the initial and the desired poses, a Taylor expansion performed on the gradient of E(s(p)), that is on Δs(p), can be used to obtain first and second order minimization schemes.

2.5.4.1 Steepest descent minimization method (SDM)

Using the first order Taylor expansion of Δs(p) evaluated at pd around pi, we get:

s(pd) = s(pi) + ∂s(p)/∂p |_{p=pi} Δp    (2.25)

where ∂s(p)/∂p = Ls(p) and Δp = pd − pi. In the gradient descent minimization, the movement is in the direction opposite to the gradient, and the change in the pose is given by [Malis 04]:

Δp = −λ Ls(p)^T Δs    (2.26)

where λ is a positive gain and Ls = Js P, with ṗ = P v. The method presented above is called the Jacobian transpose method (JTM) and has been used in [Hashimoto 93b, Kelly 00, Hafez 07b]. This method exhibits a linear convergence rate.
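A numpy sketch of the Jacobian-transpose step (2.26), with a random stand-in for the interaction matrix (an illustrative assumption); since the step is proportional to the negative gradient of E = (1/2) Δs^T Δs, it is always a descent direction.

```python
import numpy as np

rng = np.random.default_rng(3)
L = rng.standard_normal((8, 6))   # stand-in interaction matrix at the current pose
ds = rng.standard_normal(8)       # feature error Delta s
lam = 0.5

grad = L.T @ ds                   # gradient of E = 0.5 * ds^T ds w.r.t. the pose
dp = -lam * grad                  # Eq. (2.26): Jacobian-transpose (JTM) step
```

The inner product dp.grad equals −λ‖grad‖², which is never positive, hence the guaranteed (if slow) decrease of the cost.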

2.5.4.2 Newton minimization method (NM)

In order to obtain a quadratic convergence rate, the second order Taylor expansion is used for evaluating the cost function at the desired pose pd:

s(pd) = s(pi) + ∂s(p)/∂p |_{p=pi} Δp + (1/2) Δp^T ∂²s(p)/∂p² |_{p=pi} Δp    (2.27)

where the second order derivative is a function of the Hessian matrix H(s(p)), given by:

∂²s(p)/∂p² = Ls(p)^T Ls(p) + Σ_{j=0}^{n} Hs^j(p) Δs_j    (2.28)

By injecting (2.28) into (2.27) and solving for p, we get:

v = −λ ( Ls(pi)^T Ls(pi) + Σ_{j=0}^{n} Hs^j(pi) Δs_j )^-1 Ls(pi)^T Δs    (2.29)

Even though the Newton minimization method has a higher convergence rate than the steepest descent minimization method, convergence problems can appear when the Hessian matrix is not positive definite.


2.5.4.3 Approximation to Newton minimization method (NMM)

The convergence problem mentioned above can be avoided by replacing the Hessian matrix with a positive definite approximation. This replacement leads to approximations of the Newton minimization method such as the Gauss-Newton minimization method, the Levenberg-Marquardt method and the quasi-Newton method.

Gauss-Newton minimization method (GNM)

The first is the Gauss-Newton minimization method (GNM), in which the second order derivative ∂²s(p)/∂p² is set to Ls(p)^T Ls(p), neglecting the term Σ_{j=0}^{n} Hs^j(p) Δs_j in (2.29). The control law obtained is:

v = −λ (Ls(p)^T Ls(p))^-1 Ls(p)^T Δs    (2.30)

which is the classical control scheme using the pseudo-inverse [Espiau 92].
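The equivalence between (2.30) and the classical pseudo-inverse law can be checked numerically; the full-column-rank stand-in matrix below is an illustrative assumption.

```python
import numpy as np

rng = np.random.default_rng(4)
L = rng.standard_normal((8, 6))   # stand-in interaction matrix, full column rank
ds = rng.standard_normal(8)
lam = 0.5

v_gn = -lam * np.linalg.inv(L.T @ L) @ L.T @ ds   # Eq. (2.30), Gauss-Newton
v_pinv = -lam * np.linalg.pinv(L) @ ds            # classical pseudo-inverse law
```

For a full-column-rank L, the pseudo-inverse is exactly (L^T L)^-1 L^T, so both velocities coincide.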

Levenberg-Marquardt method (LMM)

The second method is the Levenberg-Marquardt method (LMM), or damped least squares (DLS) method, first proposed in [Wampler 86]. It is obtained from the Newton method by replacing the Hessian part with γD:

v = −λ (Ls(p)^T Ls(p) + γD)^-1 Ls(p)^T Δs    (2.31)

where D is any selected diagonal positive matrix and γ is a parameter that determines the convergence rate. When γ is large, the Levenberg-Marquardt method shows a convergence rate similar to that of the steepest descent minimization method, while when γ is small, the convergence rate approaches that of the Gauss-Newton minimization method. Finally, when γ = 0, the LMM is equivalent to the GNM. This method is often used to solve the singularity problem in robotics [Deo 92] [Chiaverini 97].
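The two limiting behaviors of (2.31) can be checked numerically (random stand-ins, illustrative assumptions): γ = 0 recovers the Gauss-Newton/pseudo-inverse step, while for large γ the step tends towards (1/γ) times the steepest descent step.

```python
import numpy as np

rng = np.random.default_rng(5)
L = rng.standard_normal((8, 6))   # stand-in interaction matrix
ds = rng.standard_normal(8)
lam, D = 0.5, np.eye(6)

def v_lm(gamma):
    """Damped least squares step, Eq. (2.31)."""
    return -lam * np.linalg.inv(L.T @ L + gamma * D) @ (L.T @ ds)
```

The damping term γD also keeps the inverted matrix well-conditioned near singularities of L, which is why the method is used for singularity handling.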

Quasi-Newton Method (QNM)

In this method, an approximation of the Hessian matrix is obtained by defining a sequence of symmetric positive definite matrices Ak that converges to the Hessian matrix [Shanno 70]:

v = −λ Ak^-1 Ls(p)^T Δs    (2.32)

where the initial value A0 is set to I or to Ls(p)^T Ls(p), so as to start by coinciding with the steepest descent method or the Gauss-Newton method, respectively. This method has been applied to visual servoing in [Piepmeier 99]. Although using the transpose of Ls(p) ensures the continuity of the control scheme even when the rank of Ls(p) changes (when some features are lost, for instance), it can lead to a non-optimal control. That is why it is usually preferable to use the pseudo-inverse of Ls(p) [Espiau 92].


Efficient second order method (ESM)

Classical image-based visual servoing methods can suffer from a low convergence rate, and their performance can be studied in terms of the convergence rate of the underlying minimization problems. However, convergence problems can appear when computing the second order derivatives, which is subject to ill-conditioning. In [Malis 04], two control schemes are proposed by formulating the positioning problem as a minimization problem. The two proposed schemes give a higher convergence rate while requiring only the computation of first order derivatives. The first proposed second order approximation is the mean of the pseudo-inverses of Ls(pi) and Ls(pd). It is given by:

v = −(λ/2) (L⁺s(pi) + L⁺s(pd)) Δs   (2.33)

The second scheme is obtained by computing the pseudo-inverse of the mean of Ls(pi) and Ls(pd). It is given by:

v = −2λ (Ls(pi) + Ls(pd))⁺ Δs   (2.34)

Using these two control schemes, better results can be obtained even for large displacements that lead to local minima and to the camera retreat problem with the basic approaches. However, we exhibit in the next chapter a singularity of control scheme (2.34). Finally, the 3D camera trajectory is also improved.
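The two schemes (2.33) and (2.34) are not identical in general, but both reduce to the classical pseudo-inverse law when Ls(pi) = Ls(pd). A minimal numpy sketch, with random matrices standing in for the two interaction matrices:

```python
import numpy as np

rng = np.random.default_rng(2)
L_i = rng.standard_normal((8, 6))   # stand-in for L_s(p_i)
L_d = rng.standard_normal((8, 6))   # stand-in for L_s(p_d)
delta_s = rng.standard_normal(8)
lam = 0.5

# (2.33): mean of the pseudo-inverses
v1 = -(lam / 2.0) * (np.linalg.pinv(L_i) + np.linalg.pinv(L_d)) @ delta_s

# (2.34): pseudo-inverse of the mean (the factor 2 compensates the sum)
v2 = -2.0 * lam * np.linalg.pinv(L_i + L_d) @ delta_s

# The two schemes differ in general...
assert not np.allclose(v1, v2)

# ...but both reduce to the classical law -lambda L^+ Delta s when the two
# matrices are equal, since pinv(2L) = pinv(L)/2.
v_ref = -lam * np.linalg.pinv(L_i) @ delta_s
v1_eq = -(lam / 2.0) * (np.linalg.pinv(L_i) + np.linalg.pinv(L_i)) @ delta_s
v2_eq = -2.0 * lam * np.linalg.pinv(L_i + L_i) @ delta_s
assert np.allclose(v1_eq, v_ref) and np.allclose(v2_eq, v_ref)
```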

Hessian approximation

In [Lapresté 04], a method that approximates the control matrix up to the second order is proposed. This approximation is achieved by considering Taylor expansions at first and second order, by defining s ∈ RN, where N = n(n + 1)/2, as a function of pi, and finally by introducing the supplementary parameters pipj with 0 ≤ i ≤ j ≤ n, where n is the number of degrees of freedom. The Jacobian of s as a function of Ls and Hs is given by:

Js = Ls(p)Δp + Δp⊤Hs(p)Δp   (2.35)

where Jⁱs = [Lⁱs(p), H̃ⁱs(p)], with H̃ⁱs(p) obtained from the coefficients of the lower triangular part of Hⁱs(p) by dividing its diagonal by 2 and arranging these terms in a row vector. Finally, the control law is defined as:

v = K⁺Δs   (2.36)

where K⁺ is composed of the first n lines of the pseudo-inverse of the Jacobian matrix Js. This method is used in [Lapresté 04] to overcome the retreat and advance problem in IBVS.

2.5.5 Problems in visual servoing

Selecting a suitable set of visual features and designing a good control scheme should both be taken into account to avoid system failures. The most usual problems in visual servoing that are directly influenced by this selection, and that can be mitigated by a good selection, are local minima, singularities and visibility problems.


Local minima

Generally, local minima problems occur only in very specific configurations [Chaumette 98] [Gans 03a]. When trapped in a local minimum, the camera velocity is null (v = 0) while the feature errors have not been minimized, i.e. s − s∗ = φ ∈ Ker(L⁺). This results in converging to a final pose that is different from the desired one. When s is composed of three image points and Ls is of full rank, we have Ker(L⁺s) = 0, implying that there is no local minimum. However, when three points are used, the same image of the three points can be seen from four different camera poses, which means that four camera poses exist such that s = s∗, corresponding to four global minima. When at least four points are used, a unique pose can theoretically be obtained. However, dim(Ls) = 8 × 6, implying that dim Ker(L⁺s) = 2. Using four points, the control law tries to enforce 8 constraints on the image trajectory while the system has only six degrees of freedom. In that case, due to the existence of unrealizable motions in the image that may be computed by the control law, a local minimum may be reached [Chaumette 98].

Several control strategies have been used to avoid local minima in visual servoing. For

example, and as already said before, in [Deng 05] and [Gans 07], a hybrid motion control

strategy that considers the local minima problem is presented while in [Mezouar 02] a path

planning strategy is developed.

Singularity

When the interaction matrix is singular, causing a task singularity, the velocity tends to infinity and the system is unstable. The interaction matrix may become singular if image points are chosen as visual features. For instance, when four points are used and the required camera motion is a pure rotation of 180° around the optical axis, the image trajectory of each point is a straight line passing through the principal point, a configuration where the interaction matrix is singular [Chaumette 98]. For the considered motion, the choice of image point coordinates is really inadequate. Indeed, if the four points in the image are replaced by cylindrical coordinates (ρ, θ), the singularity is avoided for the same initial position, and a pure rotation around the camera optical axis is performed. Other singular configurations will be described in Chapter 4.

In PBVS [Wilson 96] [Martinet 96] [Deng 03], most representations avoid the problems of local minima and/or singularities of the corresponding interaction matrices. The behavior depends on the chosen e: for instance, a 3D straight line trajectory between the initial and final camera poses is obtained when e is defined by (2.13). The singularity problem can also be addressed by using a potential function [Hashimoto 00b], a partitioning approach [Corke 01], a switching approach [Deng 05], a hybrid approach [Malis 99], or PBVS [Wilson 96].


Feature visibility

Using classical 2-D and 3-D visual servoing, in case of a bad calibration and of a large initial camera displacement, the target may leave the camera field of view [Malis 99] [Chesi 04b]. That is why it is desirable to have servoing controls able to keep the features in the camera field of view, so as to obtain a reliable feedback signal during the servoing process. To minimize the probability that the object leaves the FOV, a repulsive potential field [Chesi 04b], a path planning strategy [Mezouar 02], switching strategies [Gans 07], or structured light [Xie 09] can be adopted.

2.6 Performance and robustness

A study of the dynamic performance of PBVS is presented in [Deng 03]. It considers different orientation representations for controlling the robot end effector in the desired and in the current end-effector frames. In [Kyrki 04a], the effect of the vision system properties on the performance of the control system is investigated by analyzing the propagation of image errors through pose estimation and the visual servoing control law.

A quantitative performance evaluation for specific tasks can be performed using metric measures as in [Gans 03c]. These measures include the number of iterations required to converge, the error at termination, the maximum feature excursion, the maximum camera excursion and the maximum camera rotation. The evaluation and comparison of different visual servoing approaches can thus be performed for a given task, although these metrics will not be used in the following chapters.

Recently, in [Malis 10], the robustness of the standard image-based visual servoing control and of the efficient second order approximation method (ESM) has been studied theoretically with respect to errors on the 3-D parameters introduced in the interaction matrix, when any central catadioptric camera is used as a sensor and when point coordinates are used as inputs of the control scheme. It has been noticed that the stability region is similar for all catadioptric cameras, and that it is not expected to enlarge it by simply changing the type of central camera used.

In [JanabiSharifi 10a], a comprehensive comparison of IBVS and PBVS is presented by

comparing system stability and dynamic performance in the Cartesian and image spaces on

a common framework using both predefined and taught references. The robustness and sensitivity analyses are investigated with respect to all the camera, target, and robot modeling

errors. Furthermore, other fundamental characteristics of the two methods, such as sensory

task space singularity and local minima are also compared.


2.7 Conclusion

In this chapter, we presented a review of the state of the art concerning two main components of visual servoing: feature selection and the design of the control scheme. This review is summarized in two graphs, illustrated in Fig. 2.4 for feature selection and in Fig. 2.5 for visual servoing control schemes. For a chosen combination of both, different behaviors of the robot system can be obtained. That is why several works have been devoted to the robustness with respect to the selected features, the structure of the employed control scheme, the existence of errors and uncertainty in robot or camera calibration, and errors and uncertainty in the input signals and in the object model.


Figure 2.4 – Visual feature selection


Figure 2.5 – Visual servoing control schemes



Chapter 3

New first order control scheme

In this chapter, we analyze and compare five image-based visual servoing control laws. Three

of them are classical while two new ones are proposed. The first new control law is based

on a behavior controller to adjust the movement of the camera. It can also be used to switch

between the classical methods. The second control law is designed to try to obtain the global

stability of the system. An analytical study of all control schemes in the presence of translational motion along and rotational motion around the optical axis is also presented. Finally, simulation and experimental results show that the new control law with a behavior controller has a wider range of success than the other control schemes and can be used to avoid local minima and singularities. The work described in this chapter led to the following publications: [Marey 08b], [Marey 08a], [Marey 08c].

This chapter is organized as follows: In Section 3.1, classical control schemes are recalled.

In Section 3.2, the control law with a behavior controller is proposed while in Section 3.3,

another control law is proposed and its global stability is studied. In Section 3.4, an analysis

of the control laws in the presence of rotation and translation w.r.t. the camera optical axis is

presented. Finally, simulation and experimental results are presented in Section 3.5.

3.1 Classical image-based control matrix

Let s ∈ Rk be the vector of the k selected visual features, s∗ their desired value and v ∈ R6 the instantaneous velocity of the camera. Most classical control laws have the following form:

v = −λ L⁺s (s − s∗)   (3.1)

where λ is a gain and L⁺s is the pseudo-inverse of an estimation or an approximation Ls of the interaction matrix related to s. Different forms for Ls have been proposed in the past [Espiau 92,


Chaumette 06]. For simplicity, we consider that all values can be computed accurately, lead-

ing to the following choices

1) : Ls = Ls∗ (3.2)

2) : Ls = Ls(t) (3.3)

3) : Ls = (Ls∗ + Ls(t))/2. (3.4)

In the first case, Ls = Ls∗ is constant during all the servo since it is the value of the interaction matrix computed at the desired configuration. It is thus not required to estimate the depth of each feature at each iteration: the interaction matrix is computed off line, either by knowing the 3D model of the object or by computing it from an image acquired at the desired configuration. In the second case, Ls = Ls(t) changes at each iteration of the servo since the current value of the interaction matrix is used. The depth can in this case be estimated using camera motion measurements [Chaumette 96], using a depth observer [DeLuca 08b], or from pose

estimation. Finally, in the third case, the average of these two values is used [Malis 04] and, similarly to the second case, the depth is needed at each iteration. As explained in [Tahri 10],

it is possible to improve the form given in (3.4) by using Ls = (Ls∗c∗Tc + Ls(t))/2 where

c∗Tc is the spatial motion transform matrix to transform velocities expressed in the desired

camera frame to the current camera frame. However, we will not consider this supplemen-

tary control scheme in the following, since it would be necessary to know the pose between

the current and the desired camera frame. The three usual choices (3.2), (3.3) and (3.4) for

Ls when used with (3.1) define three distinct control laws:

1) : v = −λ L+s∗(s− s∗) (3.5)

2) : v = −λ L+s(t)(s− s∗) (3.6)

3) : v = −2λ (Ls∗ + Ls(t))+(s− s∗). (3.7)

We will denote these control schemes D, C and A (for desired, current and average respectively) in the remainder of this chapter. On the one hand, near the desired pose where the error s − s∗ is small, the same behavior is obtained whatever the choice of Ls since we have in that case Ls(t) ≈ Ls∗. On the other hand, as soon as s − s∗ is large, it is well known that the choice of Ls induces a particular behavior of the system since we then have Ls(t) ≠ Ls∗. This motivates the current research on the determination of visual features such that the interaction matrix is constant in all the configuration space of the camera, but it is clearly still an open problem, and it is not the subject of this work to determine new visual features.

3.2 New controller with a behavior parameter

In our proposed control law, a hybrid matrix obtained by introducing a behavior controller β is used to partially combine the two interaction matrices Ls∗ and Ls(t):

Ls = Lβ = βLs∗ + (1 − β)Ls(t).   (3.8)


Using (3.8) in (3.1), we obtain a new control law, denoted G in the following (for "general"):

v = −λ (βLs∗ + (1 − β)Ls(t))⁺ (s − s∗)   (3.9)

Of course, if β = 1, we find again control law D; if β = 0, we obtain control law C, and

if β = 1/2, we obtain control law A. Control law G could thus be used to switch between

the different control schemes during the execution of the task. As described in the previous

chapter, switching strategies have already been proposed in [Gans 07] [Hafez 07b] but, in

these works, switching is performed between image-based and position-based approaches,

that is between different features in order to improve the performance of the visual servoing,

while here the features are the same but their control matrix is different.

More generally, the main interesting property of control law G (3.9) is that the behavior

of the system changes gradually from the behavior using control law C to the behavior using

control law A when β varies from 0 to 1/2, and similarly, the behavior changes gradually

from the behavior using control law A to the behavior using control law D when β varies

from 1/2 to 1. Hence, this new control scheme allows us to adapt the behavior of the system

based on the selected value of β. We will see in Section 3.5 that particular values of β indeed allow the system to converge while the other control schemes fail.
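The following sketch (with random matrices standing in for Ls∗ and Ls(t)) checks numerically that control law G recovers control laws D, C and A for β = 1, 0 and 1/2 respectively:

```python
import numpy as np

rng = np.random.default_rng(3)
L_star = rng.standard_normal((8, 6))  # stand-in for L_s*
L_t = rng.standard_normal((8, 6))     # stand-in for L_s(t)
e = rng.standard_normal(8)            # s - s*
lam = 0.5

def v_G(beta):
    # Control law G (3.9)
    return -lam * np.linalg.pinv(beta * L_star + (1 - beta) * L_t) @ e

v_D = -lam * np.linalg.pinv(L_star) @ e            # (3.5)
v_C = -lam * np.linalg.pinv(L_t) @ e               # (3.6)
v_A = -2 * lam * np.linalg.pinv(L_star + L_t) @ e  # (3.7)

# beta = 1, 0 and 1/2 recover control laws D, C and A respectively;
# the last equality holds because pinv(M/2) = 2 pinv(M).
assert np.allclose(v_G(1.0), v_D)
assert np.allclose(v_G(0.0), v_C)
assert np.allclose(v_G(0.5), v_A)
```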

Let us finally note that in case of modeling or calibration errors, the matrices Ls∗ and Ls(t) have to be replaced by their approximations, but that does not change the general properties of the control schemes as long as the approximations are not too coarse.

3.3 Pseudo-GAS control law

Control laws D, C, and A are known to be only locally asymptotically stable [Chaumette 06]. The same is of course true for control law G. In this section, an attempt to obtain a globally asymptotically stable (GAS) control scheme is presented.

3.3.1 Modeling

Let us choose as task function e ∈ R6 the following error:

e = L⁺s∗ (s − s∗)   (3.10)

where, as usual, s∗ is chosen such that Ls∗ is a full rank matrix. Since Ls∗ is constant, the time variation of e is given by ė = Le v, where Le ∈ R6×6 is given by

Le = L⁺s∗ Ls.

We can note that Le is of full rank 6 as soon as Ls is also of full rank 6. To achieve an exponential decrease of e (that is, ė = −λe), we immediately obtain the control scheme

v = −λ Le⁻¹ e,   (3.11)


which is nothing but

v = −λ (L⁺s∗ Ls)⁻¹ L⁺s∗ (s − s∗).   (3.12)
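As a sanity check, when Ls = Ls∗ (i.e. at the desired pose), L⁺s∗Ls∗ = I for a full column rank Ls∗, so the pseudo-GAS law reduces to control law D. A minimal numpy sketch with a random matrix standing in for Ls∗:

```python
import numpy as np

rng = np.random.default_rng(4)
L_star = rng.standard_normal((8, 6))  # stand-in for L_s*, full rank 6
e_s = rng.standard_normal(8)          # s - s*
lam = 0.5

def v_PG(L_s):
    # Pseudo-GAS control law (3.12): v = -lambda (L_s*^+ L_s)^-1 L_s*^+ (s - s*)
    Lp = np.linalg.pinv(L_star)
    return -lam * np.linalg.inv(Lp @ L_s) @ Lp @ e_s

# When L_s = L_s*, L_s*^+ L_s* = I (full column rank), so PG reduces to
# control law D: v = -lambda L_s*^+ (s - s*)
v_D = -lam * np.linalg.pinv(L_star) @ e_s
assert np.allclose(v_PG(L_star), v_D)
```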

3.3.2 Stability analysis

To study the stability of the control scheme (3.12), let us consider the candidate Lyapunov function L = ½ ‖e(t)‖². We have L̇ = e⊤ė = e⊤Le v. Applying (3.11), we obtain

L̇ = −λ e⊤Le Le⁻¹ e = −λ e⊤e < 0, ∀e ≠ 0.

The control scheme (3.12) thus seems very promising since it is GAS in the task function space. Indeed, L always decreases to 0 whatever the initial value of e. Furthermore, e follows the specified behavior ė = −λe as soon as Ls∗ and Ls are computed accurately.

Unfortunately, to end the demonstration of the global stability, we should demonstrate that e = 0 if and only if s = s∗. That is usually impossible since, as soon as

(s − s∗) ∈ Ker L⁺s∗,   (3.13)

we have e = 0, which implies v = 0 while s ≠ s∗, which corresponds to a local minimum [Chaumette 98]. In other words, GAS in the task function space does not necessarily imply GAS in SE(3) or in the visual features space. The task function (3.10) forms a local diffeomorphism with SE(3), but not a global one as soon as a configuration satisfying (3.13) exists. Control law (3.12) will be denoted PG in the following (for “pseudo-GAS”).

In Section 3.5, we will exhibit some configurations which lead to local minima. We can

thus conclude that, as for all the previous control schemes, only the local asymptotic sta-

bility of PG can be demonstrated when image point coordinates are used as visual features.

In spite of this disappointing result, control scheme PG is still interesting, since it may be

possible in the future to determine visual features such that Ker L+s∗ = 0, leading to GAS.

3.4 Motion along and around the optical axis

This section presents an analytical study of all the control laws described previously when the camera displacement is a combination of a translation tz along and a rotation rz around the camera optical axis. As usually done in IBVS, we have considered an object composed of four points forming a square (see Fig. 3.1). The study includes three cases in which the motion along the z-axis is from Z to Z∗. In the first case rz = 0°, which means no rotation around the camera optical axis, while rz = 90° in the second case and rz = 180° in the third case. In all cases, the object plane is parallel to the image plane and the desired pose is common. Using four points, the initial visual feature vector s and the desired one s∗ are:

s = (x0, x1, x2, x3, y0, y1, y2, y3)
s∗ = (x∗0, x∗1, x∗2, x∗3, y∗0, y∗1, y∗2, y∗3)   (3.14)

where xi = Xi/Z, yi = Yi/Z, x∗i = X∗i/Z∗ and y∗i = Y∗i/Z∗ for i = 0, . . . , 3.


Figure 3.1 – Motions along the camera optical axis

3.4.1 Case 1: tz = (Z → Z∗)

We consider in this case a pure translation along the camera optical axis. Let the coordinates of the four points in the camera frame at the initial and the desired configurations be defined by pi0 = (−L, L, Z), pi1 = (L, L, Z), pi2 = (L, −L, Z), pi3 = (−L, −L, Z), and pd0 = (−L, L, Z∗), pd1 = (L, L, Z∗), pd2 = (L, −L, Z∗), pd3 = (−L, −L, Z∗). Let l = L/Z and l∗ = L/Z∗. By defining the feature vector as in (3.14), we obtain the initial feature vector s = (−l, l, l, −l, l, l, −l, −l) and the corresponding desired feature vector s∗ = (−l∗, l∗, l∗, −l∗, l∗, l∗, −l∗, −l∗), as depicted in Fig. 3.2.

Figure 3.2 – Case 1: Initial and desired image feature points when tz = (Z → Z∗)


Using s∗ and the analytical form of Lx, the stacked interaction matrix computed at the desired configuration is given by:

$$L_{s^*} = \begin{bmatrix}
-1/Z^* & 0 & -l^*/Z^* & -l^{*2} & -(1+l^{*2}) & l^* \\
-1/Z^* & 0 & l^*/Z^* & l^{*2} & -(1+l^{*2}) & l^* \\
-1/Z^* & 0 & l^*/Z^* & -l^{*2} & -(1+l^{*2}) & -l^* \\
-1/Z^* & 0 & -l^*/Z^* & l^{*2} & -(1+l^{*2}) & -l^* \\
0 & -1/Z^* & l^*/Z^* & 1+l^{*2} & l^{*2} & l^* \\
0 & -1/Z^* & l^*/Z^* & 1+l^{*2} & -l^{*2} & -l^* \\
0 & -1/Z^* & -l^*/Z^* & 1+l^{*2} & l^{*2} & -l^* \\
0 & -1/Z^* & -l^*/Z^* & 1+l^{*2} & -l^{*2} & l^*
\end{bmatrix} \tag{3.15}$$

and in this particular case, the interaction matrix Ls of the initial set of visual features is given by:

Ls = Ls∗ |(l∗=l, Z∗=Z)   (3.16)

It is possible to compute the analytical form of Lβ⁺ by using Z = l∗Z∗/l and injecting (3.16) and (3.15) in (3.8). We obtain:

$$L_\beta^+ = (\beta L_{s^*} + (1-\beta) L_s)^+ = \begin{bmatrix}
-c_0 & -c_0 & -c_0 & -c_0 & -c_1 & c_1 & -c_1 & c_1 \\
-c_1 & c_1 & -c_1 & c_1 & -c_0 & -c_0 & -c_0 & -c_0 \\
-c_2 & c_2 & c_2 & -c_2 & c_2 & c_2 & -c_2 & -c_2 \\
-c_3 & c_3 & -c_3 & c_3 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & c_3 & -c_3 & c_3 & -c_3 \\
c_4 & c_4 & -c_4 & -c_4 & c_4 & -c_4 & -c_4 & c_4
\end{bmatrix} \tag{3.17}$$

where

$$c_0 = \frac{l^* Z^*}{4(l^*\beta + l(1-\beta))}, \quad
c_1 = \frac{l^* Z^* (1 + l^{*2}\beta + l^2(1-\beta))}{4(l^*\beta + l(1-\beta))(l^{*2}\beta + l^2(1-\beta))}, \quad
c_2 = \frac{l^* Z^*}{8(l^{*2}\beta + l^2(1-\beta))},$$

$$c_3 = \frac{1}{4(l^{*2}\beta + l^2(1-\beta))}, \quad
c_4 = \frac{1}{8(l^*\beta + l(1-\beta))}$$

Analyzing (3.17) shows that singularities exist for Lβ when l∗β + l(1 − β) = 0 or when βl∗² + (1 − β)l² = 0. These situations occur when β = 0 and l = 0, which corresponds to an initial pose at infinity. Another singularity is obtained when β = 1 and l∗ = 0, corresponding to setting the desired position at infinity. When β = 1/2, a singularity is also obtained if either l∗ = 0 or l = 0, corresponding to setting the desired or the initial position at infinity, similarly to control laws C and D respectively. Let us note that none of these configurations is realistic in practice.

The error vector e to be used in the control scheme is given by:

e = s − s∗ = (−l + l∗, l − l∗, l − l∗, −l + l∗, l − l∗, l − l∗, −l + l∗, −l + l∗)   (3.18)

If the gain of the control law is λ, then the initial instantaneous velocity vi of the camera can be deduced by injecting (3.17) and (3.18) in (3.9), which gives:

vi = (0, 0, −λ(l − l∗)l∗Z∗/(βl∗² + (1 − β)l²), 0, 0, 0)   (3.19)

As expected, there is only a translational velocity component vz for all control schemes D, C, A and G. The direction of motion depends on the relative values of l and l∗. When Z > Z∗, which is equivalent to l < l∗, a forward motion is performed, making the camera move toward the object to reach its desired position, while when Z < Z∗, the initial motion makes the camera move backward, away from the object. The magnitude of vz is proportional to the difference between l and l∗. To conclude, there is no particular difference in that case between the control schemes C, D and A, and all provide a satisfactory behavior.
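The closed form (3.19) can be checked numerically by stacking the standard point interaction matrices of the four points. In this sketch (assumed values: λ = 1, Z = 1, Z∗ = 0.5, L = 0.1, β = 1/2), the computed velocity is a pure translation along the optical axis with the magnitude predicted by (3.19):

```python
import numpy as np

def Lp(x, y, Z):
    # Interaction matrix of an image point (x, y) with depth Z
    return np.array([[-1/Z, 0, x/Z, x*y, -(1 + x*x), y],
                     [0, -1/Z, y/Z, 1 + y*y, -x*y, -x]])

def stack(pts, Z):
    return np.vstack([Lp(x, y, Z) for x, y in pts])

lam, L, Z, Zs, beta = 1.0, 0.1, 1.0, 0.5, 0.5
l, ls = L / Z, L / Zs
square = [(-1, 1), (1, 1), (1, -1), (-1, -1)]   # the four corners

s = np.array([(sx*l, sy*l) for sx, sy in square]).ravel()
s_star = np.array([(sx*ls, sy*ls) for sx, sy in square]).ravel()

L_beta = beta * stack([(sx*ls, sy*ls) for sx, sy in square], Zs) \
       + (1 - beta) * stack([(sx*l, sy*l) for sx, sy in square], Z)
v = -lam * np.linalg.pinv(L_beta) @ (s - s_star)

# Closed form (3.19): pure translation along the optical axis
vz = -lam * (l - ls) * ls * Zs / (beta * ls**2 + (1 - beta) * l**2)
assert np.allclose(v, [0, 0, vz, 0, 0, 0], atol=1e-10)
```

With these values, vz = 0.4 m/s: a forward motion, as expected since Z > Z∗.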

3.4.2 Case 2: rz = 90° and tz = (Z → Z∗)

By considering the coordinates of the four points w.r.t. the camera frame at the desired pose as in case 1, while the coordinates at the initial pose are now pi0 = (−L, −L, Z), pi1 = (−L, L, Z), pi2 = (L, L, Z) and pi3 = (L, −L, Z), the initial value of s is si = (−l, −l, l, l, −l, l, l, −l) and the error vector is si − s∗ = (−l + l∗, −l − l∗, l − l∗, l + l∗, −l − l∗, l − l∗, l + l∗, −l + l∗) (see Fig. 3.3).

The interaction matrix of the features at the initial configuration is:

$$L_s = \begin{bmatrix}
-1/Z & 0 & -l/Z & l^2 & -(1+l^2) & -l \\
-1/Z & 0 & -l/Z & -l^2 & -(1+l^2) & l \\
-1/Z & 0 & l/Z & l^2 & -(1+l^2) & l \\
-1/Z & 0 & l/Z & -l^2 & -(1+l^2) & -l \\
0 & -1/Z & -l/Z & 1+l^2 & -l^2 & l \\
0 & -1/Z & l/Z & 1+l^2 & l^2 & l \\
0 & -1/Z & l/Z & 1+l^2 & -l^2 & -l \\
0 & -1/Z & -l/Z & 1+l^2 & l^2 & -l
\end{bmatrix} \tag{3.20}$$

Injecting the interaction matrix for the desired features, as given in (3.15), and the interaction matrix (3.20) in (3.8), we easily find the analytical form of Lβ, from which Lβ⁺ is


Figure 3.3 – Case 2: Initial and desired image feature points when rz = 90o and tz = (Z → Z∗)

obtained after computations and simplifications:

$$L_\beta^+ = (\beta L_{s^*} + (1-\beta) L_s)^+ = \begin{bmatrix}
-c_0 & -c_0 & -c_0 & -c_0 & -c_1 & c_1 & -c_1 & c_1 \\
-c_1 & c_1 & -c_1 & c_1 & -c_0 & -c_0 & -c_0 & -c_0 \\
-c_3 & c_4 & c_3 & -c_4 & c_4 & c_3 & -c_4 & -c_3 \\
-c_5 & c_5 & -c_5 & c_5 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & c_5 & -c_5 & c_5 & -c_5 \\
c_7 & c_6 & -c_6 & -c_7 & c_6 & -c_7 & -c_6 & c_7
\end{bmatrix} \tag{3.21}$$

where

$$c_0 = \frac{l^* Z^*}{4(\beta l^* + (1-\beta) l)}, \qquad
c_1 = \begin{cases} 0 & \text{if } \beta l^{*2} = (1-\beta) l^2 \\[1ex] c_0 \dfrac{\beta(1+l^{*2}) + (1-\beta)(1+l^2)}{\beta l^{*2} - (1-\beta) l^2} & \text{else} \end{cases}$$

$$c_3 = \frac{l^* Z^* (\beta l^* + (1-\beta) l)}{8((1-\beta)^2 l^3 + \beta^2 l^{*3})}, \qquad
c_4 = \frac{l^* Z^* (\beta l^* - (1-\beta) l)}{8((1-\beta)^2 l^3 + \beta^2 l^{*3})}$$

$$c_5 = \begin{cases} 0 & \text{if } \beta l^{*2} = (1-\beta) l^2 \\[1ex] \dfrac{-1}{4(\beta l^{*2} - (1-\beta) l^2)} & \text{else} \end{cases}, \qquad
c_6 = \frac{\beta l^{*2} + (1-\beta) l^2}{8((1-\beta)^2 l^3 + \beta^2 l^{*3})}, \qquad
c_7 = \frac{\beta l^{*2} - (1-\beta) l^2}{8((1-\beta)^2 l^3 + \beta^2 l^{*3})}$$


Using the value of si − s∗, the initial velocity vi is easily deduced from (3.1) as:

vi = (0, 0, vz, 0, 0, ωz)   (3.22)

where

vz = λZ∗l∗(βl∗² − (1 − β)l²) / (β²l∗³ + (1 − β)²l³),  ωz = λll∗(βl∗ + (1 − β)l) / (β²l∗³ + (1 − β)²l³)

As expected, the initial camera motion consists in performing a translation combined with a rotation, whose values only depend on the image data and on the chosen values for β and λ. We can note that Lβ is singular if βl∗² = (1 − β)l². For instance, such a singularity occurs when l = l∗ (i.e. Z = Z∗) and β = 1/2, which is very surprising. The control law A proposed in [Malis 04] is thus singular for a pure rotation of 90°, which had not been exhibited before as far as we know. In fact, the only way to avoid this singularity whatever the values of l and l∗ is to select β = 0 or β = 1. As can be seen in (3.22), this singularity has no effect on the computed velocity in perfect conditions but, as we will see in Section 3.5, a quite unstable behavior is obtained in the presence of image noise or for configurations near that singularity (such that, for instance, the object plane is almost parallel to the image plane).

When Z = Z∗, then l = l∗ and the initial velocity vi becomes

vi = (0, 0, λZ∗(2β − 1)/(2β² − 2β + 1), 0, 0, λ/(2β² − 2β + 1)).

In that classical case, the velocity vi contains an unexpected translation whose direction depends on the value of β (vz < 0 if β < 1/2 and vz > 0 if β > 1/2). The only way to avoid this nonzero translation is to select β = 1/2, as already shown in [Malis 04], but Lβ is singular in that case.

Coming back to the more general case and setting β = 1 in Lβ⁺, the initial velocity vi using control law D is given by

vi = (0, 0, λZ∗, 0, 0, λl/l∗).   (3.23)

Whatever the value of Z, that is even when Z < Z∗ in which case the camera has to move

backward, the initial camera motion contains a forward translational term. This surprising

result extends the same property obtained when Z = Z∗ [Chaumette 06].
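The counter-intuitive result (3.23) can be reproduced numerically: even when Z < Z∗, where the camera should move backward, control law D produces a forward translation vz = λZ∗. A minimal numpy sketch of case 2 with β = 1, using the standard point interaction matrix:

```python
import numpy as np

def Lp(x, y, Z):
    # Interaction matrix of an image point (x, y) with depth Z
    return np.array([[-1/Z, 0, x/Z, x*y, -(1 + x*x), y],
                     [0, -1/Z, y/Z, 1 + y*y, -x*y, -x]])

lam, L, Z, Zs = 1.0, 0.1, 0.5, 1.0   # note Z < Z*: the camera should move backward
l, ls = L / Z, L / Zs

# Initial points: the square rotated by 90 deg; desired points as in case 1
s_init = [(-l, -l), (-l, l), (l, l), (l, -l)]
s_des = [(-ls, ls), (ls, ls), (ls, -ls), (-ls, -ls)]

L_star = np.vstack([Lp(x, y, Zs) for x, y in s_des])
e = np.array(s_init).ravel() - np.array(s_des).ravel()

v = -lam * np.linalg.pinv(L_star) @ e   # control law D (beta = 1)

# (3.23): forward translation vz = lambda Z* regardless of Z,
# and rotation omega_z = lambda l / l*
assert np.isclose(v[2], lam * Zs)
assert np.isclose(v[5], lam * l / ls)
assert np.allclose(v[[0, 1, 3, 4]], 0, atol=1e-10)
```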

Setting β = 0, the initial velocity vi using control law C is now

vi = (0, 0, −λl∗Z∗/l, 0, 0, λl∗/l).   (3.24)

In that case, the initial camera motion contains a backward translational term whatever the value of Z, that is even when Z ≥ Z∗. We can even note that the smaller l is, i.e. the larger Z is, the larger the initial backward motion, which is even more surprising than the result obtained for β = 1. These results thus largely extend the property exhibited in [Corke 01]


when Z = Z∗. By comparing (3.23) and (3.24), we can also note that the amplitude of the rotational motion using control laws D and C is surprisingly not the same as long as l ≠ l∗, that is as soon as Z ≠ Z∗.

Setting β = 1/2, the velocity vi using control law A is

vi = (0, 0, 2λZ∗l∗(l∗² − l²)/(l∗³ + l³), 0, 0, 2λll∗(l + l∗)/(l³ + l∗³)).

In that case, a good behavior is obtained since the translational motion is always in the expected direction: vz < 0 when l∗ < l, that is when Z < Z∗; vz > 0 when l∗ > l (Z > Z∗); and, as already said, vz = 0 when l = l∗ (where Z = Z∗, but where Lβ is singular).

Finally, the velocity vi given by the new control law PG is

vi = (0, 0, −λl∗Z∗/l, 0, 0, λl∗/l),

that is, exactly the same velocity vi as given in (3.24) by control law C.

3.4.3 Case 3: rz = 180° and tz = (Z → Z∗)

We now consider the more problematic case where the camera displacement is composed of a translation and of a rotation of 180° around the camera optical axis (see Fig. 3.4). The coordinates of the four points in the camera frame at the initial pose are pi0 = (L, L, Z), pi1 = (−L, L, Z), pi2 = (−L, −L, Z) and pi3 = (L, −L, Z).

Figure 3.4 – Case 3: Initial and desired image feature points when rz = 180o and tz = (Z → Z∗)


In that case, si = (l, −l, −l, l, −l, −l, l, l) and the corresponding interaction matrix, computed at the initial configuration of depth Z, is:

$$L_s = \begin{bmatrix}
-1/Z & 0 & l/Z & -l^2 & -(1+l^2) & -l \\
-1/Z & 0 & -l/Z & l^2 & -(1+l^2) & -l \\
-1/Z & 0 & -l/Z & -l^2 & -(1+l^2) & l \\
-1/Z & 0 & l/Z & l^2 & -(1+l^2) & l \\
0 & -1/Z & -l/Z & 1+l^2 & l^2 & -l \\
0 & -1/Z & -l/Z & 1+l^2 & -l^2 & l \\
0 & -1/Z & l/Z & 1+l^2 & l^2 & l \\
0 & -1/Z & l/Z & 1+l^2 & -l^2 & -l
\end{bmatrix} \tag{3.25}$$

The control matrix Lβ⁺ is now given by:

$$L_\beta^+ = \begin{bmatrix}
-c_0 & -c_0 & -c_0 & -c_0 & -c_1 & c_1 & -c_1 & c_1 \\
-c_1 & c_1 & -c_1 & c_1 & -c_0 & -c_0 & -c_0 & -c_0 \\
-c_3 & c_3 & c_3 & -c_3 & c_3 & c_3 & -c_3 & -c_3 \\
-c_4 & c_4 & -c_4 & c_4 & 0 & 0 & 0 & 0 \\
0 & 0 & 0 & 0 & c_4 & -c_4 & c_4 & -c_4 \\
c_5 & c_5 & -c_5 & -c_5 & c_5 & -c_5 & -c_5 & c_5
\end{bmatrix} \tag{3.26}$$

where

$$c_0 = \frac{l^* Z^*}{4(\beta l^* + (1-\beta) l)}, \qquad
c_1 = c_0 \frac{\beta(1+l^{*2}) + (1-\beta)(1+l^2)}{\beta l^{*2} + (1-\beta) l^2}$$

$$c_3 = \begin{cases} 0 & \text{if } \beta l^{*2} = (1-\beta) l^2 \\[1ex] \dfrac{l^* Z^*}{8(\beta l^{*2} - (1-\beta) l^2)} & \text{else} \end{cases}, \qquad
c_4 = \frac{1}{4(\beta l^{*2} + (1-\beta) l^2)}$$

$$c_5 = \begin{cases} 0 & \text{if } \beta l^* = (1-\beta) l \\[1ex] \dfrac{1}{8(\beta l^* - (1-\beta) l)} & \text{else} \end{cases}$$

Proceeding as before, using si − s∗ = (l + l∗, −l − l∗, −l − l∗, l + l∗, −l − l∗, −l − l∗, l + l∗, l + l∗), we obtain

$$v_i = (0,\; 0,\; v_z,\; 0,\; 0,\; 0) \qquad \text{where} \qquad
v_z = \begin{cases} 0 & \text{if } \beta l^{*2} = (1-\beta) l^2 \\[1ex] \dfrac{\lambda Z^* l^* (l + l^*)}{\beta l^{*2} - (1-\beta) l^2} & \text{else} \end{cases}$$

In all cases, no rotational motion is produced while a translational motion is generally obtained, except when βl∗² = (1−β)l², in which case Lβ is singular, leading to a repulsive local minimum where vz = 0. Such a case occurs for instance when Z = Z∗ (i.e. l = l∗) and β = 1/2, which corresponds to the control law proposed in [Malis 04]. Another singularity occurs when βl∗ = (1−β)l, which is also the case when l = l∗ and β = 1/2.
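This repulsive local minimum can be checked numerically. The sketch below is only an illustration, not the thesis implementation: it assumes the standard point-feature interaction matrix and the hybrid form Lβ = βLs∗ + (1−β)Ls, and verifies that Lβ loses rank when Z = Z∗ and β = 1/2.

```python
import numpy as np

def L_point(x, y, Z):
    # Standard interaction matrix of an image point (x, y) at depth Z
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def stack(points):
    return np.vstack([L_point(x, y, Z) for (x, y, Z) in points])

# Desired view: centered square at depth Z* = Z; current view: the same
# square after a 180 deg rotation about the optical axis (negated points).
l, Z = 0.2, 0.5
desired = [(l, l, Z), (-l, l, Z), (-l, -l, Z), (l, -l, Z)]
current = [(-x, -y, Zp) for (x, y, Zp) in desired]

beta = 0.5
L_beta = beta * stack(desired) + (1 - beta) * stack(current)

print(np.linalg.matrix_rank(stack(desired)))  # 6: full rank
print(np.linalg.matrix_rank(L_beta))          # < 6: L_beta is singular
```

In this configuration the third and sixth columns of Lβ cancel exactly, so neither vz nor ωz can be produced, which is the analytical condition for the repulsive minimum.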

Of course, when Z = Z∗, we find again the results given in [Chaumette 98]: a pure forward motion is involved when β = 1 and a pure backward motion is involved when β = 0. More generally, for β = 1 and β = 0, the direction of motion is the same (i.e. forward or backward) whatever the values of l and l∗, that is, whatever the value of Z with respect to Z∗.


For any other value of β, the direction of motion depends on the relative value of Z with respect to Z∗. Unfortunately, there does not exist any value of β that gives a good behavior in that case, since no rotational motion is computed by the control law. Finally, no better results are obtained using control law PG, since we have in that case

vi = (0, 0, −λl∗Z∗(l+l∗)/l², 0, 0, 0),

which is the same as the one obtained when β = 0. We will validate the results obtained in this section through the experimental results presented at the end of the next section.

3.5 Results

In this section, simulation and experimental results are given. They have been obtained using the ViSP library [Marchand 05b], in which the new control schemes have been implemented.

3.5.1 Simulation results: Motion along and around camera optical axis

In this section, the behavior of the camera movement w.r.t. the camera optical axis is studied. A pose is denoted as p = (t, r), where t is the translation expressed in meters and r the roll, pitch and yaw angles expressed in degrees. The initial camera pose is (0, 0, 1, 0, 0, rz) and the desired camera pose is (0, 0, 0.5, 0, 0, 0). As in the previous section, we have considered a square and we have set L = 0.1, so that a = 0.2 and b = 0.1. We have also set λ = 0.1.

It is important to mention here that the simulation results are not always physically coherent, because the simulation software accepts very high velocities that cannot be applied to most real robotic systems. It is thus necessary to saturate the very high velocities resulting from the simulation by specifying translational and rotational saturation terms Tmax and Rmax. We consider that the real robot acts normally when the velocity sent to the robot controller satisfies Tmax = 0.05 m/s and Rmax = 5 deg/s. All velocity components are thus normalized when needed so that the maximal one is 0.05 m/s or 5 deg/s. Applying these saturation values forbids the application of too high, and thus unfeasible, velocities and causes a more realistic behavior of the control law when the system is near a critical situation. Such a situation may occur when Ls is ill-conditioned, that is, near a singularity.
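The saturation described above can be sketched as follows (a minimal illustration; the uniform scaling that preserves the direction of motion follows the normalization rule stated in the text):

```python
import numpy as np

T_MAX = 0.05                  # translational saturation (m/s)
R_MAX = 5.0 * np.pi / 180.0   # rotational saturation (5 deg/s, in rad/s)

def saturate(v):
    """Uniformly scale v = (vx, vy, vz, wx, wy, wz) so that no translational
    component exceeds T_MAX and no rotational component exceeds R_MAX."""
    scale = max(np.max(np.abs(v[:3])) / T_MAX,
                np.max(np.abs(v[3:])) / R_MAX,
                1.0)
    return v / scale

v = np.array([0.3, 0.0, -0.8, 0.0, 0.0, 0.6])  # an unfeasibly fast command
v_sat = saturate(v)
print(v_sat)  # the largest translational component is now exactly 0.05 m/s
```

Scaling all six components by the same factor keeps the direction of the camera motion unchanged; only its magnitude is limited.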

A general description of the camera behavior when employing the five control laws with all possible values of rz is given in the following, as well as the results for three cases: case A when rz = 120o, case B when rz = 170o, and case C when rz = 180o. The results of the three cases are depicted in Figs. 3.5, 3.6 and 3.7 respectively.

Applying control law D, the camera translates toward the desired pose without any additional movement as long as rz ≤ 78o. When rz > 78o, the camera continues its translational


[Figure 3.5 panels: (a) control law D; (b) control law C; (c) control law A; (d) control law PG; (e) control law G, β = 0.285]

Figure 3.5 – Results for case A: Desired pose is (0, 0, 0.5, 0, 0, 0) and initial pose is (0, 0, 1, 0, 0, 120).First column: image points trajectories, middle column: camera velocity components (in m/s andrad/s), last column: visual features error components and global error.


[Figure 3.6 panels: (a) control law C; (b) control law A; (c) control law PG; (d) control law G, β = 0.4; (e) control law G, β = 0.35]

Figure 3.6 – Results for case B: Desired pose is (0, 0, 0.5, 0, 0, 0) and initial pose is (0, 0, 1, 0, 0, 170).First column: image points trajectories, middle column: camera velocity components (in m/s andrad/s), last column: visual features error components and global error.


[Figure 3.7 plots: image points trajectories, visual features error components and camera velocity components]

Figure 3.7 – Results for case C: Desired pose is (0, 0, 0.5, 0, 0, 0) and initial pose is (0, 0, 1, 0, 0, 180);control laws (A and G), β = 0.5

motion after reaching Z = Z∗ and then moves back toward the desired pose, as depicted in Fig. 3.5(a). The translation increases as rz increases. When rz ≥ 155o, the control law fails since the camera reaches the object plane where Z = 0. Finally, as can be seen in Fig. 3.5(a), vz reaches its maximal value at the first iteration, and ωz after some iterations of the control scheme. Both are saturated during a few iterations.

Applying control laws C and PG, which have exactly the same behavior in the situations considered here, the camera translates correctly as long as rz ≤ 61o. When rz > 61o, the camera starts moving backward with maximum vz and then translates forward. The backward translation increases as rz increases. The maximum ωz is reached and saturated when the camera changes its translation from backward to forward, as depicted in Figs. 3.5(b) and 3.5(d). The number of iterations required to reach the desired pose increases rapidly when rz > 150o. Finally, when rz ≥ 178.6o, the backward translation is so large that the camera is not able to reach the desired pose.

Applying control law A, there is no additional translation as long as rz < 172o. This can be checked in Fig. 3.5(c). As discussed before, control law A has a singularity when rz = 180o, which is why the velocity components are saturated at the beginning of the servo for large values of rz (see Fig. 3.6(b)). Applying the saturation allows control law A to converge with a perfect behavior as long as rz < 180o. Indeed, as expected from our analytical demonstration in the previous section, when rz = 180o, only a translational velocity component vz ≠ 0 is involved, so the camera performs a forward translation until Z = Z∗, where a = b. The analytical condition for vz = 0 is then satisfied since we have β = 1/2. That is why vz is suddenly damped to zero, as illustrated in Fig. 3.7. Finally, the movement of the camera after iteration 34 is due to the numerical noise accumulated during the previous iterations.

Applying the new control law G, different behaviors are obtained depending on the value selected for β. When the value of β is near 0, 1 or 1/2, the behavior of the control law approaches the behavior of control laws C, D or A respectively. A good selection of β enhances the behavior of the control law for a given displacement. For example, in case A where rz = 120o, control law G allows the camera to reach its desired pose when β ∈ [−0.08, 1.19], with the best behavior obtained when β = 0.285, as depicted in Fig. 3.5(e). In that case, the


rotational velocity ωz is saturated after reaching its maximum value at the first iteration. The error on each point's coordinates also starts to decrease at the first iteration. When rz = 170o, the camera reaches its desired pose as long as β ∈ [0.33, 0.85], with the best behavior obtained when β = 0.4. Nice results are also obtained with β = 0.35.
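One iteration of the new control law G can be sketched in numpy as follows. This is an illustrative reconstruction, assuming the hybrid interaction matrix Lβ = βLs∗ + (1−β)Ls, so that β = 1, 0 and 1/2 recover control laws D, C and A respectively:

```python
import numpy as np

def L_point(x, y, Z):
    # Standard interaction matrix of an image point (x, y) at depth Z
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

def control_G(s, s_star, Z, Z_star, lam=0.1, beta=0.5):
    """v = -lam * pinv(L_beta) @ (s - s*), with
    L_beta = beta * L(s*, Z*) + (1 - beta) * L(s, Z).
    s and s_star are lists of (x, y) image points."""
    L_cur = np.vstack([L_point(x, y, Z) for (x, y) in s])
    L_des = np.vstack([L_point(x, y, Z_star) for (x, y) in s_star])
    L_beta = beta * L_des + (1 - beta) * L_cur
    e = np.concatenate([np.subtract(p, ps) for p, ps in zip(s, s_star)])
    return -lam * np.linalg.pinv(L_beta) @ e

# 180 deg rotation about the optical axis with Z = Z* = 0.5:
l = 0.1
s_star = [(l, l), (-l, l), (-l, -l), (l, -l)]
s = [(-x, -y) for (x, y) in s_star]
print(control_G(s, s_star, 0.5, 0.5, beta=1.0))  # pure forward motion (0, 0, 0.1, 0, 0, 0)
print(control_G(s, s_star, 0.5, 0.5, beta=0.0))  # pure backward motion (0, 0, -0.1, 0, 0, 0)
```

The two printed velocities reproduce the analytical result of Section 3.4: for a 180o rotation with Z = Z∗, β = 1 gives a pure forward translation and β = 0 a pure backward one, with no rotational component in either case.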

3.5.2 Experimental results: Singularities

The experimental results have been obtained on a six-degrees-of-freedom robot (see Fig. 3.8). They allow us to validate the analysis presented in Section 3.4 about the motion along and around the optical axis.

Figure 3.8 – Experimental system: Cartesian robot

3.5.2.1 Case E1

In the first experimental case, the required camera motion is composed of a rotation of 170o around the optical axis combined with a translation of 0.5 m along the optical axis toward the object (a square once again). As usual, the gain λ has been set to 0.1.

As expected, control law D unfortunately makes the points leave the camera field of view due to a forward motion, while control laws C and PG make the robot reach its joint limits due to a backward motion. As can be seen in Fig. 3.9.a, control law A starts with a high value of vz toward the object, while ωz increases until the translational motion is almost finished. Since the pure rotation rz = 90o corresponds to a singularity of control law A, as demonstrated in the analytical study, the behavior of the camera is quite unstable near this configuration, that is, during 400 iterations (from iteration 800 to 1200), as can be observed in the velocity components in Fig. 3.9.a. As can be seen in Fig. 3.9.b, using control law G with β = 0.4 significantly decreases the effect of the singularity near rz = 90o, while its effect completely disappears for β = 0.35 (see Fig. 3.9.c).


[Figure 3.9 panels: (a) control law A; (b) control law G, β = 0.45; (c) control law G, β = 0.35; (d) control laws A and G, β = 0.5]

Figure 3.9 – Experimental results for cases E1 and E2: rz = 170o in (a), (b) and (c); for case E2: rz = 180o in (d). First column: image points trajectories, middle column: camera velocity components (in m/s and rad/s), last column: visual features error components and global error.

3.5.2.2 Case E2

In this second case, the task is still to perform a translation of 0.5 m toward the object, but combined now with a rotation of 180o. Figure 3.9.d shows the results obtained for control law A (that is, G with β = 0.5). The velocity components show that the motion of the camera starts with a pure translation toward Z∗. From the analytical study, no rotational motion should occur. However, due to small image noise and to the use of a real robot, that is, a non-perfectly calibrated system, the robot moves away from the repulsive local minimum and


starts to rotate. The effect of the singularity at 90o is clearly visible but, after crossing it, the system converges to the desired pose.

3.5.3 Simulation results: Local minima

First, we consider two difficult configurations and compare the results obtained with the different control schemes described previously.

3.5.3.1 Case 1

The desired camera pose is (0, 0, 0.5, 0, 0, 0), which means that the camera has to be at 0.5 m in front of the square, such that the square appears as a centered square in the image. In this case, the configurations where (s − s∗) ∈ Ker L+s∗ correspond to very particular cases where the four points are aligned in the image [Chaumette 98]. The initial camera pose is (0, 0, 0.4, 80, 20, 10) and has thus a very different orientation from the desired one. The simulation results for control laws D, C, A and PG are depicted in Fig. 3.10. The classical schemes D, C and A lead the camera to converge to its desired pose while, using the new control law PG, the camera reaches a configuration where the four points are aligned in the image. In fact, such local minima are attractive for PG while they are not for all the other control schemes. As expected, the task function e defined in (3.10) converges exponentially to zero, as shown in Fig. 3.12.a, but that is not sufficient to obtain a good behavior of the system. Finally, we have checked with additional simulations that control law G converges to the desired configuration for any value of β ∈ [−1.9, 1.04] (see Fig. 3.10.e, where the result for β = 0.4 is given). It is thus not necessary that β ∈ [0, 1], and negative values can even be chosen. For this configuration, the value of β is thus not a crucial issue.

3.5.3.2 Case 2

The desired camera pose is now given by (0, 0, 1, 45, −30, 30), which means that the desired position of the image plane is not parallel to the object. The initial camera pose is given by (0, 0, 1, −46, 30, 30). In that case, where the desired position of the image plane is not parallel to the object, control law D is known to be subject to local minima. As can be seen in Fig. 3.11.a, using control law D, the camera is first motionless, as in a local minimum, and then starts to diverge so that the points leave the camera field of view. Even if we do not consider this constraint (we are here in simulation, where an image plane of infinite size can be assumed), the camera then reaches the object plane where Z = 0, leading of course to a failure. From the results depicted in Figs. 3.11.b, 3.11.c and 3.11.d, we can see that control laws C, A and PG all fail in a local minimum. For PG, we can note once again that the task function e converges exponentially to zero, as shown in Fig. 3.12.b. As for control law A, it is the first time, as far as we know, that such a local minimum problem has been exhibited. Finally, control law G with a behavior controller is the only one to converge to the desired position, as soon as 0.515 < β < 0.569 (see Fig. 3.11.e). The oscillations observed in the camera velocity and in the visual features allow the camera to leave the part of the workspace corresponding to the attractive area of the local minimum for the other control schemes.


[Figure 3.10 panels: (a) control law D; (b) control law C; (c) control law A; (d) control law PG; (e) control law G, β = 0.4]

Figure 3.10 – Results for case 1. Desired pose is (0, 0, 0.5, 0, 0, 0) and initial pose is(0, 0, 0.4, 80, 20, 10). First column: image points trajectories, middle column: camera velocity com-ponents (in m/s and rad/s), last column: visual features error components and global error.


[Figure 3.11 panels: (a) control law D; (b) control law C; (c) control law A; (d) control law PG; (e) control law G, β = 0.54]

Figure 3.11 – Results for case 2. Desired pose is (0, 0, 1, 45,−30, 30) and initial pose is(0, 0, 1,−46, 30, 30). First column: image points trajectories, middle column: camera velocity com-ponents (in m/s and rad/s), last column: visual features error components and global error.


[Figure 3.12 plots: task function error components (err 1 … err 6) and total error]

Figure 3.12 – Task function e for control law PG for case 1 (on the left) and case 2 (on the right)

3.5.3.3 Case 3

The desired camera pose is now given by (0, 0, 0.5, −45, 30, 30) and the initial camera pose by (0, 0, 0.5, 42, −27, 30). As in case 2, the desired position of the image plane is not parallel to the object, and control law C is subject to local minima. The results are depicted in Fig. 3.13. We can see that all control schemes fail and reach a local minimum, except control law D, which is indeed known to be less subject to this problem than C [Chaumette 98]. As for A, it is the second time, as far as we know, that such a local minimum problem is exhibited (case 2 being the first). For the new control law PG, we can note once again that the task function e converges exponentially to zero, as shown in Fig. 3.12. Finally, control law G converges to the desired position as soon as 0.75 < β < 1.7, which is coherent with the fact that control law D (where β = 1) converges while control laws C and A (where β = 0 and 1/2) fail.


[Figure 3.13 panels: (a) control laws D and G, β = 1; (b) control law C; (c) control law A; (d) control law PG; (e) control law G, β = 0.49]

Figure 3.13 – Results for case 3. Desired pose is (0, 0, 0.5,−45, 30, 30) and initial pose is(0, 0, 0.5, 42,−27, 30). First column: image points trajectories, middle column: camera velocity com-ponents (in m/s and rad/s), last column: visual features error components and global error.


3.6 Conclusion

In this chapter, new configurations have been exhibited, for the first time as far as we know: a local minimum for all classical control schemes, and especially for the control law proposed in [Malis 04]. This configuration has been found by studying a new control scheme built to try to obtain global asymptotic stability.

A singularity of the control scheme proposed in [Malis 04] has also been exhibited, and its effects have been emphasized through experiments obtained on a 6 DOF robot. Surprising new results have also been obtained for the other classical control schemes for motions combining a translation along and a rotation around the optical axis. Finally, a new control law based on a behavior controller has also been proposed. Setting β = 0, 1 or 1/2 would allow switching between the three most classical schemes, but we have preferred to analyse the behavior of the control scheme for all possible values of this parameter. In all considered cases (difficult configurations subject to local minima for all classical schemes, motion along and around the optical axis), it has always been possible to determine values of this parameter that provide a satisfactory behavior of the control scheme.

In this chapter, we also demonstrated the superiority of the proposed control law with a behavior controller over all the other classical control laws. As shown in the simulation results, only control law G is able to avoid the local minima and reach the desired pose, while all the other classical control laws fail to avoid these local minima.

The experimental results show that control law G helps to decrease the adverse effects due to a singular situation, and avoids them completely when the behavior controller β of the hybrid interaction matrix is selected appropriately. In fact, the suitable values of the behavior controller depend on the displacement that the camera has to realize, as illustrated in Fig. 3.14 for the rotation around the optical axis.


Figure 3.14 – Behavior for a rotation around the optical axis


Chapter 4

New second order control schemes

Generally, image-based visual servoing has been found to give satisfactory, accurate and robust results. However, as established in the previous chapter, singularities and local minima may appear, causing stability and convergence problems. In this chapter, we present new control schemes based on Halley's method as an attempt to obtain a robust system even when the desired configuration is singular. The new control schemes use the first and the second order derivatives of the error to be regulated to zero. The Hessian matrices of an image point are thus determined to be used in the control schemes. Preliminary experimental results obtained on a 6 dof eye-in-hand system show that a more accurate positioning can be obtained compared with classical methods. The work described in this chapter led to the publication [Marey 09].
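Halley's method for a scalar equation f(x) = 0, which these control schemes generalize to the vector error and its interaction and Hessian matrices, can be sketched as follows (a generic illustration, not the thesis implementation):

```python
def halley(f, df, d2f, x0, tol=1e-12, max_iter=50):
    """Halley's iteration: uses both the first and the second derivative
    of the error to be regulated to zero, giving cubic convergence."""
    x = x0
    for _ in range(max_iter):
        fx = f(x)
        if abs(fx) < tol:
            break
        x -= 2 * fx * df(x) / (2 * df(x) ** 2 - fx * d2f(x))
    return x

# Example: the cube root of 2
r = halley(lambda x: x**3 - 2, lambda x: 3 * x**2, lambda x: 6 * x, x0=1.0)
print(r)  # ~1.259921
```

Compared with the first order Newton step used implicitly by the classical control laws, the second derivative term in the denominator corrects the step near flat or singular regions of f.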

This chapter is organized as follows. In Section 4.1, singular configurations are discussed, followed by an analysis of the classical control schemes in Section 4.2. The new second order control schemes are proposed in Section 4.3 and the Hessian matrices are determined in Section 4.4. Finally, experimental results are presented in Section 4.5.

4.1 Introduction

Singular configurations correspond to a loss of rank of the Jacobian matrix that relates the features used as input of the control scheme to the control parameters. The classical control schemes are known to be unstable and very sensitive to noise and perturbations around singular configurations. We will discuss in this section different singular configurations in IBVS and the strategies used for avoiding them.


4.1.1 Different singular configurations in IBVS

Several singular configurations in IBVS have been exhibited in the literature. As explained in Chapter 2, a singularity is encountered by all classical control schemes for a pure rotation of 180o around the camera optical axis and, as exhibited in Chapter 3, a singularity also exists for control law A when a pure rotation of 90o around the optical axis has to be performed. In the following, additional singular configurations are discussed.

Singularity of three points

The most well known singularity appears for a target composed of three points. Indeed, for

that target, when the camera optical center lies on the surface of a cylinder built from these

three points (see Fig. 4.1), the interaction matrix related to the Cartesian coordinates of the

three image points is singular (with rank 5) [Michel 93], while it is of full rank 6 as soon as

the camera optical center lies outside this surface. The same singular configurations exist whatever the image features selected to represent the three points (cylindrical coordinates of the points, parameters representing the three straight lines that can be defined from the three points, etc.).

Figure 4.1 – Three point singularity for cylinder configuration

Another singularity of course occurs when the three points are aligned. But in that case we are in a degenerate configuration whatever the camera pose.
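The full-rank property away from the cylinder can be illustrated numerically. This is a sketch under the assumption of the standard point-feature interaction matrix; the chosen camera pose lies strictly inside, not on, the singularity cylinder:

```python
import numpy as np

def L_point(x, y, Z):
    # Standard interaction matrix of an image point (x, y) at depth Z
    return np.array([
        [-1 / Z, 0, x / Z, x * y, -(1 + x**2), y],
        [0, -1 / Z, y / Z, 1 + y**2, -x * y, -x],
    ])

# Three non-aligned points at the same depth: the optical center lies
# inside the cylinder built from them, so the stacked 6x6 interaction
# matrix is of full rank 6.
points = [(0.1, 0.0, 1.0), (-0.05, 0.1, 1.0), (-0.05, -0.1, 1.0)]
L3 = np.vstack([L_point(x, y, Z) for (x, y, Z) in points])
print(np.linalg.matrix_rank(L3))  # 6
```

On the cylinder surface the same computation would yield rank 5, the singular case exhibited in [Michel 93].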

Singularity of a circle

Another singular configuration has been exhibited in [Chaumette 93]: if the target is a circle (see Fig. 4.2), then the interaction matrix related to any set of parameters representing the image of that circle, which is an ellipse in the general case, is always of rank 5, except when


Figure 4.2 – Centered circle singularity

the circle appears in the image as a centered circle, in which case the interaction matrix if of

rank 3.

Singularity of the norm

More general singular configurations can be exhibited: whatever the set s of features selected and its desired value s∗, the interaction matrix related to ‖s − s∗‖ is always of full rank 1, except when s = s∗, in which case the interaction matrix is null [Fruchard 06]. This case is extremely problematic since the singularity occurs at the desired configuration, where we would like the system to be stable and robust. As we will see in Chapter 6, this singularity can be handled by defining a switching strategy when the system nears its desired configuration.

4.1.2 Avoiding singularities in IBVS

Adding features

Usually, these singular configurations are trivially avoided by selecting features such that their interaction matrix is always of full rank, that is, by considering a fourth point when the original target is a set of three points, by considering two circles or a circle and a point instead of just a circle, or by using s instead of ‖s − s∗‖. This is not completely satisfactory from a scientific point of view, and may not always be possible in practice, for instance when only three points can be extracted from the image.

Redundancy

When the redundancy framework can be applied, that is, when the main task does not control all the robot degrees of freedom (DOF), a secondary objective can be designed to try to avoid the singular configurations [Nelson 95, Marchand 96]. Once again, this method cannot always be used, typically when the task constrains all the robot DOF. We will see that the new projection operator proposed in the second part of this thesis can be used to solve this problem.

Damped least squares

The previous solution is ineffective if the goal is precisely to reach a singular configuration. To deal with this problem, a classical approach in robotics is to use the damped-least-squares inverse [Wampler 86, Nakamura 86, Egeland 90, Chiaverini 97] instead of the Moore-Penrose pseudo-inverse. This method, which artificially increases the lowest singular values of the Jacobian matrix, reduces the effect of the singularity in terms of robustness, but decreases the precision of the control.
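As an illustrative sketch (not the thesis implementation), the behavior of the damped-least-squares inverse can be seen on a scalar "Jacobian" j: the damped inverse j/(j² + μ²) stays bounded as j approaches the singularity, whereas the plain inverse 1/j diverges.

```python
# Illustrative sketch (assumed scalar example, not from the thesis): the
# damped-least-squares inverse of a scalar Jacobian j is j / (j^2 + mu^2),
# versus the plain (pseudo-)inverse 1 / j.  Near the singularity (j -> 0)
# the damped version stays bounded, at the price of accuracy away from it.

def pinv(j):
    """Moore-Penrose pseudo-inverse of a scalar: 1/j, or 0 when j == 0."""
    return 0.0 if j == 0 else 1.0 / j

def dls_inv(j, mu):
    """Damped-least-squares inverse of a scalar, with damping factor mu."""
    return j / (j * j + mu * mu)

error = 0.1          # task error e
mu = 0.05            # damping factor
for j in (1.0, 0.1, 0.01, 0.0):     # Jacobian approaching singularity
    v_pinv = pinv(j) * error
    v_dls = dls_inv(j, mu) * error
    print(f"j={j:5.2f}  pseudo-inverse: {v_pinv:9.3f}  damped: {v_dls:8.4f}")
```

The damped output is bounded by e/(2μ) (reached at j = μ), which is exactly the trade-off described above: bounded velocities near the singularity, reduced precision elsewhere.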

Regularization

Finally, a regularization technique has recently been introduced in [Fruchard 06]. It also allows reducing the effect of the singularity, but at the price of decreasing the convergence speed, which is inefficient if the task consists in tracking a moving target.

4.2 Analysis of classical control schemes

Again, let s, s∗ ∈ R^k be the vectors of the current and desired sets of k selected visual features, and v ∈ R^6 the instantaneous velocity of the camera. As before, the classical form of the control laws is given by:

v = −λ L_s^+ (s − s∗)    (4.1)

In this chapter, we are interested in the case where the interaction matrix is singular at the desired configuration, that is, when L_{s∗} is singular. In that case, control law D is of course inefficient since it is subject to numerous local minima. Indeed, all configurations such that

(s − s∗) ∈ N(L_{s∗}^+)

(where N(A) denotes the null space of matrix A) correspond to a local minimum, and such configurations are generally numerous since N(L_{s∗}^+) is at least of dimension 1. If the initial error s_i − s∗ is large and L_{s_i} is not singular, control law C can be used at the beginning of the servo but, as soon as s − s∗ becomes small, the system becomes unstable since it nears the singularity. The same comment unfortunately applies to control law A, even if some smoothing of the singularity could be hoped for thanks to the use of the constant matrix L_{s∗} in A. This simple analysis of the behavior of control laws D, C, and A will be confirmed in Section 4.5 through experimental results.

To avoid the instability near the singularity of all control schemes based on the interaction matrix only, we could think of using second order schemes, such as the one based on the classical Newton minimization method. It is given by (see Section 2.5.4):

v = −λ K_1^+ L_s^⊤ (s − s∗)    (4.2)


where

K_1 = L_s^⊤ L_s + \sum_{i=1}^{k} H_{s_i} (s_i − s_i^∗)

H_{s_i} being the Hessian matrix of the i-th component of s. Unfortunately, the convergence domain of this control scheme is generally very limited, due to the fact that the Hessian is not always positive definite (see Section 4.4.2). Furthermore, all configurations such that L_s is singular and

(s − s∗) ∈ N(L_s^⊤)

correspond to a local minimum, since we then have L_s^⊤ (s − s∗) = 0, hence v = 0. The same of course holds for the basic control scheme

v = −λ L_s^⊤ (s − s∗)

based on the steepest descent, usually named the gradient method.
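A tiny numerical check (a hypothetical 2×2 example, not taken from the thesis) makes this local-minimum behavior concrete: with a singular interaction matrix L, the gradient control vanishes whenever the error lies in the null space of L⊤, although s ≠ s∗.

```python
# Toy 2x2 example (assumed values, not from the thesis): the gradient control
# v = -lambda * L^T (s - s*) is exactly zero when the error lies in N(L^T),
# i.e. the scheme stops at a local minimum even though the error is nonzero.

def mat_t_vec(L, e):
    """Multiply the transpose of a 2x2 matrix L by a 2-vector e."""
    return [L[0][0] * e[0] + L[1][0] * e[1],
            L[0][1] * e[0] + L[1][1] * e[1]]

L = [[1.0, 2.0],
     [2.0, 4.0]]          # rank 1: second row = 2 * first row
lam = 0.5

err_null = [2.0, -1.0]    # lies in N(L^T): L^T err = 0
err_ok = [1.0, 0.0]       # generic error

v_null = [-lam * c for c in mat_t_vec(L, err_null)]
v_ok = [-lam * c for c in mat_t_vec(L, err_ok)]
print("v for error in N(L^T):", v_null)   # zero velocity -> local minimum
print("v for generic error:  ", v_ok)
```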

Since none of the control schemes described above is satisfactory when trying to reach a singular configuration, we propose in the next section new control schemes based on Halley's method.

4.3 New second order control schemes

4.3.1 Halley’s method for the scalar case

Halley’s method is well known in the numerical analysis community for finding a root of a function f(x) (that is, for finding x_r such that f(x_r) = 0) [Ortega 00]. As the classical gradient and Newton methods, it is an iterative algorithm that can be applied if the function f is continuous and twice differentiable. It is based on the second order Taylor expansion of f:

f(x) = f(x_n) + f′(x_n)(x − x_n) + ½ f″(x_n)(x − x_n)²    (4.3)

where x_n is the estimate of x_r at iteration n of the algorithm. Let x_{n+1} be the root of f(x) = 0. It can be written:

x_{n+1} = x_n − f(x_n) / [ f′(x_n) + ½ f″(x_n)(x_{n+1} − x_n) ]

This equation cannot be used directly since x_{n+1} appears on both its left and right sides. However, using on the right side the result of the Newton-Raphson step (which is easily obtained by solving the first order Taylor expansion f(x) = f(x_n) + f′(x_n)(x − x_n)), that is

x_{n+1} = x_n − f(x_n) / f′(x_n)

we obtain

x_{n+1} = x_n − 2 f(x_n) f′(x_n) / ( 2 [f′(x_n)]² − f(x_n) f″(x_n) )    (4.4)


which is known as Halley’s rational formula. We can note that, thanks to the term f(x_n) f″(x_n), there is no inversion problem when f′(x_n) = 0, as long as f(x_n) ≠ 0 and f″(x_n) ≠ 0. We now apply exactly the same reasoning to the case where x and f(x) are not scalars but vectors.
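The scalar iterations above can be sketched in a few lines. The example function f(x) = x³ − 2 is a hypothetical choice for illustration, not taken from the thesis; the sketch compares the Newton-Raphson step with Halley's rational formula (4.4).

```python
# Sketch of the scalar iterations (assumed example f(x) = x**3 - 2, not from
# the thesis): Newton-Raphson step versus Halley's rational formula (4.4).

def newton_step(f, df, x):
    # x_{n+1} = x_n - f(x_n) / f'(x_n)
    return x - f(x) / df(x)

def halley_step(f, df, ddf, x):
    # x_{n+1} = x_n - 2 f f' / (2 f'^2 - f f'')
    return x - 2.0 * f(x) * df(x) / (2.0 * df(x) ** 2 - f(x) * ddf(x))

f = lambda x: x ** 3 - 2.0
df = lambda x: 3.0 * x ** 2
ddf = lambda x: 6.0 * x

x_newton = x_halley = 1.0
for _ in range(6):
    x_newton = newton_step(f, df, x_newton)
    x_halley = halley_step(f, df, ddf, x_halley)
print("Newton:", x_newton, " Halley:", x_halley)   # both near 2 ** (1/3)
```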

4.3.2 Modeling the new control schemes

As before, let p and p∗ be the parameters representing the current and desired camera poses, and s : R³ × SO(3) → R^k the mapping from the camera pose space to the feature space. The first order Taylor expansion of s(p) is given by

s∗ = s + J_s Δp

where s∗ = s(p∗), s = s(p), and Δp represents the displacement between p∗ and p. As presented in Chapter 2, we immediately deduce the following control law using the Newton-Raphson method:

v_1 = −λ_1 L_s^+ (s − s∗)    (4.5)

where J_s and L_s are linked by L_s = J_s P, with P defined such that ṗ = P v. Note that control law (4.5) is nothing but the classical control law C.

Let us now consider the second order Taylor expansion of s. It is given by

s∗ = s + K_2 Δp    (4.6)

where matrix K_2 is

\[
K_2 = J_s + \frac{1}{2}
\begin{bmatrix}
\Delta p^\top H_{s_1} \\ \vdots \\ \Delta p^\top H_{s_k}
\end{bmatrix}
\tag{4.7}
\]

Solving (4.6) for Δp, we obtain

Δp = −K_2^+ (s − s∗)    (4.8)

from which we deduce the following control law:

v = −λ K_s^+ (s − s∗)    (4.9)

where the output (4.5) of control law C is used to go from K_2 to K_s:

\[
K_s = L_s - \frac{\lambda_1}{2}
\begin{bmatrix}
(s - s^*)^\top L_s^{+\top} H_{s_1} \\ \vdots \\ (s - s^*)^\top L_s^{+\top} H_{s_k}
\end{bmatrix}
\tag{4.10}
\]


This control law is named K in the following. Let us note that, if L_s is singular, it does not necessarily imply that K_s is singular, thanks to the Hessian part involved in this control scheme. Furthermore, and contrarily to the control law (4.2) based on the Newton method, when L_s is singular, the configurations such that

(s − s∗) ∈ N(L_s^⊤)

do not generally correspond to a local minimum. These are the good points of K. Unfortunately, some bad points also exist. First, K_s may be singular for some configurations where L_s is not singular. Then, for s = s∗, K_s is singular when L_{s∗} is singular (since K_s = L_{s∗} in that case). However, since we never have s = s∗ in practice, due to unavoidable image noise, we never have exactly K_s = L_{s∗}, which makes the use of K appealing when L_{s∗} is singular. Furthermore, near the singularity, the low conditioning of L_s in the first part of K_s is compensated by the high conditioning of L_s^+ in its second part. However, we will see in Section 4.5 that, if the rank of K_s is indeed improved, the sensitivity of the control scheme to image noise is also increased.

Following the same idea as when going from control law C to D, we could think of using:

v = −λ K_{s∗}^+ (s − s∗)    (4.11)

where K_{s∗} is given by:

\[
K_{s^*} = L_{s^*} - \frac{\lambda_1}{2}
\begin{bmatrix}
(s - s^*)^\top L_{s^*}^{+\top} H_{s^*_1} \\ \vdots \\ (s - s^*)^\top L_{s^*}^{+\top} H_{s^*_k}
\end{bmatrix}
\]

However, that is definitely not a good idea, since this control scheme has exactly the same bad property as D: all configurations such that

(s − s∗) ∈ N(L_{s∗}^+)

lead to a local minimum (to check this, just note that in that case (s − s∗)^⊤ L_{s∗}^{+⊤} = 0, which implies K_{s∗} = L_{s∗}).

A last control scheme can be obtained by considering each feature independently in the second part of K_s, that is, by using

v_i = −λ_1 L_{s_i}^+ (s_i − s_i^∗)    (4.12)

instead of (4.5). In that case we obtain

v = −λ K_i^+ (s − s∗)    (4.13)

where

\[
K_i = L_s - \frac{\lambda_1}{2}
\begin{bmatrix}
(s_1 - s_1^*)^\top L_{s_1}^{+\top} H_{s_1} \\ \vdots \\ (s_k - s_k^*)^\top L_{s_k}^{+\top} H_{s_k}
\end{bmatrix}
\]

This control law will be named Ki in the following. Even if we are currently unable to give any theoretical explanation, we will see in Section 4.5 that this control law allows improving the accuracy of the positioning in a singular configuration.

4.4 Hessian Matrices of an image point

In the experiments presented in the next section, we will compare the behavior of the control schemes presented in this chapter in the case of a target composed of three points. The analytical forms of the Hessian matrices H_x and H_y of the coordinates (x, y) of an image point are thus needed. Let us note that these matrices have already been used in [Lapresté 04], but the analytical form given in that paper unfortunately contains a few typos.

4.4.1 Modeling

We recall that an image point with coordinates (x, y) results from the perspective projection of a 3D point, such that x = X/Z and y = Y/Z, where (X, Y, Z) are the coordinates of this 3D point expressed in the camera frame. We also recall that the velocity (ẋ, ẏ) of an image point is linked to the camera velocity v = (v_x, v_y, v_z, ω_x, ω_y, ω_z) through the well known equations:

ẋ = L_x v ,  ẏ = L_y v    (4.14)

where the interaction matrices L_x and L_y are given by:

\[
L_x = \begin{bmatrix} -\frac{1}{Z} & 0 & \frac{x}{Z} & xy & -(1+x^2) & y \end{bmatrix}
\tag{4.15}
\]

\[
L_y = \begin{bmatrix} 0 & -\frac{1}{Z} & \frac{y}{Z} & 1+y^2 & -xy & -x \end{bmatrix}
\tag{4.16}
\]
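As a hedged illustration (assumed normalized image coordinates, as in the text), the interaction matrices (4.15)-(4.16) can be built and used to predict the image motion of a point for a given camera velocity:

```python
# Illustrative sketch (not the thesis code): build the interaction matrices
# L_x and L_y of an image point (x, y) at depth Z, following (4.15)-(4.16),
# and predict the image velocity for a camera velocity v.

def interaction_point(x, y, Z):
    Lx = [-1.0 / Z, 0.0, x / Z, x * y, -(1.0 + x * x), y]
    Ly = [0.0, -1.0 / Z, y / Z, 1.0 + y * y, -x * y, -x]
    return Lx, Ly

def dot(row, v):
    return sum(a * b for a, b in zip(row, v))

x, y, Z = 0.1, -0.2, 0.5
Lx, Ly = interaction_point(x, y, Z)
v = [0.0, 0.0, 0.1, 0.0, 0.0, 0.0]   # pure translation along the optical axis
print("xdot =", dot(Lx, v), " ydot =", dot(Ly, v))
```

For this velocity, only the x/Z and y/Z terms contribute, so the point drifts radially in the image, as expected for a motion along the optical axis.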

The Hessian matrices H_x and H_y can easily be determined by differentiating (4.14). Indeed, for any feature s, we have:

\[
\ddot{s} = L_s \dot{v} + v^\top H_s v
\tag{4.17}
\]

where H_s is a symmetric matrix. Using (4.14), (4.15) and (4.16), we obtain:

\[
\ddot{x} = L_x \dot{v} + \frac{\dot{Z}}{Z^2} v_x + \frac{\dot{x}Z - \dot{Z}x}{Z^2} v_z + (\dot{x}y + x\dot{y})\,\omega_x - 2x\dot{x}\,\omega_y + \dot{y}\,\omega_z
\]
\[
\ddot{y} = L_y \dot{v} + \frac{\dot{Z}}{Z^2} v_y + \frac{\dot{y}Z - \dot{Z}y}{Z^2} v_z + 2y\dot{y}\,\omega_x - (\dot{x}y + x\dot{y})\,\omega_y - \dot{x}\,\omega_z
\]

By substituting (4.14) for ẋ and ẏ, and knowing that:

\[
\dot{Z} = -v_z - yZ\omega_x + xZ\omega_y
\]


we obtain after simple developments:

\begin{align*}
\ddot{x} = L_x\dot{v} &- \frac{2}{Z^2} v_x v_z - \frac{2y}{Z} v_x\omega_x + \frac{3x}{Z} v_x\omega_y - \frac{x}{Z} v_y\omega_x - \frac{1}{Z} v_y\omega_z \\
&+ \frac{2x}{Z^2} v_z^2 + \frac{4xy}{Z} v_z\omega_x - \frac{1+4x^2}{Z} v_z\omega_y + \frac{2y}{Z} v_z\omega_z \\
&+ x(1+2y^2)\,\omega_x^2 - y(1+4x^2)\,\omega_x\omega_y + (1+2y^2-x^2)\,\omega_x\omega_z \\
&+ 2x(1+x^2)\,\omega_y^2 - 3xy\,\omega_y\omega_z - x\,\omega_z^2
\end{align*}

\begin{align*}
\ddot{y} = L_y\dot{v} &+ \frac{y}{Z} v_x\omega_y + \frac{1}{Z} v_x\omega_z - \frac{2}{Z^2} v_y v_z - \frac{3y}{Z} v_y\omega_x + \frac{2x}{Z} v_y\omega_y \\
&+ \frac{2y}{Z^2} v_z^2 + \frac{1+4y^2}{Z} v_z\omega_x - \frac{4xy}{Z} v_z\omega_y - \frac{2x}{Z} v_z\omega_z \\
&+ 2y(1+y^2)\,\omega_x^2 - x(1+4y^2)\,\omega_x\omega_y + y(1+2x^2)\,\omega_y^2 \\
&+ (1+2x^2-y^2)\,\omega_y\omega_z - y\,\omega_z^2
\end{align*}

from which we deduce, by identification with (4.17):

\[
H_x = \begin{bmatrix}
0 & 0 & -\frac{1}{Z^2} & -\frac{y}{Z} & \frac{3x}{2Z} & 0 \\
0 & 0 & 0 & -\frac{x}{2Z} & 0 & -\frac{1}{2Z} \\
-\frac{1}{Z^2} & 0 & \frac{2x}{Z^2} & \frac{2xy}{Z} & -\frac{1+4x^2}{2Z} & \frac{y}{Z} \\
-\frac{y}{Z} & -\frac{x}{2Z} & \frac{2xy}{Z} & x(1+2y^2) & -y\left(\tfrac{1}{2}+2x^2\right) & \frac{1-x^2+2y^2}{2} \\
\frac{3x}{2Z} & 0 & -\frac{1+4x^2}{2Z} & -y\left(\tfrac{1}{2}+2x^2\right) & 2x(1+x^2) & -\frac{3xy}{2} \\
0 & -\frac{1}{2Z} & \frac{y}{Z} & \frac{1-x^2+2y^2}{2} & -\frac{3xy}{2} & -x
\end{bmatrix}
\]

and

\[
H_y = \begin{bmatrix}
0 & 0 & 0 & 0 & \frac{y}{2Z} & \frac{1}{2Z} \\
0 & 0 & -\frac{1}{Z^2} & -\frac{3y}{2Z} & \frac{x}{Z} & 0 \\
0 & -\frac{1}{Z^2} & \frac{2y}{Z^2} & \frac{1+4y^2}{2Z} & -\frac{2xy}{Z} & -\frac{x}{Z} \\
0 & -\frac{3y}{2Z} & \frac{1+4y^2}{2Z} & 2y(1+y^2) & -x\left(\tfrac{1}{2}+2y^2\right) & -\frac{3xy}{2} \\
\frac{y}{2Z} & \frac{x}{Z} & -\frac{2xy}{Z} & -x\left(\tfrac{1}{2}+2y^2\right) & y(1+2x^2) & \frac{1-y^2+2x^2}{2} \\
\frac{1}{2Z} & 0 & -\frac{x}{Z} & -\frac{3xy}{2} & \frac{1-y^2+2x^2}{2} & -y
\end{bmatrix}
\]

4.4.2 Positiveness of H_x and H_y

Using the determinant test to study the positiveness of the Hessian matrices H_x and H_y, we find that the leading principal minors of H_x and H_y are M_x = (0, 0, 0, x²/(4Z⁶), x³(1+x²)/Z⁶, 0) and M_y = (0, 0, 0, 0, y³(1+y²)/Z⁶, 0) respectively, where M_s[i] = |H_s[1..i, 1..i]|. This means that the necessary and sufficient condition for both H_x and H_y to be positive semi-definite is that x and y are positive, which is of course not always satisfied. This may explain the fact that Newton and Halley's methods based on image point coordinates have a small convergence domain.
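The determinant test above can be checked numerically. The sketch below (illustrative, not the thesis code, using arbitrary values of x, y, Z) builds H_x from its closed form and computes its leading principal minors; in particular the fourth minor should equal x²/(4Z⁶).

```python
# Numerical cross-check (illustrative sketch, not the thesis code): build the
# closed-form Hessian H_x of an image point and compute its leading principal
# minors; the first three vanish and the fourth equals x**2 / (4 * Z**6).

def hessian_x(x, y, Z):
    return [
        [0.0, 0.0, -1/Z**2, -y/Z, 3*x/(2*Z), 0.0],
        [0.0, 0.0, 0.0, -x/(2*Z), 0.0, -1/(2*Z)],
        [-1/Z**2, 0.0, 2*x/Z**2, 2*x*y/Z, -(1 + 4*x*x)/(2*Z), y/Z],
        [-y/Z, -x/(2*Z), 2*x*y/Z, x*(1 + 2*y*y), -y*(0.5 + 2*x*x), (1 - x*x + 2*y*y)/2],
        [3*x/(2*Z), 0.0, -(1 + 4*x*x)/(2*Z), -y*(0.5 + 2*x*x), 2*x*(1 + x*x), -3*x*y/2],
        [0.0, -1/(2*Z), y/Z, (1 - x*x + 2*y*y)/2, -3*x*y/2, -x],
    ]

def det(M):
    """Determinant by Laplace expansion along the first row (fine for 6x6)."""
    if len(M) == 1:
        return M[0][0]
    return sum((-1) ** j * M[0][j] * det([r[:j] + r[j+1:] for r in M[1:]])
               for j in range(len(M)))

x, y, Z = 0.2, 0.1, 0.5
H = hessian_x(x, y, Z)
minors = [det([row[:k] for row in H[:k]]) for k in range(1, 7)]
print("leading principal minors of Hx:", minors)
```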


4.5 Results

The experimental results presented in this section have been obtained on a 6 DOF eye-in-hand system (see Fig. 3.8). As before, all the control schemes have been easily implemented thanks to the open source ViSP library [Marchand 05b]. The task consists of positioning the camera with respect to a target composed of three points (in practice, three white dots on a black background, to avoid any image processing problem) using the Cartesian coordinates of the perspective projection of these points in the image. The three points form a right isosceles triangle whose side lengths are equal to 0.06 m, 0.06 m and 0.06 × √2 m. The

interaction matrix is thus of dimension 6×6 and of full rank 6, except at the singularities exhibited in [Michel 93], in which case it is of rank 5. We recall that the singularities occur when the optical center of the camera belongs to the surface of the right circular cylinder whose basis is defined by the circle to which the three points belong. From this general result, it is easy to see that if one of the three points appears at the principal point in the image (which corresponds to the image of the optical axis), then the interaction matrix is singular.

The desired pose between the camera and the triangle has thus been chosen such that they are parallel (at a distance of 0.5 m) and one point appears at the principal point (see Fig. 4.3). The initial pose has been chosen very near the desired one, that is, p_i = (0.0022, 0.001, 0.501, 0.8, 0.4, 0.6), where the first three components represent the translation expressed in meters, and the last three the rotation expressed in degrees. We are indeed interested in the behavior near the singularity.

Let us finally note that the depths of the points, which appear in the translational part of the interaction matrix and in the Hessian matrices, are estimated at each iteration of the control scheme using a classical pose estimation method. Let us also note that the gain λ_1 involved in K and Ki has been set to 1, and the gain λ involved in all the control schemes has been set to 0.5. This value has voluntarily been chosen small to show the effects of the singularity and to avoid any instability due to a too high value of λ. The very large number of iterations in each experiment is thus not significant. As for the singular value decomposition used to compute the pseudo-inverses involved in the different control schemes, the condition number threshold has been set to 0.0001. We recall that the condition number considered here is the ratio between the minimal and the maximal singular values of a matrix; the threshold is used to compute its rank, that is, to decide whether a singular value is considered zero or not. This value has been chosen so as not to damage the robot, by forbidding high values in the outputs of the control scheme.

The results obtained for control laws D, C, M, K and Ki are given in Figs. 4.3, 4.4, 4.5, 4.6 and 4.7 respectively. As expected, control law D is always of rank 5. It is thus not surprising that it reaches a local minimum. Control law C is of rank 6 at the beginning of the servo, which allows the system to near the desired position. It is then of rank 5 due to the high condition number threshold, except at some iterations where it becomes again of rank 6 due to image noise, producing high robot velocities at these iterations. Control law M does not have a very satisfactory behavior: even if it is of rank 6 at the beginning of the servo, it fails,



Figure 4.3 – Experimental results of control law D. First line, from left to right: initial and desired images, visual feature errors, and rank of the control matrix; second line: translational (cm/s) and rotational (dg/s) components of the control law; third line: translational (mm) and rotational (dg) error components.

as D, in a local minimum. As for K, it is almost always of rank 6, as expected, but it also reaches a local minimum and is quite unstable, precisely due to the fact that it is of rank 6. Finally, control law Ki provides the best behavior, similar to that of C, but with a better positioning accuracy and less noise.

4.6 Conclusion

In this chapter, we have been interested in the difficult problem of reaching a singular configuration in visual servoing. Without any surprise, all classical control schemes have been shown to be unsatisfactory. Control schemes based on Halley's second order minimization method have been proposed to try to improve the behavior of the system near the desired singular position. The experimental results obtained have shown that it is possible to improve the accuracy of the positioning using one of these control schemes.


Figure 4.4 – Experimental results of control law C.


Figure 4.5 – Experimental results of control law M



Figure 4.6 – Experimental results of control law K


Figure 4.7 – Experimental results of control law Ki


Part II

Redundancy formalism


Chapter 5

State of the art in redundancy

This chapter presents the state of the art of the redundancy framework in robotics, as well as the most common strategies used for solving the problem of robot joint limits avoidance.

5.1 Redundancy

Redundant robots attract special attention due to their dexterity and versatility for addressing sophisticated tasks, particularly for underactuated systems [Shkolnik 07]. There are several types of redundancy for a robot: redundancy with respect to the task, when the number of independent parameters required by the task is less than the number of DOF of the robot, and redundancy with respect to the end effector, when the manipulator has more DOF than those required to place its end effector at a given position and orientation within the task space [Samson 91] [Khatib 96] [Siciliano 91] [Hanafusa 81]. When a robot has more DOF than required to execute a given task, the excess DOF can be profitably used for performing other subtasks, such as singularity and obstacle avoidance, keeping the joints within their limits, torque optimization, and energy minimization. However, redundant robots pose some difficulty in solving the inverse kinematics problem, since an infinite number of joint configurations correspond to a given end-effector position and orientation.

In robot control, actions are generally operated in the joint space. However, in most practical applications, and as mentioned in Part I, it is desirable to design a task space controller and to allow the robot to take other constraints into account. The solution of this problem requires the joint variables to satisfy not only the original task, but also the constraints. This makes the use of a redundant system attractive [Siciliano 91].

The use of redundancy can be formulated in the framework of tasks with an order of priority.

When the main task is defined as following a given trajectory of the end effector while avoiding joint limits [Euler 89], singular configurations [Paulin 04], occlusions [Marchand 98], and/or obstacles in the workspace [Choi 00] [Perdereau 02], trajectory following has the first priority, while joint limits, singularity and obstacle avoidance are given the second priority [Nakamura 87]. In visual servoing applications, redundancy has been exploited, for example, to avoid joint limits or occlusions [Chaumette 01] [Marchand 96] [Mansard 09a].

5.2 Projection operators

5.2.1 Classical projection operator Pe

Let e ∈ R^k be the main task function, where k is the number of its components. The classical approach, which tries to ensure an exponential decrease of all components of e while considering a secondary task, leads to the following control scheme [Liegeois 77]:

q̇ = q̇_e + P_e g    (5.1)
  = J_e^+ ė∗ + (I_n − J_e^+ J_e) g    (5.2)

where q̇ ∈ R^n is the robot joint velocity sent as input to the low level robot controller, J_e ∈ R^{k×n} is the task Jacobian defined such that ė = J_e q̇, n is the number of robot DOF, J_e^+ is the Moore-Penrose pseudo-inverse of J_e, ė∗ is usually set to −λe to obtain an exponential decrease of each component of e, g represents the motion induced by the secondary task, and P_e = (I_n − J_e^+ J_e) is a projection operator onto the null space of J_e, so that g is realized at best under the constraint that it does not perturb the regulation of e to 0 (we have J_e P_e g = 0, ∀g). This projection operator may be too constraining: it has no component available when the error e constrains all the n DOF of the system (in that case P_e = 0), and only n − r (where r is the rank of J_e) in the general case.
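The classical operator can be illustrated on a toy example (a hypothetical 1×3 task Jacobian, not taken from the thesis): P_e = I − J_e^+ J_e projects any secondary motion g onto the null space of J_e, so that J_e P_e g = 0 and the main task is not perturbed.

```python
# Toy sketch (hypothetical 1x3 task Jacobian, not from the thesis): the
# classical operator P_e = I - J^+ J projects a secondary motion g onto the
# null space of J_e, so the main task is not perturbed (J_e P_e g = 0).

def pinv_row(J):
    """Pseudo-inverse of a full-rank row vector J: J^T / (J J^T)."""
    s = sum(a * a for a in J)
    return [a / s for a in J]

J = [1.0, 2.0, 2.0]          # main task constrains 1 of the 3 DOF
Jp = pinv_row(J)             # J^+ (3x1)

# P_e = I_3 - J^+ J
P = [[(1.0 if i == j else 0.0) - Jp[i] * J[j] for j in range(3)]
     for i in range(3)]

g = [0.3, -0.1, 0.7]         # arbitrary secondary motion
Pg = [sum(P[i][j] * g[j] for j in range(3)) for i in range(3)]
print("J_e P_e g =", sum(J[j] * Pg[j] for j in range(3)))   # ~0
```

Here n − r = 2 DOF remain available for the secondary task; if J_e were of full rank 3, P_e would vanish and no secondary motion could be realized, which is exactly the limitation discussed above.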

5.2.2 Bidirectional projection operator Pz

A nonlinear projection operator has been recently proposed in [Mansard 09a] to enlarge the

free space on which the gradient is projected. This is achieved by decreasing the error of

the system when the secondary task goes in the same direction than the main task. This

operator is built such that when q = J+e∗ + q2, where q2 = Pg, q2 respects the condition

∇L�Jq2 ≤ 0, where ∇ = ∂∂e

and L is the Lyapunov function associated to q = J+e∗ such

that L = ∂∂ee < 0 . By considering the singular values decomposition of J = UΣV�, where

U is a basis of the task function space, V is a basis of the joint space, Σ = [Δσ 0], and Δσ

is the diagonal matrix whose coefficients are the m singular values of J, the condition can be

written as ∇L�Σ˜q2 ≤ 0, where ∇L = U�∇L and ˜q2 = V�q2. This leads to a restricted

condition:

∀i ∈ [1 · · ·m], vσ˜q2i≤ 0 (5.3)

Page 93: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

5.2 Projection operators 83

where L = (v1, · · · , vm).

For a given g, a q̇_2 that respects condition (5.3) is built by keeping the components of g that respect (5.3) and nullifying the other components. This leads to the projection operator:

\[
P_g = V \begin{bmatrix} p_1(g) & & 0 \\ & \ddots & \\ 0 & & p_n(g) \end{bmatrix} V^\top
\tag{5.4}
\]

where p_i(g) is defined by:

\[
p_i(g) = \begin{cases}
1 & \text{if } i > m \text{ or } \tilde{g}_i = 0 \\
1 & \text{if } \tilde{g}_i \text{ and } v_i\sigma_i \text{ have opposite signs} \\
0 & \text{if } \tilde{g}_i \text{ and } v_i\sigma_i \text{ have the same sign}
\end{cases}
\tag{5.5}
\]

where g̃ = V^⊤ g. Finally, to realize the task e while respecting the condition, the control law is given by:

q̇ = J^+ ė∗ + P_g g    (5.6)

The second term P_g g has the proper sign; however, it can become arbitrarily large when the main task converges to zero, which may introduce oscillations. This problem is corrected by computing the projection operator from the second order Taylor expansion, which introduces an upper bound on the value of the secondary term. In this case, when considering the norm of the error as Lyapunov function L = ½ e^⊤ e, the condition to be respected is:

∇L^⊤ J Δq_2 + ½ Δq_2^⊤ J^⊤ J Δq_2 < 0    (5.7)

where ∇L^⊤ = (e + Δe∗)^⊤.

When introducing the SVD bases and performing some simplifications, the condition can be written as: ∀i ∈ [1 … m], 2(e_i + Δe∗_i) σ_i Δq̃_{2i} + σ_i² Δq̃_{2i}² < 0, so that Δq_2 is a possible secondary motion belonging to the free space of the main task if and only if:

∀i ∈ [1 … m]:  Δq̃_{2i} = 0,
  or  0 < Δq̃_{2i} ≤ −(2/σ_i)(e_i + Δe∗_i),
  or  −(2/σ_i)(e_i + Δe∗_i) ≤ Δq̃_{2i} < 0    (5.8)

In order to ensure that q̇_2 respects condition (5.8), the projection operator is defined by:

\[
P_g = V \begin{bmatrix} p_1(g) & & 0 \\ & \ddots & \\ 0 & & p_n(g) \end{bmatrix} V^\top
\tag{5.9}
\]


where

\[
p_i(g) = \begin{cases}
1 & \text{if } i > m \text{ or } \tilde{g}_i = 0 \\
1 & \text{if } 0 < \tilde{g}_i \le -\tfrac{2}{\sigma_i}(e_i + \Delta e^*_i) \\
1 & \text{if } -\tfrac{2}{\sigma_i}(e_i + \Delta e^*_i) \le \tilde{g}_i < 0 \\
0 & \text{if } \sigma_i \tilde{g}_i > 0 \text{ and } e_i + \Delta e^*_i < 0 \\
0 & \text{if } \sigma_i \tilde{g}_i < 0 \text{ and } e_i + \Delta e^*_i > 0 \\
-\dfrac{2(e_i + \Delta e^*_i)}{\sigma_i \tilde{g}_i} & \text{otherwise}
\end{cases}
\tag{5.10}
\]

This projection operator (5.10), obtained by considering the robot as a discrete system, allows avoiding the oscillations that appear when the projection operator (5.5) is used. Finally, let us note that, even if this projection operator improves the performance of the system, the number of DOF can be insufficient to realize the constraints. We will see in the next chapter that the new projection operator we propose can be used even if all DOF are used by the main task.

5.3 Redundancy-based task-priority formalism

Various frameworks have been proposed to manage the redundancy among two or several tasks and to control their levels of priority: classical and efficient formalisms [Hanafusa 81] [Baerlocher 98] [Lenarcic 98]; inverse-based and transpose-based projection methods [Hanafusa 81] [Chiacchio 91] [Baerlocher 98]; successive and augmented projections [Siciliano 91] [Baerlocher 98] [Mansard 04] [Antonelli 09]; and kinematic and dynamic controllers [Siciliano 91] [Hanafusa 81]. In the following, we present a review of these issues.

5.3.1 Inverse-based projection methods (successive, augmented)

For tasks with an order of priority, the problem is formulated using the kinematic relationship between the joint variables q ∈ R^n and the manipulation variables [Liegeois 77]. As illustrated in Fig. 5.1, the final solution includes a homogeneous term belonging to the null space of J_1.

Successive inverse-based projections

In [Mansard 04], several approaches were discussed in order to propose suitable ones that enable stacking n different tasks e_1, e_2, …, e_n using redundancy. In successive inverse-based projections, each new task is projected onto the null space of the previous task, and the composed task is then projected onto the null space of the tasks preceding it, to get:

\[
e = e_1 + P_1 e_2 + P_1 P_2 e_3 + \ldots + \Big(\prod_{i=1}^{n-1} P_i\Big) e_n
\tag{5.11}
\]

The null space projectors are not commutative, and the solution obtained by (5.11) may lead to conservative stability conditions. Since P_1 … P_{n−1} e_n is not in the null space of e_2, …, e_{n−1}, the low priority task e_n may modify the tasks e_2, …, e_{n−1} of higher priority. This stability problem can be solved by the augmented method or, better, by the intersection method [Antonelli 09].

Figure 5.1 – Two priority level architecture

Augmented inverse-based projections

In this approach, the generic task is projected onto the null space of the task obtained by considering the augmented Jacobian of all the higher-priority ones. By assuming that e_{1..n} is a task that realizes the first n tasks while respecting their priorities, the orthogonal projection operator P_{1..n} is given by [Baerlocher 98]:

P_{1..n} = I - J_{1..n}^{+} J_{1..n} \qquad (5.12)

where J_{1..n} is the Jacobian of e_{1..n} defined by:

J_{1..n} = \frac{\partial e_{1..n}}{\partial q} = \frac{\partial (e_1 + P_1 e_2 + \dots + P_{1..n-1} e_n)}{\partial q} \qquad (5.13)

The approximation J_{1..n} = J_1 + P_1 J_2 + \dots + P_{1..n-1} J_n is used to compute P_{1..n}, by assuming that \partial P_{1..n} / \partial q = 0, since this term is difficult to compute. This approximation, however, does not guarantee that the error of the first n tasks remains zero when the control input due to e_{n+1} is large.

Intersection inverse-based projection method

To ensure that the motion is applied only if it lies in the null space of all n tasks, the new task e_{n+1} is projected onto the space N_{1..n}, defined as the intersection of the null spaces of


all the n previous tasks. N_{1..n} is given by:

N_{1..n} = \bigcap_{k=1}^{n} \mathrm{Null}(J_k) \qquad (5.14)

The null space can be computed by stacking the Jacobians of the n tasks:

N_{1..n} = \mathrm{Null}\begin{pmatrix} J_1 \\ \vdots \\ J_n \end{pmatrix} \qquad (5.15)
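As an illustration, the intersection of (5.14)-(5.15) can be obtained numerically from the singular value decomposition of the stacked Jacobian. The following numpy sketch is illustrative only (the helper name, tolerance, and toy Jacobians are assumptions, not taken from the thesis):

```python
import numpy as np

def null_space_basis(J, tol=1e-10):
    """Orthonormal basis of Null(J), extracted from the SVD of J."""
    _, s, Vt = np.linalg.svd(J)
    rank = int(np.sum(s > tol))
    return Vt[rank:].T  # columns span the null space

# Intersection of the null spaces of J1..Jn = null space of the stacked Jacobian
J1 = np.array([[1.0, 0.0, 0.0]])
J2 = np.array([[0.0, 1.0, 0.0]])
N = null_space_basis(np.vstack([J1, J2]))

# Any vector in N is annihilated by both task Jacobians
assert np.allclose(J1 @ N, 0) and np.allclose(J2 @ N, 0)
```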

5.3.2 Transpose-based projection methods (successive, augmented)

In [Chiacchio 91], instead of using inverse kinematics schemes based on the Jacobian pseudo-inverse, a transpose-based task-priority redundancy resolution has been proposed for two tasks, in which the Jacobian transpose is used:

\dot{q} = J_1^{\top} e_1^{*} + P_1 J_2^{\top} e_2 \qquad (5.16)

Successive transpose-based projections

In [Antonelli 09], the transpose-based projection approach is generalized by following the same guidelines previously mentioned for the inverse-based projection approaches, i.e., by successively projecting the lower-priority tasks onto the null space of the higher-priority ones:

\dot{q} = J_1^{\top} e_1 + P_1 J_2^{\top} e_2 + P_1 P_2 J_3^{\top} e_3 + \dots + \left(\prod_{i=1}^{n-1} P_i\right) J_n^{\top} e_n \qquad (5.17)

Augmented transpose-based projections

The lower-priority task is projected onto the null space of the augmented Jacobian obtained by stacking all the higher-priority tasks, while the Jacobian transpose is used [Antonelli 09]:

\dot{q} = J_1^{\top} e_1 + P_1 J_2^{\top} e_2 + P_{1\cdots 2} J_3^{\top} e_3 + \dots + P_{1\cdots n-1} J_n^{\top} e_n \qquad (5.18)

5.3.3 Unified formula for redundancy

In [Antonelli 09], four priority-based approaches have been grouped into a unified formula, and their stability analysis has been discussed. For n tasks this unified formula is written as:

\dot{q} = J_1^{\#} e_1 + \bar{P}_1 J_2^{\#} e_2 + \bar{P}_2 J_3^{\#} e_3 + \dots + \bar{P}_{n-1} J_n^{\#} e_n \qquad (5.19)

where \bar{P}_k = P_1 P_2 \cdots P_k for the successive projections and \bar{P}_k = P_{1\cdots k} for the augmented projections, while (#) is replaced by (\top) for the transpose-based methods and by (+) for the inverse-based ones.


5.3.4 Efficient task-priority using the classical operator Pe

Using the classical approach given by (5.2) to handle priority among several tasks may distort the tasks of lower priority. In order to execute the lower-priority tasks more efficiently, an elegant approach has been proposed in [Siciliano 91] [Baerlocher 98]. We refer to this approach as efficient redundancy-based task-priority.

5.3.4.1 Two tasks

Consider two tasks e_1 and e_2, where e_i ∈ R^{m_i}, i = 1, 2, J_i ∈ R^{m_i×n} are their Jacobians such that e_i = J_i \dot{q}_i, and e_1 has priority over e_2. For i = 1, 2 we have:

\dot{q}_i = J_i^{+} e_i \qquad (5.20)

When the robot is controlled using its articular velocity, the general solution of the primary task is:

\dot{q} = J_1^{+} e_1^{*} + P_1 g \qquad (5.21)

Using (5.21) and e_2 = J_2 \dot{q} we get:

e_2 = J_2 J_1^{+} e_1^{*} + J_2 P_1 g \qquad (5.22)

Solving (5.22) for g and injecting the computed g in (5.21), we get:

\dot{q} = J_1^{+} e_1^{*} + P_1 (J_2 P_1)^{+} (e_2^{*} - J_2 J_1^{+} e_1^{*}) \qquad (5.23)

Since P_1 is idempotent and Hermitian, (5.23) becomes:

\dot{q} = J_1^{+} e_1^{*} + (J_2 P_1)^{+} (e_2^{*} - J_2 J_1^{+} e_1^{*}) \qquad (5.24)

Using (5.20) with i = 1, (5.24) is written as:

\dot{q} = \dot{q}_1 + (J_2 P_1)^{+} (e_2^{*} - J_2 \dot{q}_1) \qquad (5.25)

where J_2 P_1 is the limited Jacobian of e_2 and (e_2^{*} - J_2 \dot{q}_1) is the secondary control component after removing the part of e_2 already accomplished by e_1.
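The two-task law (5.25) is straightforward to implement. A minimal numpy sketch follows; the function name and the toy Jacobians are illustrative assumptions:

```python
import numpy as np

def two_task_priority(J1, J2, e1_star, e2_star):
    """Velocity command of (5.25): secondary task in the null space of the first."""
    J1p = np.linalg.pinv(J1)
    q1 = J1p @ e1_star                      # primary-task velocity, (5.20)
    P1 = np.eye(J1.shape[1]) - J1p @ J1     # projector onto Null(J1)
    return q1 + np.linalg.pinv(J2 @ P1) @ (e2_star - J2 @ q1)

# 1-DOF primary task on a 3-joint arm leaves 2 DOFs to the secondary task
J1 = np.array([[1.0, 0.0, 0.0]])
J2 = np.array([[0.0, 1.0, 1.0]])
q = two_task_priority(J1, J2, np.array([0.5]), np.array([1.0]))
# the primary task is met exactly; the secondary one as far as J2*P1 allows
assert np.allclose(J1 @ q, [0.5]) and np.allclose(J2 @ q, [1.0])
```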

5.3.4.2 Several tasks

Kinematic controller

Now, we consider l tasks. Let e_1 ∈ R^{m_1}, ..., e_l ∈ R^{m_l} be l tasks where the ith task Jacobian J_i ∈ R^{m_i×n} is defined by e_i = J_i \dot{q} and \sum_{i=1}^{l} \mathrm{rank}(J_i) \leq n. If the ith task is to be executed with lower priority than the (i-1)th task, the general control scheme is given by \dot{q} = \dot{q}_l, where \dot{q}_i is recursively defined by [Baerlocher 98]:

\dot{q}_i =
\begin{cases}
\dot{q}_{i-1} + (J_i P^{A}_{i-1})^{+} (e_i^{*} - J_i \dot{q}_{i-1}) & \text{if } i = 2, \dots, l \\
J_1^{+} e_1^{*} & \text{if } i = 1
\end{cases} \qquad (5.26)


where P^{A}_i = I_n - J^{A+}_i J^{A}_i is the orthogonal projection operator onto the null space of the augmented Jacobian J^{A}_i = (J_1, \dots, J_i), which is recursively defined by:

J^{A}_i = (J^{A}_{i-1}, J_i) \qquad (5.27)

Using the generalized inverse of a partitioned matrix, we get [Cline 64] [Baerlocher 04]:

J^{A+}_i = \left[ (J^{A+}_{i-1} - T_i J_i J^{A+}_{i-1}) \;\; T_i \right] \qquad (5.28)

where

T_i = \bar{J}_i^{+} + X (I_n - \bar{J}_i \bar{J}_i^{+}) \qquad (5.29)

such that

\bar{J}_i = J_i (I_n - J^{A+}_{i-1} J^{A}_{i-1}) \qquad (5.30)

and X is a complex term. P^{A}_i is defined by:

P^{A}_i = I_n - J^{A+}_i J^{A}_i \qquad (5.31)

By injecting (5.28) and (5.27) into (5.31):

\begin{aligned}
P^{A}_i &= I_n - \left[ (J^{A+}_{i-1} - T_i J_i J^{A+}_{i-1}) \;\; T_i \right] (J^{A}_{i-1}, J_i) \\
&= I_n - (J^{A+}_{i-1} J^{A}_{i-1} - T_i J_i J^{A+}_{i-1} J^{A}_{i-1} + T_i J_i) \\
&= I_n - (J^{A+}_{i-1} J^{A}_{i-1} + T_i J_i (I_n - J^{A+}_{i-1} J^{A}_{i-1})) \qquad (5.32)
\end{aligned}

By injecting (5.29) and (5.30) into (5.32) we get:

\begin{aligned}
P^{A}_i &= I_n - (J^{A+}_{i-1} J^{A}_{i-1} + (\bar{J}_i^{+} + X(I_n - \bar{J}_i \bar{J}_i^{+})) \bar{J}_i) \\
&= (I_n - J^{A+}_{i-1} J^{A}_{i-1}) - \bar{J}_i^{+} \bar{J}_i - X(\bar{J}_i - \bar{J}_i \bar{J}_i^{+} \bar{J}_i) \qquad (5.33)
\end{aligned}

By using the property \bar{J}_i = \bar{J}_i \bar{J}_i^{+} \bar{J}_i of the pseudoinverse, we get:

\begin{aligned}
P^{A}_i &= (I_n - J^{A+}_{i-1} J^{A}_{i-1}) - \bar{J}_i^{+} \bar{J}_i \\
&= P^{A}_{i-1} - \bar{J}_i^{+} \bar{J}_i \qquad (5.34)
\end{aligned}

The recursive formula of the augmented classical projection operator is thus written as:

P^{A}_i =
\begin{cases}
P^{A}_{i-1} - (J_i P^{A}_{i-1})^{+} (J_i P^{A}_{i-1}) & \text{if } i = 2, \dots, l \\
I - J_1^{+} J_1 & \text{if } i = 1
\end{cases} \qquad (5.35)

If the classical projection operator is used to manage several tasks, the higher-priority tasks e_1, ..., e_{i-1} should leave some DOFs to the lower-priority tasks e_i, ..., e_l. Usually, this is ensured by selecting e_1, ..., e_{i-1} such that \mathrm{rank}(J^{A}_{i-1}) < n.
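The recursion (5.26) combined with the projector update (5.34) can be sketched in a few lines of numpy; the function name and the two toy tasks below are illustrative assumptions:

```python
import numpy as np

def priority_stack(Js, es):
    """Recursive controller of (5.26) with the projector update (5.34)/(5.35)."""
    n = Js[0].shape[1]
    q = np.linalg.pinv(Js[0]) @ es[0]        # i = 1
    P = np.eye(n) - np.linalg.pinv(Js[0]) @ Js[0]
    for Ji, ei in zip(Js[1:], es[1:]):
        Jbar = Ji @ P                        # limited Jacobian of task i
        q = q + np.linalg.pinv(Jbar) @ (ei - Ji @ q)
        P = P - np.linalg.pinv(Jbar) @ Jbar  # (5.34): P_i = P_{i-1} - Jbar^+ Jbar
    return q

Js = [np.array([[1.0, 0.0, 0.0]]), np.array([[0.0, 1.0, 0.0]])]
es = [np.array([0.2]), np.array([-0.4])]
q = priority_stack(Js, es)
# both 1-DOF tasks fit in the 3 available DOFs, so both are realized exactly
assert np.allclose(Js[0] @ q, es[0]) and np.allclose(Js[1] @ q, es[1])
```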


Dynamic controller

In [Siciliano 91], the recursive form is expressed in terms of joint accelerations. Instead of deriving the joint acceleration solution from the second-order differential kinematics and then substituting it in the robot dynamic model to design a computed torque control, which may lead to unstable internal behavior for a redundant manipulator [Nakamura 87], it is derived from the joint velocity solution by direct differentiation with respect to time [Kazerounian 88] [Siciliano 91]. By taking the derivative of (5.26) we get:

\ddot{q}_i =
\begin{cases}
\ddot{q}_{i-1} + \bar{J}_i^{+} (\ddot{e}_i^{*} - \dot{J}_i \dot{q}_{i-1} - J_i \ddot{q}_{i-1}) + \dot{\bar{J}}_i^{+} (\dot{e}_i^{*} - J_i \dot{q}_{i-1}) & \text{if } i = 2, \dots, l \\
J_1^{+} \ddot{e}_1^{*} + \dot{J}_1^{+} \dot{e}_1^{*} & \text{if } i = 1
\end{cases} \qquad (5.36)

where \bar{J}_i = J_i P^{A}_{i-1} and \dot{\bar{J}}_i^{+} is given by:

\dot{\bar{J}}_i^{+} = -\bar{J}_i^{+} \dot{\bar{J}}_i \bar{J}_i^{+} + (I - \bar{J}_i^{+} \bar{J}_i) \dot{\bar{J}}_i^{\top} (\bar{J}_i \bar{J}_i^{\top})^{-1} \qquad (5.37)

These frameworks, based on equations (5.26) and (5.36), have been studied in [Siciliano 91] through simulations on a snake-like robot with an obstacle avoidance constraint.

Handling singularities

In [Chiaverini 97], singular configurations in the framework of the redundancy-based task-priority resolution technique are addressed for two tasks. The first, the algorithmic singularity, appears in the task-priority formulation (5.26), which needs to compute an inverse of J_i P_{i-1}; this matrix becomes singular when the secondary task cannot be achieved without preventing the primary task from being realized. Close to such singularities, the priority levels may even be inverted. The algorithmic singularity is handled by defining the formulation of \dot{q} as:

\dot{q} = \dot{q}_1 + P_1 J_2^{+} e_2 \qquad (5.38)

Using (5.38), algorithmic singularities are decoupled from the singularities of J_2. On the one hand, when J_2 is singular, it is replaced by its corresponding damped least-squares inverse J_{2,\lambda_2}^{+} defined by:

J_{2,\lambda_2}^{+} = J_2^{\top} (J_2 J_2^{\top} + \lambda_2^2 I)^{-1} \qquad (5.39)

where \lambda_2 ∈ R is the damping factor. On the other hand, when J_1 has a kinematic singularity, J_1^{+} can be replaced by the damped least-squares inverse J_{1,\lambda_1}^{+}. This formulation helps to avoid the singularity; however, the tracking error for the secondary task becomes higher than with (5.25).
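The damped inverse (5.39) can be sketched as follows in numpy; the near-singular Jacobian and the damping value are illustrative:

```python
import numpy as np

def damped_pinv(J, lam):
    """Damped least-squares inverse of (5.39): J^T (J J^T + lam^2 I)^-1."""
    m = J.shape[0]
    return J.T @ np.linalg.inv(J @ J.T + lam**2 * np.eye(m))

# Near a singularity the damped inverse stays bounded while the exact
# pseudo-inverse produces huge joint velocities
J = np.array([[1.0, 0.0], [0.0, 1e-6]])
q_damped = damped_pinv(J, 0.01) @ np.array([0.0, 1.0])
q_exact = np.linalg.pinv(J) @ np.array([0.0, 1.0])
assert np.linalg.norm(q_damped) < np.linalg.norm(q_exact)
```

The price of the bounded command is a tracking error that grows with \lambda, as noted above.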

The kinematic singularity can also be handled by introducing the damping factor \lambda_1 such that [Chiaverini 97]:

\dot{q} = J_{1,\lambda_1}^{+} e_1^{*} + (I - J_{1,\lambda_1}^{+} J_1) J_{2,\lambda_2}^{+} e_2^{*} \qquad (5.40)


In [Baerlocher 04], the algorithmic singularity of the recursive formula (5.26) is handled by:

\dot{q}_i =
\begin{cases}
\dot{q}_{i-1} + (J_i P^{A}_{i-1})^{+}_{\lambda_i} (e_i^{*} - J_i \dot{q}_{i-1}) & \text{if } i = 2, \dots, l \\
J_1^{+} e_1^{*} & \text{if } i = 1
\end{cases} \qquad (5.41)

Finally, the expression that considers algorithmic and kinematic singularities, corresponding to (5.25), is given by:

\dot{q} = J_{1,\lambda_1}^{+} e_1^{*} + (J_2 P_1)^{+}_{\lambda_2} (e_2^{*} - J_2 J_{1,\lambda_1}^{+} e_1^{*}) \qquad (5.42)

However, an additional algorithmic singularity appears when the low-priority task cannot be realized once projected onto the null space of J_1.

5.3.5 Task priority and task sequencing

Both task priority and task sequencing have been validated by applying them to a unicycle-like mobile platform carrying an on-board pan-tilt unit [DeLuca 08a]. Both redundancy resolution schemes show effectiveness against noise and unmodeled effects present in realistic conditions, as well as the ability to correctly execute the regulation of ill-conditioned tasks. With the task-priority method, it is possible to simultaneously regulate all the task variables; however, no additional requirements can be specified for the motion execution. On the other hand, the task-sequencing approach allows such a possibility thanks to the artificial redundancy introduced in the first phase, at the expense of a longer execution time for completing the original task.

In [Mansard 07], a global architecture is proposed to sequence a stack of tasks in order to reach a goal while taking several environment constraints into account. Redundancy and stacking are involved in a framework that consists of four controller layers. The first controller is composed of a stack that orders the active subtasks; the subtask at the bottom of the stack has the highest priority, and the priority decreases as the stack level increases. The second controller ensures that enough DOFs remain free to take the constraints into account and selects the optimal subtask to be removed from the stack. The third controller observes the subtasks removed from the stack and tries to put them back as soon as possible. Finally, the fourth controller ensures the convergence of the global system by solving the deadlocks of the lower-level controllers. Several sets of experiments were realized to validate this architecture. It has been shown that this approach is able to converge to the desired position despite various kinds of constraints.

5.4 Joint limit avoidance

Almost all robots, including robot arms and humanoid robots, have physical joint limits. Avoiding those limits while performing a task is one of the most important problems to be solved by redundancy. In the following, we present the state of the art of joint limit avoidance strategies, including potential field approaches [Khatib 85], the gradient projection method [Liegeois 77], joint position and joint velocity specifications in task-function approaches [Samson 91] [Marchand 96], the redundancy-based iterative approach [Chaumette 01], the manipulability measure approach [Nelson 95], the weighted least-norm solution approach [Chan 95], the general weighted least-norm solution approach [Xiang 10], constrained quadratic optimization methods [Ellekilde 07], the improved damped least-squares approach [Na 08], the fuzzy logic-based approach [Ahson 96], and the neural network-based approach [Assal 06a] [Assal 06b]. In the following, we denote the value of the ith joint j_i by q_i, and the lower and upper limits of joint j_i by q_i^{min} and q_i^{max} respectively.

5.4.1 Potential field approach

In [Khatib 85], an artificial potential field is used to satisfy the robot joint limit constraints. Each joint j_i can be kept within its minimum and maximum boundaries q_i^{min} and q_i^{max} by defining potential barriers whose corresponding potential forces are:

\Gamma_{q_i^{min}} =
\begin{cases}
\eta \left( \dfrac{1}{p_i^{min}} - \dfrac{1}{p_i^{min}(0)} \right) \dfrac{1}{(p_i^{min})^2} & \text{if } p_i^{min} \leq p_i^{min}(0) \\
0 & \text{else}
\end{cases} \qquad (5.43)

\Gamma_{q_i^{max}} =
\begin{cases}
-\eta \left( \dfrac{1}{p_i^{max}} - \dfrac{1}{p_i^{max}(0)} \right) \dfrac{1}{(p_i^{max})^2} & \text{if } p_i^{max} \leq p_i^{max}(0) \\
0 & \text{else}
\end{cases} \qquad (5.44)

where p_i^{min}(0) and p_i^{max}(0) represent the distance of influence of the potential field, and p_i^{min} = q_i - q_i^{min} and p_i^{max} = q_i^{max} - q_i are the distances to the limits. This potential field is integrated with the operational space approach.
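The barrier forces (5.43)-(5.44) can be sketched as follows in numpy; for simplicity this sketch assumes the same influence distance p0 for the lower and upper limits, and the joint values are illustrative:

```python
import numpy as np

def joint_limit_force(q, q_min, q_max, p0, eta=1.0):
    """Repulsive forces of (5.43)-(5.44): active only within distance p0 of a limit."""
    p_min = q - q_min                        # distance to the lower limit
    p_max = q_max - q                        # distance to the upper limit
    f = np.zeros_like(q)
    near_lo = p_min <= p0
    near_hi = p_max <= p0
    f[near_lo] += eta * (1.0 / p_min[near_lo] - 1.0 / p0) / p_min[near_lo] ** 2
    f[near_hi] -= eta * (1.0 / p_max[near_hi] - 1.0 / p0) / p_max[near_hi] ** 2
    return f

q = np.array([0.05, 0.5, 0.97])
f = joint_limit_force(q, np.zeros(3), np.ones(3), p0=0.1)
# pushed away from the lower limit, untouched mid-range, away from the upper one
assert f[0] > 0 and f[1] == 0 and f[2] < 0
```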

5.4.2 Gradient projection method

The most widely used method to take constraints or other criteria into account is the gradient projection method (GPM). This method was originally introduced for non-linear optimization [Rosen 60] [Rosen 61], then applied in robotics [Liegeois 77] [Chang 87] [Nakamura 87] [Dubey 88] [Zghal 90] [Marchand 96] [Luya 01]. It requires defining a function representing the performance criterion and projecting the gradient of this function using the projection operator presented in Section 5.2. The self-motion thus generated allows the robot to perform the additional task as well as possible.

In GPM, the control law used to solve the joint limit avoidance problem is usually given by:

\dot{q} = \dot{q}_1 + \dot{q}_a = \dot{q}_1 + P_e \frac{\partial h_s}{\partial q} \qquad (5.45)

where \dot{q}_a is the articular velocity performing the joint limit avoidance and h_s is a cost function designed to be minimal at a safe configuration and maximal in the vicinity of the joint limits.
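A minimal numpy sketch of the GPM law (5.45) follows; the main task and the cost gradient are illustrative assumptions, and the sign and gain of the gradient term are left to the designer:

```python
import numpy as np

def gpm_control(J1, e1_star, grad_hs):
    """GPM law of (5.45): main-task velocity plus the projected avoidance gradient."""
    J1p = np.linalg.pinv(J1)
    Pe = np.eye(J1.shape[1]) - J1p @ J1     # projector onto Null(J1)
    return J1p @ e1_star + Pe @ grad_hs

J1 = np.array([[1.0, 1.0, 0.0]])
q = gpm_control(J1, np.array([1.0]), grad_hs=np.array([0.3, -0.3, 0.8]))
# the projected avoidance motion does not perturb the main task
assert np.allclose(J1 @ q, [1.0])
```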


Joint position specification

In [Liegeois 77], an active utilization of the redundancy of a robot manipulator by the gradient projection method is presented, and the generalized inverse of the Jacobian matrix is used to give a general solution of the joint velocity problem that keeps the joint values within their allowable limits. The cost function used has a quadratic form (see Fig. 5.2) and is given by:

h_s = \frac{1}{6} \sum_{i=1}^{6} \left( \frac{q_i - q_i^{mean}}{q_i^{mean} - q_i^{max}} \right)^2 \qquad (5.46)

where q_i^{mean} = (q_i^{max} + q_i^{min})/2. When the robot is not redundant, the desired end-effector task becomes impossible to accomplish; moreover, the manipulator may be damaged if the commanded joint angles exceed the joint limits.

Figure 5.2 – Cost function, [Liegeois 77][Samson 91][Marchand 96]

In [Samson 91], the avoidance task aims to keep the joints of the manipulator at their optimal positions. The cost function is designed such that it is minimal when the manipulator reaches a specified desired joint position q_i^{mean}, and maximal in the vicinity of the joint limits. The robot moves as close as possible to that position under the constraint that the visual task is realized:

h_s = \frac{1}{6} \sum_{i=1}^{n} (q_i - q_i^{mean})^2 \qquad (5.47)

The cost functions defined by (5.46) and (5.47) are active as long as (q_i - q_i^{mean}) \neq 0, even if the joint is not in the vicinity of its limits, as can be seen in Fig. 5.2.

In [Chaumette 01], to solve the problem mentioned above, the configuration q of the robot is considered safe with respect to its joint limits if, for all joints j_i, q_i ∈ [q_i^{min_0}, q_i^{max_0}] (see Fig. 5.3), where:

q_i^{min_0} = q_i^{min} + \rho \Delta q_i, \qquad q_i^{max_0} = q_i^{max} - \rho \Delta q_i \qquad (5.48)


define the safe domain of joint j_i, where \rho ∈ [0, 1/2] is a tuning parameter (typically \rho = 0.1) and \Delta q_i = q_i^{max} - q_i^{min}. The cost function is generally defined by:

h_s = \frac{\beta}{2} \sum_{i=1}^{n} \frac{\Delta_i^2}{\Delta q_i} \qquad (5.49)

where

\Delta_i =
\begin{cases}
q_i - q_i^{min_0} & \text{if } q_i < q_i^{min_0} \\
q_i - q_i^{max_0} & \text{if } q_i > q_i^{max_0} \\
0 & \text{else}
\end{cases} \qquad (5.50)

Therefore

g_{s_i} = \frac{\partial h_s}{\partial q_i} = \beta \frac{\Delta_i}{\Delta q_i} \qquad (5.51)

Figure 5.3 – Cost function, [Chaumette 01]

The classical avoidance task is thus given by:

\dot{q}_a = P_e g = P_e \frac{\partial h_s}{\partial q} = \beta P_e \Delta \qquad (5.52)

where \Delta = (\Delta_1 / \Delta q_1, \dots, \Delta_n / \Delta q_n).
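The activation mechanism of (5.48)-(5.51) can be sketched as follows; the threshold \rho and the joint values are illustrative:

```python
import numpy as np

def avoidance_gradient(q, q_min, q_max, rho=0.1, beta=1.0):
    """Gradient (5.51): nonzero only outside the safe interval of (5.48)."""
    dq = q_max - q_min
    lo = q_min + rho * dq                    # activation thresholds of (5.48)
    hi = q_max - rho * dq
    delta = np.where(q < lo, q - lo, np.where(q > hi, q - hi, 0.0))
    return beta * delta / dq

g = avoidance_gradient(np.array([0.05, 0.5, 0.95]), np.zeros(3), np.ones(3))
# negative below the lower threshold, zero inside, positive above the upper one
assert g[0] < 0 and g[1] == 0 and g[2] > 0
```

Unlike (5.46)-(5.47), the gradient is exactly zero in the whole safe interval.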

Joint velocity specification

In [Marchand 96], the behavior of the manipulator is specified in terms of velocities, by introducing more constraints on the performance of the system to ensure an exponential decrease toward the final joint position q_i^{mean}:

\dot{q}_i^{*}(t) = -\alpha \frac{q_i^{*}(t) - q_i^{mean}}{q_i^{max} - q_i^{min}} \qquad (5.53)

where \alpha is a constant value. The cost function defined for the velocity specification is given by:

h_s = \frac{\beta}{2} \sum_{i=1}^{n} (q_i(t) - q_i^{*}(t))^2 \qquad (5.54)


and therefore

g_{s_i} = \frac{\partial h_s}{\partial q_i} = \beta (q_i(t) - q_i^{*}(t)) \qquad (5.55)

Experimental results showed that the velocity specification method allows the robot to reach a final position closer to the middle of the joint range, and faster, than the position specification one. Let us finally note that, in these methods, the tuning of \beta is extremely difficult: if it is too large, it results in oscillations; if it is too small, it cannot ensure joint limit avoidance. Moreover, a suitable value for a given configuration may be either too large or too small for other configurations.

5.4.3 Redundancy-based iterative approach

In [Chaumette 01], a redundancy-based iterative approach is proposed to avoid the robot joint limits. This method does not affect the achievement of the main task: it solves the avoidance problem by automatically generating a robot motion compatible with the main task, iteratively solving a system of linear equations to cancel any motion on the axes that are in a critical situation. In this method, if the robot system has n joints and the main task has m independent constraints, the global task is given by:

e = J_1^{+} e_1 + \sum_{i=1}^{n_a} a_i E_{\bullet i} \qquad (5.56)

where n_a = \dim \mathrm{Ker}\, J_1 = n - m, the term \sum_{i=1}^{n_a} a_i E_{\bullet i} defines the available motions to avoid the joint limits or to perform any other secondary task, E is a basis of \mathrm{Ker}\, J_1 of dimension n × n_a, and a is the vector of gains that will be automatically computed. The effect of this gain vector is to stop the motion of all joints that are in a critical situation. The global velocity obtained using (5.56) for the ith joint is given by:

\dot{q}[i] = -\lambda \left( (J_1^{+} e_1)[i] + \sum_{k=1}^{n_a} a_k E_{ik} \right) \qquad (5.57)

In order to stop the motion of a robot joint j_i that nears its limits, \dot{q}[i] is set to zero, which gives:

(\dot{q}[i] = 0) \iff \left( \sum_{k=1}^{n_a} a_k E_{ik} = -(J_1^{+} e_1)[i] \right) \qquad (5.58)

If there are l joints near their limits, then (5.58) must be ensured for each of these joints. The common adaptive gain vector that works for all of them is automatically computed by solving the system of linear equations defined by:

\begin{bmatrix} \vdots \\ E_{i \bullet} \\ \vdots \end{bmatrix} a = \begin{bmatrix} \vdots \\ -(J_1^{+} e_1)[i] \\ \vdots \end{bmatrix} \equiv A a = d \qquad (5.59)

where a is a vector of dimension n_a, and A and d are built by stacking one row E_{i\bullet} and one component -(J_1^{+} e_1)[i] per joint near its limits. The solution of (5.59) has the form a^{*} = A^{+} d, and the control law is written as:

\dot{q} = -\lambda \left( J_1^{+} e_1 + \sum_{k=1}^{n_a} a_k^{*} E_{\bullet k} \right) = \dot{q}_1 - \lambda \sum_{k=1}^{n_a} a_k^{*} E_{\bullet k} \qquad (5.60)

When the number of axes in a critical situation is less than n_a, other axes may still enter a critical situation. This problem is handled by choosing a^{*} as:

a^{*} = A^{+} d + \sum_{k_2=1}^{n_b} b_{k_2} B_{\bullet k_2} \qquad (5.61)

where B = I_{n_a} - A^{+} A is a basis of \mathrm{Ker}\, A and n_b = \dim \mathrm{Ker}\, A. Let us note that \sum_{k_2=1}^{n_b} b_{k_2} B_{\bullet k_2} has no effect on the main task.

Finally, by injecting (5.61) into (5.60) we get:

\dot{q} = \dot{q}_1 - \lambda \sum_{k=1}^{n_a} \left( A^{+} d + \sum_{k_2=1}^{n_b} b_{k_2} B_{\bullet k_2} \right)_k E_{\bullet k} \qquad (5.62)

This control scheme ensures that the motion of all joints near their limits will stop. However, a discontinuity problem appears: each time a new joint has to be considered in the joint limit avoidance, this joint stops suddenly. To deal with this discontinuity, the condition on \dot{q}[i] given in (5.58) is changed to:

\sum_{k=1}^{n_a} a_k E_{ik} = -\gamma_i (J_1^{+} e_1)[i] \qquad (5.63)

where the coefficient \gamma_i is used to produce a smooth decay of the axis velocity and is given by:

\gamma_i =
\begin{cases}
\dfrac{q_i(t+1) - q_i^{min_0}}{q_i^{min} - q_i^{min_0}} & \text{if } q^{min} \leq q_i(t+1) \leq q_i^{min_0} \\[2mm]
0 & \text{if } q_i^{min_0} < q_i(t+1) < q_i^{max_0} \\[2mm]
\dfrac{q_i(t+1) - q_i^{max_0}}{q_i^{max} - q_i^{max_0}} & \text{if } q_i^{max_0} \leq q_i(t+1) \leq q^{max}
\end{cases} \qquad (5.64)

The predicted value q_i(t+1) is used for computing \gamma_i to ensure that the joint j_i will not reach any of its limits q^{min} or q^{max}. In the implementation described in [Chaumette 01], it is proposed to reduce the range of each joint by 20%. Furthermore, as the control law is discontinuous, it also relies on the low-level robot controller to smooth the resulting trajectory.
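The gain computation of (5.59)-(5.60) can be sketched in numpy as follows, assuming J_1 has full row rank; the critical-joint set and the task values are illustrative:

```python
import numpy as np

def iterative_avoidance(J1, e1, critical, lam=1.0):
    """Gains of (5.59): choose a so that qdot[i] = 0 for every critical joint i."""
    J1p = np.linalg.pinv(J1)
    E = np.linalg.svd(J1)[2][np.linalg.matrix_rank(J1):].T  # basis of Ker J1
    A = E[critical, :]                        # rows E_i. of the critical joints
    d = -(J1p @ e1)[critical]
    a = np.linalg.pinv(A) @ d                 # a* = A^+ d
    return -lam * (J1p @ e1 + E @ a)

J1 = np.array([[1.0, 1.0, 0.0]])
qdot = iterative_avoidance(J1, np.array([1.0]), critical=[0])
assert np.isclose(qdot[0], 0.0)               # the critical joint no longer moves
assert np.allclose(J1 @ qdot, -1.0)           # the main task velocity is preserved
```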

5.4.4 Manipulability measure approach

In [Nelson 95], a global objective function that realizes a compromise between the main task and secondary tasks is used, exploiting the DOFs of the robot that are redundant with respect to the main task. In this approach, a manipulability measure is determined to compare different manipulator configurations and indicate nearness to the joint limits. This measure incorporates a penalty function that approaches zero as the joints approach their limits. It is given by:

w(q)' = \left( 1 - e^{-k \prod_{i=1}^{n} \frac{(q_i - q_i^{min})(q_i^{max} - q_i)}{(q_i^{max} - q_i^{min})}} \right) \left( \det(J(q)) \right)^2 \qquad (5.65)

where k is a constant used to change the envelope of the penalty function. This approach was used to avoid kinematic singularities and joint limits in a target-tracking system, by introducing the manipulability measure into the visual tracking objective function to define a new objective function. However, the obtained motions can produce important perturbations, which are generally not compatible with the regulation to zero of the main task. Also, the global task can fail when the same joints are used for the avoidance and for achieving the main task.

5.4.5 Weighted least-norm solution approach

In [Chan 95], a weighted least-norm (WLN) solution to avoid the joint limits of redundant manipulators is presented. Unlike the basic gradient projection method, WLN guarantees joint limit avoidance while minimizing unnecessary self-motion. The weighted norm of the joint velocity vector is defined by:

|\dot{q}|_{W} = \sqrt{\dot{q}^{\top} W \dot{q}} \qquad (5.66)

where W ∈ R^{n×n} is a symmetric positive-definite weighting matrix. If the relationship between the joint velocities \dot{q} and the end-effector velocity v is v = J\dot{q}, the weighted least-norm solution is given by:

\dot{q}_{W} = W^{-1} J^{\top} [J W^{-1} J^{\top}]^{-1} v \qquad (5.67)

provided J is of full rank. To avoid the joint limits, the weighting matrix W, assumed to be diagonal, is given by W = \mathrm{diag}(w_1, w_2, \dots, w_n), where the ith element w_i is defined as:

w_i = 1 + \left| \frac{\partial H(q)}{\partial q_i} \right| \qquad (5.68)

where H(q) is the performance criterion (cost function), defined as:

H(q) = \sum_{i=1}^{n} \frac{(q_i^{max} - q_i^{min})^2}{4 (q_i^{max} - q_i)(q_i - q_i^{min})} \qquad (5.69)

In order to determine whether the joint is moving away from or toward its limits, the weighting factor is redefined as follows:

w_i =
\begin{cases}
1 + \left| \dfrac{\partial H(q)}{\partial q_i} \right| & \text{if } \Delta \left| \dfrac{\partial H(q)}{\partial q_i} \right| \geq 0 \\[2mm]
1 & \text{else}
\end{cases} \qquad (5.70)


In this case, if the joint is moving away from its limit, its motion is not penalized and the joint moves freely. This allows the redundancy to remain useful for other purposes such as obstacle avoidance. Finally, let us note that with WLN the self-motion depends on the configuration and on the end-effector velocity vector v: the magnitude of \dot{q}_{W} changes according to the magnitude and direction of v. When the end-effector is not moving, the WLN scheme does not change the configuration.
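A numpy sketch of (5.67)-(5.70) is given below. The test \Delta|\partial H/\partial q_i| \geq 0 is approximated here by the sign of the product of the gradient and the previous joint velocity, which is an assumption of this sketch, as are the toy values:

```python
import numpy as np

def wln_velocity(J, v, q, q_min, q_max, qdot_prev):
    """WLN solution (5.67) with the one-sided weights of (5.68)-(5.70)."""
    # dH/dq of the criterion (5.69)
    dH = (q_max - q_min) ** 2 * (2 * q - q_max - q_min) / (
        4 * (q_max - q) ** 2 * (q - q_min) ** 2)
    moving_toward = dH * qdot_prev > 0                 # |dH/dq| increasing
    w = np.where(moving_toward, 1.0 + np.abs(dH), 1.0)
    W_inv = np.diag(1.0 / w)
    return W_inv @ J.T @ np.linalg.inv(J @ W_inv @ J.T) @ v

J = np.array([[1.0, 1.0]])
q = np.array([0.9, 0.5])                               # joint 0 close to its limit
qd = wln_velocity(J, np.array([1.0]), q, np.zeros(2), np.ones(2),
                  qdot_prev=np.array([0.1, 0.1]))
assert np.allclose(J @ qd, [1.0])                      # task still realized
assert qd[0] < qd[1]                                   # penalized joint moves less
```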

5.4.6 General weighted least norm solution approach

In [Xiang 10], a general version of the weighted least-norm solution approach is presented. In this method, the performance criterion is selected to satisfy H_g(q) > h_g, where H_g(q) is the performance function and h_g is a predefined threshold. By regarding H_g(q) as a virtual joint variable, the solution is to make \dot{H}_g(q) = 0 when H_g(q) = h_g. The velocity of the virtual joint variables becomes \dot{q}_v = T(q)\dot{q}, where T(q) = [\nabla H_g(q) \; K_g(q)]^{\top}, \nabla H_g(q) being the gradient vector of H_g(q) and K_g(q) the orthogonal complement matrix of \nabla H_g(q). The kinematic equation relating the end-effector velocity to the virtual joint velocities is \dot{x} = J_v \dot{q}_v, where J_v = J T^{-1}(q) is the Jacobian matrix with respect to the virtual joint variables. The velocity command of the virtual joints is given by:

\dot{q}_v = W_v^{-1} J_v^{\top} [J_v W_v^{-1} J_v^{\top}]^{-1} \dot{x}_d \qquad (5.71)

The joint-velocity command is then given by:

\dot{q} = W_T^{-1} J^{\top} [J W_T^{-1} J^{\top}]^{-1} \dot{x}_d \qquad (5.72)

where W_T^{-1} = T^{-1}(q) W_v^{-1} T^{-\top}(q) and

w_{v_1} =
\begin{cases}
1 & \text{if } q_{v_1} - h_g > \varepsilon \text{ or } \dot{q}_{v_1} > 0 \\
0 & \text{if } q_{v_1} - h_g \leq \varepsilon \text{ and } \dot{q}_{v_1} \leq 0
\end{cases} \qquad (5.73)

This method effectively handles multiple constraints, whose number may even be larger than the number of joints, while guaranteeing a good performance of the main task [Xiang 10]. However, a drawback of this method is its inability to avoid singular configurations, because the transformation of the joint space changes the criterion used for the avoidance.

5.4.7 Constrained Newton optimization methods

In [Shahamiri 05], the proposed control scheme is based on constrained optimization using a null-space-biased Newton technique. It is shown that, by formulating the control law as an optimization problem, singularities and obstacles can be detected online, and an avoidance command can be computed in the null space of the main task. Similarly to the GPM, this approach exploits the redundant degrees of freedom to execute a secondary task in the null space of the main task.


5.4.8 Constrained quadratic optimization method

In [Craig 86] [Cheng 92] [Cheng 94] [Ellekilde 07], the problem of joint limit avoidance is addressed using a quadratic optimization method. In this method, it is assumed that the value q_i of joint j_i respects position, velocity and acceleration constraints defined by q_i^{min} < q_i < q_i^{max}, v_i^{min} < \dot{q}_i < v_i^{max}, and a_i^{min} < \ddot{q}_i < a_i^{max}, where v_i^{min} and v_i^{max} are the minimum and maximum velocities and a_i^{min} and a_i^{max} are the minimum and maximum accelerations of joint j_i respectively. To determine the feasible joint velocity \dot{q}, the three constraints are expressed as constraints on \dot{q} of the form A < \dot{q} < B. In order to consider the joint limits when computing \dot{q}_i, each joint is treated individually by defining the lower and upper limits of its velocity. When joint j_i is at an arbitrary position q_i^{*}, to ensure that a motion of j_i toward q_i^{max} can be stopped before actually reaching the limit, the velocity of the joint must be limited when arriving at q_i^{*}. This joint velocity upper bound is defined by the braking capabilities of the robot as well as the distance from the current joint position to the upper limit:

d_i = q_i^{max} - q_i^{*} \qquad (5.74)

The worst-case approximation of the distance traveled while braking with the maximal deceleration a_i^{min} is given by:

d_i = h (j+1) \dot{q}_i^{end} - h^2 a_i^{min} \frac{j(j+1)}{2} \qquad (5.75)

where h is the cycle time. By setting \dot{q}_i^{end} = 0 and \dot{q}_i^{end} = -h a_i^{min}, the upper and lower bounds for j can be obtained by solving (5.75) for j, from which the integer between them can be defined as:

j(d_i) = \mathrm{ROUND}\left( \frac{1}{2} \left( \sqrt{1 - \frac{8 d_i}{h^2 a_i^{min}}} - 1 \right) \right) \qquad (5.76)

By injecting (5.76) into (5.75) we get:

\dot{q}_i^{end}(d_i) = \frac{d_i + h^2 a_i^{min} \frac{j(d_i)(j(d_i)+1)}{2}}{h (j(d_i)+1)} \qquad (5.77)

The maximal velocity allowed at the current time t is Q_i^{max}(d_i) = \dot{q}_i^{end}(d_i) - j(d_i) a_i^{min} h. The worst-case estimate of the position at time t+h gives d_i^{*} = d_i - h Q_i^{max}(d_i), and the velocity limit needed at time t is \dot{q}_i^{max}(d_i) = Q_i^{max}(d_i^{*}) = Q_i^{max}(d_i - h Q_i^{max}(d_i)). The same steps can be followed to determine the lower velocity limit when moving toward the lower joint limit: \dot{q}_i^{min}(d_i) = Q_i^{min}(d_i^{*}) = Q_i^{min}(d_i - h Q_i^{min}(d_i)). At each control step, \dot{q}_i^{min}(d_i) and \dot{q}_i^{max}(d_i) are computed and then used in A < \dot{q} < B in order to exploit the entire joint range.
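The braking-distance computation (5.75)-(5.77) can be sketched as follows, taking a_i^min negative (a deceleration); the cycle time and acceleration values are illustrative:

```python
import math

def max_velocity(d, h, a_min):
    """Velocity bound derived from the braking relations (5.75)-(5.77)."""
    # number of braking cycles, (5.76)
    j = round(0.5 * (math.sqrt(1.0 - 8.0 * d / (h * h * a_min)) - 1.0))
    # velocity at the end of braking, (5.77)
    q_end = (d + h * h * a_min * j * (j + 1) / 2.0) / (h * (j + 1))
    return q_end - j * a_min * h              # Q_i^max(d_i)

h, a_min = 0.01, -2.0                         # 10 ms cycle, braking at -2 rad/s^2
# the closer the joint is to its limit, the smaller the allowed velocity
assert max_velocity(0.001, h, a_min) < max_velocity(0.1, h, a_min)
assert max_velocity(0.001, h, a_min) > 0.0
```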

5.4.9 Improved damped least-squares approach

In [Na 08], an improved damped least-squares solution is used for joint limit avoidance. In this method, to ensure that the joint limits will not be violated and that q_i^{min} < q_i < q_i^{max}, a diagonal matrix D(\lambda) is introduced to replace the constant damping factor \lambda of the damped least-squares approach (recall equation (5.39)):

D(\lambda) = \mathrm{diag}(\lambda_1, \dots, \lambda_n) \qquad (5.78)

where

\lambda_i = c \left[ \frac{2 q_i - q_i^{max} - q_i^{min}}{q_i^{max} - q_i^{min}} \right]^{p} \qquad (5.79)

where c, p ∈ R^{+} are constants and p is an even number.

When the joint value is well within its motion range, the small value of \lambda_i gives accurate solutions; when the joint is near its limits, the large value of \lambda_i results in a feasible solution. Therefore, each \lambda_i restricts the corresponding joint value q_i. The larger p is, the flatter the bottom of the damping-factor curve, which implies an approximately low cost for reasonable joint values.
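The per-joint damping factors (5.79) can be sketched in one line; the constants c and p below are illustrative:

```python
import numpy as np

def joint_damping(q, q_min, q_max, c=0.1, p=4):
    """Per-joint damping factors of (5.79): small mid-range, large near a limit."""
    return c * ((2 * q - q_max - q_min) / (q_max - q_min)) ** p

lam = joint_damping(np.array([0.5, 0.95]), np.zeros(2), np.ones(2))
D = np.diag(lam)                  # D(lambda) of (5.78)
# mid-range joint barely damped, near-limit joint strongly damped
assert lam[0] < 1e-6 < lam[1]
```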

5.4.10 Fuzzy logic-based approach

In [Ahson 96], a fuzzy logic-based method is presented to automatically choose an appropriate magnitude of self-motion to avoid the joint limits of kinematically redundant manipulators. For a given robot configuration, membership functions and linguistic rules are defined. A planar articulated arm having four DOFs is considered in a simulation test, where the limits of each joint are defined relative to its previous joint. The normalized cost function is defined as:

h_i = \frac{q_i - a_i}{a_i - q_i^{max}} \qquad (5.80)

where

a_i = q_{i-1} + \frac{1}{2} (\Delta q_i^{min} + \Delta q_i^{max}) \qquad (5.81)

is the average link position, \Delta q_i^{min} and \Delta q_i^{max} being the joint limits with respect to the previous link. This method helps in choosing appropriate joint angles by selecting the right amount of self-motion using linguistic rules. Its effectiveness has been demonstrated by computer simulation.

5.4.11 Neural network-based approach

Classical methods for avoiding the joint limits using the GPM do not guarantee the minimization of a given performance criterion for each individual joint, particularly when the number of degrees of redundancy becomes less than the number of critical axes for a given task. In [Assal 06a] [Assal 06b], an intelligent control system is proposed that exploits a neural network as an advanced nonlinear controller to solve the kinematic inversion while avoiding the joint limits of a redundant manipulator. The fuzzy-neuro system works as a hint generator that produces an approximate solution $\mathbf{q}_h = (q_{h_1}, q_{h_2}, \ldots, q_{h_n})$ of the inverse kinematic problem, representing the joint angle vector for the next step of the manipulator. All elements of $\mathbf{q}_h$ are within the values required for the avoidance of the joint angle limits. Two input vectors are presented to the fuzzy-neuro system to generate the hint vector $\mathbf{q}_h$, namely $\mathbf{q}_{rel} = (q_{r_1}, q_{r_2}, \ldots, q_{r_n}) \in \mathbb{R}^n$ and $\mathbf{v}_r = (v_{r_1}, v_{r_2}, \ldots, v_{r_n}) \in \mathbb{R}^n$, where $q_{r_i}$ is the relative position of joint $j_i$ with respect to its limits, given by:

\[ q_{r_i} = \begin{cases} \dfrac{q_i(t) - q_i^{m}}{q_i^{\max} - q_i^{m}}, & \text{if } q_i > q_i^{m} \\[2mm] -\dfrac{q_i(t) - q_i^{m}}{q_i^{\max} - q_i^{m}}, & \text{else} \end{cases} \tag{5.82} \]
where $q_i^{m} = \frac{q_i^{\max} + q_i^{\min}}{2}$, and $v_{r_i}$ is the absolute relative velocity of joint $j_i$ with respect to its maximum velocity limit, given by:

\[ v_{r_i} = \frac{|v_i|}{v_i^{\max}} \tag{5.83} \]

where $v_i^{\max}$ is the maximum velocity of joint $j_i$ and $v_i$ is the velocity of joint $j_i$, given by:
\[ v_i = \begin{cases} \dfrac{q_i(t) - q_i(t-1)}{\Delta t}, & \text{if } q_i(t) > q_i(t-1) \\[2mm] -\dfrac{q_i(t-1) - q_i(t)}{\Delta t}, & \text{else} \end{cases} \tag{5.84} \]
where $\Delta t$ is the sampling time.

The general concept behind the fuzzy rules for each joint angle is to generate an approximate value of each joint angle for the next step from the current and previous ones, according to the relative position of joint $j_i$:
\[ q_{h_i}(t+1) = q_i(t) + c\,(q_i(t) - q_i(t-1)) \tag{5.85} \]
where $c$ is a constant.

The output of the neural network is guided by feeding it an additional hint input vector, i.e. an approximate value of the required joint angle vector for the next step. The neural network then converges rapidly to the required joint angle vector close to the hint vector. In addition, this solves the redundancy resolution problem, since the hint values are precisely those that achieve joint limits avoidance.
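The inputs (5.82)-(5.83) and the hint rule (5.85) can be sketched as follows (a minimal numpy illustration with arbitrary example limits, sampling time and gain; since the two branches of (5.82) and (5.84) differ only in sign, they are folded into absolute values here):

```python
import numpy as np

def relative_position(q, q_min, q_max):
    """q_ri of Eq. (5.82): position of each joint relative to its
    limits, normalized by the half-range (q_max - q_m)."""
    q_m = (q_max + q_min) / 2.0                 # mid-range position
    return np.abs(q - q_m) / (q_max - q_m)      # sign folded into |.|

def relative_velocity(q_now, q_prev, dt, v_max):
    """v_ri of Eq. (5.83): |v_i| / v_i^max, with v_i from Eq. (5.84)."""
    v = (q_now - q_prev) / dt
    return np.abs(v) / v_max

def hint_vector(q_now, q_prev, c=0.5):
    """q_h(t+1) of Eq. (5.85): extrapolate each joint from its
    current and previous values."""
    return q_now + c * (q_now - q_prev)

q_min, q_max = np.array([-1.0, -2.0]), np.array([1.0, 2.0])
q_prev = np.array([0.0, 0.5])
q_now = np.array([0.1, 0.6])
q_r = relative_position(q_now, q_min, q_max)     # in [0, 1]
v_r = relative_velocity(q_now, q_prev, 0.01, np.array([20.0, 20.0]))
q_h = hint_vector(q_now, q_prev)
```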

5.4.12 Comparison between different JLA methods

In [Ellekilde 07], a comparison is presented among different joint limits avoidance methods, including the iterative gradient projection method, the weighted least norm method and a quadratic optimization method. This comparison shows that, as soon as a joint reaches the critical area of the iterative GPM method, located for instance at 0.8 or 0.95 in normalized joint coordinates, the method stops any motion of this joint toward the limit. However, with the iterative GPM method, the chosen critical limit may be reached at maximum speed. The WLN method does not completely block the joint, but simply increases the cost of moving toward its limit, causing the joint to approach the limit more slowly than when the robot is controlled with the iterative GPM method. The characteristic behaviors of the QP controller, the WLN and the iterative GPM methods are similar; however, the QP controller exploits the entire joint range without attempting to violate the limit, while the basic GPM controller stops any motion of the joint toward the limit as soon as the critical limit is reached.

The advantage of projecting the gradient of the secondary task onto the null space of the Jacobian used for the main task is that the joint limit process has no influence on the main task, and is hence executed under the constraint that the main task is realized. Unfortunately, it turns out that the success of these methods depends on a parameter which specifies the influence of the secondary task on the final robot command sequence [Chan 95] [Chaumette 01]. This parameter must be tuned very precisely to ensure the effectiveness of the joint limit process. If it is too small, the secondary task will not be able to influence the robot motion in time to avoid limits or singularities. If, on the other hand, it is too large, the robot motion will be disturbed by the secondary task, which results in undesired end-effector oscillations [Euler 89]. To make matters worse, a parameter suitable for a given robot configuration is often either too large or too small for other configurations. This effectively renders such basic gradient projection methods unsuitable for most applications. We will bring a significant contribution to this approach in the next chapter.

5.5 Conclusion

In this chapter, the state of the art concerning redundancy in robotic control, as well as joint limits avoidance strategies, has been presented. The main advantage of redundant manipulators over non-redundant ones is the property of self-motion, i.e. the ability to move the joints without moving the end-effector. Self-motion makes redundant manipulators capable of optimizing various performance criteria in addition to the main task motion [Siciliano 91]. This is not possible with non-redundant manipulators, since the joint configuration for any given end-effector position and orientation is unique, except for a finite number of variations. Our review of the state of the art is summarized by the two graphs of Fig. 5.4 and Fig. 5.5, for redundancy and joint limits avoidance respectively.


Figure 5.4 – Redundancy


Figure 5.5 – Joint limits avoidance


Chapter 6

New large projection operator for the redundancy framework

In this chapter, we propose a new projection operator for the redundancy framework, based on a task function defined as the norm of the usual error. This projection operator allows performing secondary tasks even when the main task is full rank. To ensure the convergence of the system, a switching strategy is then defined to switch from the new projection operator to the classical one before the norm of the total error reaches zero. An adaptive gain is also defined to slow down the convergence of the main task, allowing the secondary tasks to remain active for longer. Handling several tasks of different priority levels is also discussed, and the recursive formula of the new projection operator is deduced. The experimental results obtained agree with the analytical study and demonstrate the effectiveness of the proposed projection operator with respect to the classical one. The work described in this chapter led to the publication [Marey 10a].

6.1 Introduction

The methods presented in the previous chapter and based on the GPM approach require that the main task does not constrain all the robot DOFs. Indeed, in that case, the main task Jacobian is of full rank and no redundancy space is left for projecting any constraint. This is a limitation of the classical gradient projection method. That is why a proper selection of the projection operator is required, to provide secondary motions of the manipulator that respect the constraints and keep the projected vector from being distorted as much as possible [Khatib 96], [Yoshikawa 96]. We will see that the method we propose leads to significant improvements.

This chapter is organized as follows: in Section 6.2, the new projection operator is developed and discussed. In Section 6.2.1, different test cases with respect to varying numbers of task components and robot DOFs are presented and analytically studied as e nears zero. In Section 6.2.2, a switching strategy is given to solve the problems exhibited by this analysis. In Section 6.3, the recursive formula of the new projection operator is deduced for several task priority levels, and articular task priority is presented in Section 6.3.3. Finally, a description of the implementation of the new projection operator, followed by experimental results in visual servoing, is given in Section 6.4.

6.2 New projection operator $\mathbf{P}_{\|e\|}$

The main original idea of this chapter is to consider $\eta = \|\mathbf{e}\|^{\gamma}$, where $\gamma \in \mathbb{R} - \{0\}$, to build a new projection operator $\mathbf{P}_\eta$. Since the error norm and the error vector are linked by $\|\mathbf{e}\|^2 = \mathbf{e}^{\top}\mathbf{e}$, we have $\|\mathbf{e}\|\,\dot{\overline{\|\mathbf{e}\|}} = \mathbf{e}^{\top}\dot{\mathbf{e}}$, where $\dot{\overline{\|\mathbf{e}\|}}$ is the variation of the norm of the total error $\|\mathbf{e}\|$, from which we obtain:
\[ \dot{\eta} = \gamma \|\mathbf{e}\|^{\gamma-1}\,\dot{\overline{\|\mathbf{e}\|}} = \gamma \|\mathbf{e}\|^{\gamma-2}\, \mathbf{e}^{\top}\dot{\mathbf{e}} \tag{6.1} \]
Since $\dot{\mathbf{e}} = \mathbf{J}_e \dot{\mathbf{q}}$, we obtain by injecting $\dot{\mathbf{e}}$ in (6.1):
\[ \dot{\eta} = \gamma \|\mathbf{e}\|^{\gamma-2}\, \mathbf{e}^{\top}\mathbf{J}_e \dot{\mathbf{q}} \tag{6.2} \]
from which we deduce:
\[ \mathbf{J}_\eta = \gamma \|\mathbf{e}\|^{\gamma-2}\, \mathbf{e}^{\top}\mathbf{J}_e \tag{6.3} \]
Note that $\mathbf{J}_\eta \in \mathbb{R}^{1\times n}$ is at most of rank 1. For all $\mathbf{e}$ such that $\|\mathbf{e}\| \neq 0$, we then obtain:
\[ \mathbf{J}_\eta^{+} = \frac{1}{\gamma \|\mathbf{e}\|^{\gamma-2}\,(\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e})}\; \mathbf{J}_e^{\top}\mathbf{e} \tag{6.4} \]
If we want $\eta$ to have an exponential decrease, i.e. $\dot{\eta} = -\lambda \eta$, then the least square solution $\dot{\mathbf{q}}_\eta$ of $\mathbf{J}_\eta \dot{\mathbf{q}}_\eta = -\lambda \eta$ is given by:
\[ \dot{\mathbf{q}}_\eta = -\lambda\, \|\mathbf{e}\|^{\gamma}\, \mathbf{J}_\eta^{+} \tag{6.5} \]
and the general control law will be:
\[ \dot{\mathbf{q}} = \dot{\mathbf{q}}_\eta + \dot{\mathbf{q}}_\eta^{\perp} = \dot{\mathbf{q}}_\eta + \mathbf{P}_\eta\, \mathbf{g} \tag{6.6} \]
where $\mathbf{P}_\eta = (\mathbf{I}_n - \mathbf{J}_\eta^{+}\mathbf{J}_\eta)$ is a projection operator on the null space of $\mathbf{J}_\eta$ and $\mathbf{g}$ is any vector that can be designed to try to realize secondary tasks. Using (6.3) and (6.4), we directly get:
\[ \mathbf{P}_\eta = \mathbf{P}_{\|e\|} = \mathbf{I}_n - \frac{1}{\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}}\; \mathbf{J}_e^{\top}\mathbf{e}\,\mathbf{e}^{\top}\mathbf{J}_e \tag{6.7} \]
We can note that $\mathbf{P}_\eta$ does not depend on the value of $\eta$. It is thus denoted $\mathbf{P}_{\|e\|}$ in the following. Since $\mathbf{J}_\eta$ is at most of rank 1, $\mathbf{P}_{\|e\|}$ is at least of rank $n-1$, and thus filters the secondary task $\mathbf{g}$ very little. That is the main contribution of this work, especially if we remember that, in the classical approach, the rank of $\mathbf{P}_e$ is equal to $n - r$. As soon as $r > 1$, supplementary directions of motion are thus available to achieve the secondary tasks. That is particularly true when $\mathbf{J}_e$ is of full rank $n$, in which case $\mathbf{P}_e = \mathbf{0}$ and no secondary task at all can be considered in that usual case.
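The operator (6.7) and its key rank property can be checked numerically (a minimal numpy sketch with an arbitrary random full-rank Jacobian):

```python
import numpy as np

def P_norm_e(J, e):
    """Large projection operator P_{||e||} of Eq. (6.7).

    J: k x n task Jacobian, e: k-vector task error (e != 0).
    Projects onto the null space of J_eta = e^T J, a single row,
    so the result is a rank n-1 projector (vs. rank n-r for the
    classical P_e = I - J^+ J).
    """
    n = J.shape[1]
    a = J.T @ e                       # J_e^T e, an n-vector
    return np.eye(n) - np.outer(a, a) / (a @ a)

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))       # full-rank main task: P_e = 0
e = rng.standard_normal(6)
P = P_norm_e(J, e)
rank_new = np.linalg.matrix_rank(P, tol=1e-9)    # n - 1 = 5
P_e = np.eye(6) - np.linalg.pinv(J) @ J          # classical operator
rank_classical = np.linalg.matrix_rank(P_e, tol=1e-9)   # 0
```

With a full-rank 6x6 Jacobian the classical operator vanishes entirely, while the new operator still leaves five directions of motion available for secondary tasks.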


6.2.1 Analytical study of $\mathbf{P}_{\|e\|}$

This section presents an analytical study of the projection operator $\mathbf{P}_{\|e\|}$ given by (6.7). On one hand, we can note that, as $\mathbf{e} \rightarrow 0$, the value of $\mathbf{P}_{\|e\|}$ is unstable, since its denominator tends to zero. On the other hand, we would like $\mathbf{P}_{\|e\|}$ to tend to $\mathbf{P}_e$ when $\mathbf{e} \rightarrow 0$. Indeed, when $\mathbf{e} \rightarrow 0$, no perturbation has to be introduced by the secondary tasks on any component of $\mathbf{e}$, to preserve its convergence and stability. In the following, we illustrate this point with two examples.

Case when k = 2 and n = 2

If we consider a system with two DOFs and a task with two components $\mathbf{e} = (x, y)$, then $\mathbf{J}_e \in \mathbb{R}^{2\times 2}$. Assuming $\mathbf{J}_e$ is of full rank and given by:
\[ \mathbf{J}_e = \begin{bmatrix} a_1 & a_2 \\ b_1 & b_2 \end{bmatrix} \tag{6.8} \]
we get:
\[ \mathbf{J}_e^{\top}\mathbf{e}\,\mathbf{e}^{\top}\mathbf{J}_e = \begin{bmatrix} X^2 & XY \\ XY & Y^2 \end{bmatrix} \tag{6.9} \]
and
\[ \mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e} = X^2 + Y^2 \tag{6.10} \]
where $X = a_1 x + b_1 y$ and $Y = a_2 x + b_2 y$.

By injecting (6.9) and (6.10) in (6.7), assuming for simplicity that $x = y$, and taking the limit as $\mathbf{e} \rightarrow 0$ of the upper left entry of $\mathbf{P}_{\|e\|}$, we get:
\[ \lim_{\mathbf{e}\rightarrow 0} \mathbf{P}_{\|e\|}[1,1] = 1 - \frac{(a_1+b_1)^2}{(a_1+b_1)^2 + (a_2+b_2)^2} \neq 0 \tag{6.11} \]
while we have $\mathbf{P}_e = \mathbf{I}_n - \mathbf{J}_e^{-1}\mathbf{J}_e = \mathbf{0}$, which implies $\lim_{\mathbf{e}\rightarrow 0} \mathbf{P}_{\|e\|} \neq \mathbf{P}_e$. The same result can be obtained for systems with more DOFs, when the number of features is equal to the number of robot DOFs and the task Jacobian is of full rank.

Case when k = 2 and n = 3

If a system with three DOFs is considered with the same task $\mathbf{e} = (x, y)$, whose task Jacobian $\mathbf{J}_e \in \mathbb{R}^{2\times 3}$ is now given by:
\[ \mathbf{J}_e = \begin{bmatrix} a_1 & a_2 & a_3 \\ b_1 & b_2 & b_3 \end{bmatrix} \tag{6.12} \]
then we get:
\[ \mathbf{J}_e^{\top}\mathbf{e}\,\mathbf{e}^{\top}\mathbf{J}_e = \begin{bmatrix} X^2 & XY & XZ \\ XY & Y^2 & YZ \\ XZ & YZ & Z^2 \end{bmatrix} \tag{6.13} \]
and
\[ \mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e} = x^2 A + 2xy\, C + y^2 B \tag{6.14} \]
where $X = a_1x + b_1y$, $Y = a_2x + b_2y$, $Z = a_3x + b_3y$, $A = \sum_{i=1}^{3} a_i^2$, $B = \sum_{i=1}^{3} b_i^2$ and $C = \sum_{i=1}^{3} a_i b_i$. Injecting (6.13) and (6.14) in (6.7) we get:

\[ \mathbf{P}_{\|e\|} = \begin{bmatrix} 1 - \frac{X^2}{D} & -\frac{XY}{D} & -\frac{XZ}{D} \\[1mm] -\frac{XY}{D} & 1 - \frac{Y^2}{D} & -\frac{YZ}{D} \\[1mm] -\frac{XZ}{D} & -\frac{YZ}{D} & 1 - \frac{Z^2}{D} \end{bmatrix} \tag{6.15} \]

where $D = \mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}$. By considering again the particular case $x = y$, taking the limit of $\mathbf{P}_{\|e\|}$ when $\mathbf{e} \rightarrow 0$, and then multiplying the first column of the result by $\mathbf{J}_e$, we get:
\[ \mathbf{J}_e \lim_{\mathbf{e}\rightarrow 0} \mathbf{P}_{\|e\|}[1..3,\,1] = \begin{bmatrix} a_1 - \dfrac{a_1 X_0^2 + a_2 X_0 Y_0 + a_3 X_0 Z_0}{D_0} \\[2mm] b_1 - \dfrac{b_1 X_0^2 + b_2 X_0 Y_0 + b_3 X_0 Z_0}{D_0} \end{bmatrix} \neq \begin{bmatrix} 0 \\ 0 \end{bmatrix} \tag{6.16} \]
where $D_0 = \sum_{i=1}^{3}(a_i^2 + 2a_ib_i + b_i^2)$, $X_0 = a_1 + b_1$, $Y_0 = a_2 + b_2$ and $Z_0 = a_3 + b_3$. This result shows that once again $\lim_{\mathbf{e}\rightarrow 0} \mathbf{P}_{\|e\|} \neq \mathbf{P}_e$, since we have of course $\mathbf{J}_e\mathbf{P}_e = \mathbf{0}$.

The previous study leads to the following conclusion: as soon as the system nears its goal (that is, when $\mathbf{e} \rightarrow 0$), we have to switch from $\mathbf{P}_{\|e\|}$ to the classical projection operator $\mathbf{P}_e$. This switching ensures the convergence of the system, since it solves the instability problem of $\mathbf{P}_{\|e\|}$ as $\mathbf{e} \rightarrow 0$, due to the singularity of $\mathbf{J}_\eta$ when $\mathbf{e} = 0$.

6.2.2 Switching-based projection operator

The proposed switching strategy consists in defining a convex combination $\mathbf{P}_\lambda$ of the classical and the new projection operators:
\[ \mathbf{P}_\lambda = \lambda(\|\mathbf{e}\|)\, \mathbf{P}_{\|e\|} + (1 - \lambda(\|\mathbf{e}\|))\, \mathbf{P}_e \tag{6.17} \]
where the proposed formula for the switching function $\lambda(\|\mathbf{e}\|): \mathbb{R} \rightarrow [0, 1]$ is defined by:
\[ \lambda(\|\mathbf{e}\|) = \begin{cases} 1 & \text{if } e_1 < \|\mathbf{e}\| \\[1mm] \dfrac{\lambda(\|\mathbf{e}\|) - \lambda_0}{\lambda_1 - \lambda_0} & \text{if } e_0 \leq \|\mathbf{e}\| \leq e_1 \\[1mm] 0 & \text{if } \|\mathbf{e}\| < e_0 \end{cases} \tag{6.18} \]

where $e_1$ and $e_0$ are two threshold values that define the starting and ending conditions of the switching period, and $\lambda(t): \mathbb{R} \rightarrow \mathbb{R}$ is a continuous, monotonically increasing function such that $\lambda_1 = \lambda(e_1) \approx 1$ and $\lambda_0 = \lambda(e_0) \approx 0$. The sigmoid function $\lambda(t) = \frac{1}{1+\exp(-t)}$ shows early exponential growth from zero for negative $t$, which slows to linear growth of slope $1/4$ near $t = 0$, then approaches one with an exponentially decaying gap for positive $t$ [Pearl 20]. A good selection for the function $\lambda(\|\mathbf{e}\|)$ is thus the sigmoid given by:
\[ \lambda(\|\mathbf{e}\|) = \frac{1}{1 + \exp\left(-12\,\frac{\|\mathbf{e}\| - e_0}{e_1 - e_0} + 6\right)} \tag{6.19} \]
where the values of $e_0$ and $e_1$ have to be selected such that the system does not converge too fast during the interval $[e_0, e_1]$. This allows the switching to be performed smoothly over a sufficient number of iterations. Figure 6.1 shows the shape of the switching function $\lambda(\|\mathbf{e}\|)$ for $e_0 = 0.1$ and $e_1 \in \{0.3, 0.5, 0.7, 0.9\}$.

Figure 6.1 – Switching function λ(‖e‖).
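The switching function (6.19) and the convex combination (6.17) can be sketched as follows (a minimal numpy illustration; e0 and e1 take example values used in Fig. 6.1):

```python
import numpy as np

def switching_gain(e_norm, e0=0.1, e1=0.5):
    """Switching function lambda(||e||) of Eq. (6.19).

    Approximately 1 for ||e|| > e1 (use P_{||e||}), approximately 0
    for ||e|| < e0 (use the classical P_e), with a smooth sigmoid
    transition in between; -12x + 6 maps [e0, e1] onto [6, -6].
    """
    x = (e_norm - e0) / (e1 - e0)
    return 1.0 / (1.0 + np.exp(-12.0 * x + 6.0))

def P_switched(P_norm_e, P_e, e_norm, e0=0.1, e1=0.5):
    """Convex combination P_lambda of Eq. (6.17)."""
    lam = switching_gain(e_norm, e0, e1)
    return lam * P_norm_e + (1.0 - lam) * P_e

# Far from convergence the blend is essentially P_{||e||},
# near convergence it is essentially P_e:
lam_far = switching_gain(1.0)    # ~1
lam_near = switching_gain(0.05)  # ~0
lam_mid = switching_gain(0.3)    # 0.5 at the middle of [e0, e1]
```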

6.2.3 Stability analysis of $\dot{\mathbf{q}}_\eta$

Let us now consider $\dot{\mathbf{q}}_\eta$. After injecting (6.4) in (6.5), we obtain:
\[ \dot{\mathbf{q}}_\eta = -\frac{\lambda}{\gamma}\,\frac{\|\mathbf{e}\|^2}{\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}}\;\mathbf{J}_e^{\top}\mathbf{e} \tag{6.20} \]
Using the control scheme $\dot{\mathbf{q}}_\eta$ given by (6.20), a singular configuration is obtained if $\mathbf{e} \in \mathrm{Ker}(\mathbf{J}_e^{\top})$, as is clear from (6.3). This case corresponds to a local minimum of the classical control scheme $\dot{\mathbf{q}}_e = -\lambda \mathbf{J}_e^{+}\mathbf{e}$ (see (5.1) and (5.2)), since $\mathrm{Ker}(\mathbf{J}_e^{\top}) = \mathrm{Ker}(\mathbf{J}_e^{+})$. Another singularity occurs when $\mathbf{e} \rightarrow 0$ if the denominator $\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}$ converges to zero faster than the numerator. If the denominator and the numerator converge to zero at the same rate when $\mathbf{e} \rightarrow 0$, then $\lim_{\mathbf{e}\rightarrow 0}\dot{\mathbf{q}}_\eta$ is indeterminate and the system will be neither stable nor robust with respect to perturbations. Of course, we obtain the same results by studying the stability of the control scheme (6.20). Let us consider the candidate Lyapunov


function $V(t) = \eta^2$. By taking the derivative of $V(t)$ and injecting (6.2) in the result, we obtain:
\[ \dot{V}(t) = 2\eta\,\dot{\eta} = 2\gamma\, \|\mathbf{e}\|^{2\gamma-2}\, \mathbf{e}^{\top}\mathbf{J}_e\dot{\mathbf{q}} \tag{6.21} \]
Injecting (6.20) in (6.21), we get:
\[ \dot{V}(t) = -2\lambda\|\mathbf{e}\|^{2\gamma}\, \frac{\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}}{\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}} = -2\lambda\|\mathbf{e}\|^{2\gamma} \quad \text{when } \mathbf{J}_e^{\top}\mathbf{e} \neq 0 \tag{6.22} \]
We have $\dot{V}(t) < 0$ as soon as $\mathbf{e} \neq 0$ and $\mathbf{e} \notin \mathrm{Ker}(\mathbf{J}_e^{\top})$, thus ensuring the same asymptotic stability properties of the system as in the classical case [Chaumette 06], except when $\mathbf{e} = 0$. Let us note that these results are obtained whatever the value of $\gamma$: for instance, for main task functions such as $\eta = \sqrt{\|\mathbf{e}\|}$ when $\gamma = 0.5$, or $\eta = \|\mathbf{e}\|^2$ when $\gamma = 2$. We will see how to deal with these singularity and stability problems in Section 6.2.5.

6.2.4 Studying $\dot{\mathbf{q}}_\eta$ and $\mathbf{P}_\eta$ as $\mathbf{e} \rightarrow 0$ ($t \rightarrow \infty$)

We now study the behavior of $\dot{\mathbf{q}}_\eta$ and $\mathbf{P}_\eta$ when expressed as functions of the time parameter $t$.

Finding $\eta$ and $\mathbf{e}$ as functions of the time $t$

By injecting (6.20) in $\dot{\mathbf{e}} = \mathbf{J}_e\dot{\mathbf{q}}$ we get:
\[ \dot{\mathbf{e}} = -\frac{\lambda}{\gamma}\, \frac{\|\mathbf{e}\|^2}{\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}}\; \mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e} \]
If we assume that $\mathbf{J}_e\mathbf{J}_e^{\top} = \mathbf{I}_k$ (which is a strong assumption), then we get:
\[ \dot{\mathbf{e}} = \begin{cases} -\frac{\lambda}{\gamma}\,\mathbf{e} & \text{if } \mathbf{e} \neq 0 \\ 0 & \text{if } \mathbf{e} \rightarrow 0 \end{cases} \tag{6.23} \]
Solving this differential equation, we obtain:
\[ \mathbf{e}(t) = \mathbf{e}(0)\, \exp\!\left(-\tfrac{\lambda}{\gamma}\, t\right) \tag{6.24} \]
from which we deduce:
\[ \eta(t) = \|\mathbf{e}(t)\|^{\gamma} = \|\mathbf{e}(0)\|^{\gamma} \exp(-\lambda t) \;\Rightarrow\; \eta(t) \rightarrow 0 \text{ when } t \rightarrow \infty \tag{6.25} \]


Studying $\mathbf{v}_{\|e\|}$ as $\mathbf{e} \rightarrow 0$ ($t \rightarrow \infty$)

Assuming again that $\mathbf{J}_e\mathbf{J}_e^{\top} = \mathbf{I}_k$, we obtain from (6.20):
\[ \mathbf{v}_{\|e\|} = -\frac{\lambda}{\gamma}\, \frac{\|\mathbf{e}\|^2}{\|\mathbf{e}\|^2}\; \mathbf{J}_e^{\top}\mathbf{e} = -\frac{\lambda}{\gamma}\, \mathbf{J}_e^{\top}\mathbf{e} \quad \text{when } \mathbf{e} \neq 0 \tag{6.26} \]
Injecting (6.24) and (6.25) in (6.26) gives:
\[ \mathbf{v}_{\|e\|}(t) = -\frac{\lambda}{\gamma}\, \exp\!\left(-\tfrac{\lambda}{\gamma}\, t\right) \mathbf{J}_e^{\top}\mathbf{e}(0) \tag{6.27} \]
If $\mathbf{J}_e$ is constant, then as $t \rightarrow \infty$ we get:
\[ \lim_{t\rightarrow\infty} \mathbf{v}_{\|e\|}(t) = 0 \tag{6.28} \]

Studying $\mathbf{P}_{\|e\|}$ as $\mathbf{e} \rightarrow 0$ ($t \rightarrow \infty$)

Consider again the projection operator:
\[ \mathbf{P}_{\|e\|} = \mathbf{I}_n - \frac{1}{\mathbf{e}^{\top}\mathbf{J}_e\mathbf{J}_e^{\top}\mathbf{e}}\; \mathbf{J}_e^{\top}\mathbf{e}\,\mathbf{e}^{\top}\mathbf{J}_e \tag{6.29} \]
If $\mathbf{J}_e\mathbf{J}_e^{\top} = \mathbf{I}_k$, then:
\[ \mathbf{P}_{\|e\|} = \mathbf{I}_n - \frac{1}{\|\mathbf{e}\|^2}\; \mathbf{J}_e^{\top}\mathbf{e}\,\mathbf{e}^{\top}\mathbf{J}_e \tag{6.30} \]
Writing $\mathbf{P}_{\|e\|}$ as a function of $t$ by injecting (6.24) and (6.25) in (6.30), and assuming $\mathbf{J}_e$ is constant, we get:
\[ \mathbf{P}_{\|e\|}(t) = \mathbf{I}_n - \frac{1}{\|\mathbf{e}(0)\|^2}\; \mathbf{J}_e^{\top}\mathbf{e}(0)\,\mathbf{e}^{\top}(0)\,\mathbf{J}_e \tag{6.31} \]
Since this expression does not depend on time, we have of course:
\[ \lim_{t\rightarrow\infty} \mathbf{P}_{\|e\|}(t) = \mathbf{I}_n - \frac{1}{\|\mathbf{e}(0)\|^2}\; \mathbf{J}_e^{\top}\mathbf{e}(0)\,\mathbf{e}^{\top}(0)\,\mathbf{J}_e \tag{6.32} \]

To study the limit of the classical projection operator $\mathbf{P}_e = \mathbf{I}_n - \mathbf{J}_e^{+}\mathbf{J}_e$ as $t \rightarrow \infty$, three cases arise, based on the number $m$ of features and the number $n$ of DOFs of the system. In the first case, when $m > n$, we have $\mathbf{J}_e^{+} = (\mathbf{J}_e^{\top}\mathbf{J}_e)^{-1}\mathbf{J}_e^{\top}$, and thus:
\[ \mathbf{P}_e(t) = \mathbf{I}_n - \mathbf{J}_e^{+}\mathbf{J}_e = \mathbf{I}_n - (\mathbf{J}_e^{\top}\mathbf{J}_e)^{-1}\mathbf{J}_e^{\top}\mathbf{J}_e = \mathbf{0} \tag{6.33} \]
Similarly, in the second case, when $m = n$, we have $\mathbf{J}_e^{+} = \mathbf{J}_e^{-1}$ and:
\[ \mathbf{P}_e(t) = \mathbf{I}_n - \mathbf{J}_e^{-1}\mathbf{J}_e = \mathbf{0} \tag{6.34} \]
Finally, in the third case, when $m < n$, we have $\mathbf{J}_e^{+} = \mathbf{J}_e^{\top}(\mathbf{J}_e\mathbf{J}_e^{\top})^{-1}$, and since we assumed $\mathbf{J}_e\mathbf{J}_e^{\top} = \mathbf{I}_k$, we get:
\[ \mathbf{P}_e(t) = \mathbf{I}_n - \mathbf{J}_e^{+}\mathbf{J}_e = \mathbf{I}_n - \mathbf{J}_e^{\top}(\mathbf{J}_e\mathbf{J}_e^{\top})^{-1}\mathbf{J}_e = \mathbf{I}_n - \mathbf{J}_e^{\top}\mathbf{J}_e \tag{6.35} \]

These results show that if $\mathbf{J}_e\mathbf{J}_e^{\top} = \mathbf{I}_k$, then both the classical and the new projection operators may have non-zero values as $t \rightarrow \infty$ when $m < n$, which allows secondary tasks to be projected. If $m \geq n$ and $t \rightarrow \infty$, then $\mathbf{P}_e = \mathbf{0}$: the classical projection operator cancels any secondary task as soon as the system nears the desired configuration, while the new projection operator satisfies $\mathbf{P}_{\|e\|} \neq \mathbf{0}$. This leads to the same conclusion as in Section 6.2.1: when the system approaches the desired configuration, the new operator should switch to the classical one to ensure the convergence of the system.

6.2.5 Main task function

Possible control schemes

As discussed in Section 6.2.3, stability and singularity problems appear when $\mathbf{e} \rightarrow 0$ if the control scheme $\dot{\mathbf{q}}_\eta$ defined in (6.20) is used as the output of the main task. These problems can be avoided by switching from $\dot{\mathbf{q}}_\eta$ to the classical $\dot{\mathbf{q}}_e$ given in (5.2), by defining as control scheme:
\[ \dot{\mathbf{q}} = \dot{\mathbf{q}}_\lambda + \mathbf{P}_\lambda\, \mathbf{g} \tag{6.36} \]
where
\[ \dot{\mathbf{q}}_\lambda = \lambda(\|\mathbf{e}\|)\, \dot{\mathbf{q}}_\eta + (1 - \lambda(\|\mathbf{e}\|))\, \dot{\mathbf{q}}_e \tag{6.37} \]
with $\lambda(\|\mathbf{e}\|)$ given by (6.18), for which $e_0$ is selected such that, as long as $\|\mathbf{e}\| > e_0$, the singularity effect of $\dot{\mathbf{q}}_{\|e\|}$ does not appear. Applying this switching strategy ensures that the main task avoids the singularity when $\mathbf{e} \rightarrow 0$, since:
\[ \lim_{\mathbf{e}\rightarrow 0} \dot{\mathbf{q}}_\lambda = \lim_{\mathbf{e}\rightarrow 0} \dot{\mathbf{q}}_e = 0 \tag{6.38} \]
In order to avoid the indetermination problem when the denominator of $\dot{\mathbf{q}}_{\|e\|}$ is equal to zero (that is, when $\mathbf{e} \in \mathrm{Ker}(\mathbf{J}_e^{+})$), a direct switching without a transition interval from $\dot{\mathbf{q}}_{\|e\|}$ to $\dot{\mathbf{q}}_e$ can be employed, by setting $e_0$ to the value of the error norm at which the indetermination problem occurs.


Finally, the classical control scheme $\dot{\mathbf{q}}_e$ can also be used directly with the new projection operator $\mathbf{P}_\lambda$, instead of $\dot{\mathbf{q}}_\eta$ or $\dot{\mathbf{q}}_\lambda$. In that case, the control scheme becomes:
\[ \dot{\mathbf{q}} = \dot{\mathbf{q}}_e + \mathbf{P}_\lambda\, \mathbf{g} \tag{6.39} \]
Starting the task with the classical control scheme ensures an exponential decrease of each error component when $\mathbf{g} = 0$, which may be useful in practice, as will be seen in Section 6.4.

Adaptive gain β(‖e‖)

Usually, the gain $\lambda$ involved in the classical control scheme $\dot{\mathbf{q}}_e = -\lambda \mathbf{J}_e^{+}\mathbf{e}$ is tuned so that the convergence rate of the main task is as fast as possible while preserving the stability of the system. This leads to increasing $\lambda$ as $\mathbf{e}$ decreases (for instance with $\lambda(\|\mathbf{e}\|) = \lambda_{\min} + K \exp(-B\|\mathbf{e}\|)$, where $\lambda_{\min}$, $K$ and $B$ are constant positive scalars). However, a fast convergence rate for the main task may not be adequate in our case. Indeed, since $\mathbf{P}_{\|e\|}$ switches to $\mathbf{P}_e$ when $\mathbf{e} \rightarrow 0$, this switch may occur too early to leave enough time to take the secondary tasks into account. This is especially the case when the secondary tasks are critical (such as obstacle and joint limits avoidance, for example). That is why it may be useful to slow down the convergence rate of the main task. For that, the norm of the total error may be introduced in the control gain, and one may think of using $\lambda = \lambda_0\|\mathbf{e}\|$, where $\lambda_0$ is a constant. However, as soon as $\|\mathbf{e}\| \rightarrow 0$, the convergence rate of the main task then becomes too slow. To avoid this problem, we define a gain function $\beta(\|\mathbf{e}\|)$ that returns the norm of the total error as long as $\mathbf{P}_\lambda = \mathbf{P}_{\|e\|}$, and switches smoothly to 1 as soon as the norm of the error reaches a specified threshold value. The adaptive gain function $\beta(\|\mathbf{e}\|)$ can thus be written as:
\[ \beta(\|\mathbf{e}\|) = 1 - \lambda(\|\mathbf{e}\|) + \|\mathbf{e}\|\, \lambda(\|\mathbf{e}\|) \tag{6.40} \]
with the same switching conditions as those used to switch from $\mathbf{P}_{\|e\|}$ to $\mathbf{P}_e$. Setting $\lambda = \lambda_0\,\beta(\|\mathbf{e}\|)$, the control scheme (6.39) becomes:
\[ \dot{\mathbf{q}} = -\lambda_0\, \beta(\|\mathbf{e}\|)\, \mathbf{J}_e^{+}\mathbf{e} + \mathbf{P}_\lambda\, \mathbf{g} \tag{6.41} \]

Using this control scheme increases the time during which the secondary tasks remain active, thanks to the use of $\mathbf{P}_{\|e\|}$.

Let us finally note that the new projection operator $\mathbf{P}_\lambda$ does not modify the stability properties of the system. More precisely, if the control schemes (6.36), (6.39) and (6.41) are globally or locally stable in the Lyapunov sense using $\mathbf{P}_e$, they are also stable with $\mathbf{P}_\lambda$. This is due to the fact that, by construction, the variation of the Lyapunov function $\eta^2$ does not depend on $\mathbf{g}$, for any $\mathbf{g}$.
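The adaptive gain (6.40) and the resulting control law (6.41) can be sketched as follows (a minimal numpy illustration; the switching function (6.19) is recomputed inline and all numerical values are arbitrary examples):

```python
import numpy as np

def adaptive_gain(e_norm, e0=0.1, e1=0.5):
    """beta(||e||) of Eq. (6.40): equals ||e|| while the new operator
    is active (lambda ~ 1), and blends smoothly to 1 near the goal."""
    x = (e_norm - e0) / (e1 - e0)
    lam = 1.0 / (1.0 + np.exp(-12.0 * x + 6.0))   # Eq. (6.19)
    return 1.0 - lam + e_norm * lam

def control_law(J, e, P_lambda, g, lam0=1.0):
    """q_dot of Eq. (6.41): slowed-down main task plus projected
    secondary task."""
    beta = adaptive_gain(np.linalg.norm(e))
    return -lam0 * beta * np.linalg.pinv(J) @ e + P_lambda @ g

# Far from the goal beta ~ ||e|| (slow main task, long secondary
# activity); close to the goal beta ~ 1 (classical behavior):
beta_far = adaptive_gain(2.0)    # ~2.0
beta_near = adaptive_gain(0.01)  # ~1.0
```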

6.3 Efficient task-priority using the new operator $\mathbf{P}_{\|e\|}$

Since the projection operator $\mathbf{P}_\lambda$ provides a highly redundant system, it is possible to introduce several constraints and to manage the order of priority between them. When using the new projection operator $\mathbf{P}_\lambda$, each subtask is equivalent to a scalar constraint, and it is thus possible to consider a large number of subtasks, whatever their corresponding dimensions and ranks ($\mathrm{rank}(\mathbf{J}_i) \leq n$ for $i = 1, \ldots, l$). This enlarges the applicability domain of redundancy-based task-priority.

6.3.1 Two tasks

Recalling (5.25), when only two tasks are considered, the general control law is given by:
\[ \dot{\mathbf{q}} = \dot{\mathbf{q}}_1 + \left(\mathbf{J}_2\,\mathbf{P}_{\|e_1\|}\right)^{+}\left(\dot{\mathbf{e}}^{*}_2 - \mathbf{J}_2\,\dot{\mathbf{q}}_1\right) \tag{6.42} \]
which allows us to use the efficient scheme for task-priority even when the main task uses all the robot degrees of freedom.

6.3.2 Several tasks

Similarly to Section 5.3.4.2, for $l$ tasks we define the general control scheme as $\dot{\mathbf{q}} = \dot{\mathbf{q}}_l$, where $\dot{\mathbf{q}}_i$ is given by (see (5.26)):
\[ \dot{\mathbf{q}}_i = \begin{cases} \dot{\mathbf{q}}_{i-1} + \left(\mathbf{J}_i\,\mathbf{P}^{A}_{\|e_{i-1}\|}\right)^{+}\left(\dot{\mathbf{e}}^{*}_i - \mathbf{J}_i\,\dot{\mathbf{q}}_{i-1}\right), & \text{if } i = 2, \ldots, l \\[1mm] \mathbf{J}^{+}_1\,\dot{\mathbf{e}}^{*}_1, & \text{if } i = 1 \end{cases} \tag{6.43} \]
where $\mathbf{P}^{A}_{\|e_i\|}$ is the new projection operator on the null space of the augmented Jacobian $\mathbf{J}^{A}_i$. When $i = 1$, $\mathbf{P}^{A}_{\|e_1\|} = \mathbf{P}_{\|e_1\|}$, while for $i = 2, \ldots, l$, $\mathbf{P}^{A}_{\|e_i\|}$ is defined by:
\[ \mathbf{P}^{A}_{\|e^A_i\|} = \mathbf{I}_n - \frac{1}{\mathbf{e}^{A\top}_i \mathbf{J}^{A}_i \mathbf{J}^{A\top}_i \mathbf{e}^{A}_i}\; \mathbf{J}^{A\top}_i \mathbf{e}^{A}_i\, \mathbf{e}^{A\top}_i \mathbf{J}^{A}_i \tag{6.44} \]

where $\mathbf{e}^{A}_i$ is the augmented task and $\mathbf{J}^{A}_i$ the augmented Jacobian:
\[ \mathbf{e}^{A}_i = (\mathbf{e}^{A}_{i-1}, \mathbf{e}_i), \qquad \mathbf{e}^{A}_1 = \mathbf{e}_1 \tag{6.45} \]
\[ \mathbf{J}^{A}_i = (\mathbf{J}^{A}_{i-1}, \mathbf{J}_i), \qquad \mathbf{J}^{A}_1 = \mathbf{J}_1 \tag{6.46} \]
Equation (6.44) can be written as:
\[ \mathbf{P}^{A}_{\|e^A_i\|} = \mathbf{I}_n - \frac{1}{\mathbf{a}^{A}_i\,\mathbf{a}^{A\top}_i}\; \mathbf{a}^{A\top}_i\,\mathbf{a}^{A}_i \tag{6.47} \]
where $\mathbf{a}^{A}_i$ is defined as:
\[ \mathbf{a}^{A}_i = \mathbf{e}^{A\top}_i\,\mathbf{J}^{A}_i \tag{6.48} \]
The recursive formula of $\mathbf{a}^{A}_i$ is obtained by injecting (6.45) and (6.46) in (6.48), which gives:
\[ \mathbf{a}^{A}_i = \begin{bmatrix} \mathbf{e}^{A\top}_{i-1} & \mathbf{e}^{\top}_i \end{bmatrix} (\mathbf{J}^{A}_{i-1}, \mathbf{J}_i) = \mathbf{e}^{A\top}_{i-1}\mathbf{J}^{A}_{i-1} + \mathbf{e}^{\top}_i\mathbf{J}_i = \mathbf{a}^{A}_{i-1} + \mathbf{e}^{\top}_i\mathbf{J}_i \tag{6.49} \]


We thus obtain:
\[ \mathbf{a}^{A}_i = \begin{cases} \mathbf{a}^{A}_{i-1} + \mathbf{e}^{\top}_i\mathbf{J}_i, & \text{if } i = 2, \ldots, l \\ \mathbf{e}^{\top}_1\mathbf{J}_1, & \text{if } i = 1 \end{cases} \tag{6.50} \]
Another recursive formula for $\mathbf{P}^{A}_{\|e^A_i\|}$ can be obtained directly from (5.35) by using the particular form $\mathbf{J}_{\|e_i\|} = \frac{1}{\|\mathbf{e}_i\|}\,\mathbf{e}^{\top}_i\mathbf{J}_i$:
\[ \mathbf{P}^{A}_{\|e^A_i\|} = \begin{cases} \mathbf{P}^{A}_{\|e^A_{i-1}\|} - \boldsymbol{\Pi}, & \text{if } i = 2, \ldots, l \\[1mm] \mathbf{I}_n - \dfrac{1}{\mathbf{e}^{\top}_1\mathbf{J}_1\mathbf{J}^{\top}_1\mathbf{e}_1}\; \mathbf{J}^{\top}_1\mathbf{e}_1\,\mathbf{e}^{\top}_1\mathbf{J}_1, & \text{if } i = 1 \end{cases} \tag{6.51} \]
where
\[ \boldsymbol{\Pi} = \frac{1}{\mathbf{b}^{A}_i\,\mathbf{b}^{A\top}_i}\; \mathbf{b}^{A\top}_i\,\mathbf{b}^{A}_i \tag{6.52} \]
and
\[ \mathbf{b}^{A}_i = \mathbf{e}^{\top}_i\,\mathbf{J}_i\,\mathbf{P}^{A}_{\|e^A_{i-1}\|} \tag{6.53} \]
As can be seen, the computational cost of the recursive formula (6.51), which requires the computation of the term $\boldsymbol{\Pi}$, is higher than that of the other recursive form defined by (6.47) and (6.50). That is why we prefer to use (6.47) and (6.50) in practice. Finally, let us note that a switching from $\mathbf{P}^{A}_{\|e_i\|}$ to $\mathbf{P}^{A}_i$ has to be performed as soon as $\mathbf{e}^{A}_i \rightarrow 0$. This can be done using the same strategy as the one proposed in Section 6.2.2.

6.3.3 Redundancy-based articular-task-priority

We now consider the case where a task $\mathbf{e}_i$ describes a desired motion in the articular space, given by an articular velocity $\dot{\mathbf{q}}^{*}_i$. Instead of replacing the Jacobian $\mathbf{J}_i$ by the identity matrix $\mathbf{I}_n$ as in [Hanafusa 81], we propose to replace $\mathbf{J}_i$ by a diagonal matrix $\mathbf{I}$ having zero values in all components not used by $\mathbf{e}_i$. In that case, the control schemes (5.26) and (6.43) become:
\[ \dot{\mathbf{q}}_i = \dot{\mathbf{q}}_{i-1} + \left(\mathbf{I}\,\mathbf{P}^{A*}_{i-1}\right)^{+}\left(\dot{\mathbf{q}}^{*}_i - \mathbf{I}\,\dot{\mathbf{q}}_{i-1}\right), \quad i = 2, \ldots, l \tag{6.54} \]
where $\mathbf{P}^{A*}_{i-1}$ is either $\mathbf{P}^{A}_{\|e_{i-1}\|}$ or $\mathbf{P}^{A}_{i-1}$. In order to analyze the behavior of (6.54), let us consider the simple case where $\mathbf{e}_i \in \mathbb{R}^n$ sends a motion to a joint $j_0$. Equation (6.54) indicates that if the joint $j_0$ is not used by the higher priority tasks, then $\dot{q}^{*}_i[j_0]$ is not modified before being projected by $\left(\mathbf{I}\,\mathbf{P}^{A*}_{i-1}\right)^{+}$. If the joint $j_0$ is already used by the first $(i-1)$ tasks, then $\dot{q}_{i-1}[j_0] \neq 0$ is subtracted from $\dot{q}^{*}_i[j_0]$ and the difference is then projected by $\left(\mathbf{I}\,\mathbf{P}^{A*}_{i-1}\right)^{+}$. When $j \neq j_0$ and $\dot{q}_{i-1}[j] \neq 0$, we have $(\mathbf{I}\,\dot{\mathbf{q}}_{i-1})[j] = 0$, which ensures that no unrequired motion from $\dot{q}_{i-1}[j]$ is projected, while using the identity matrix $\mathbf{I}_n$ as proposed in [Hanafusa 81] gives $(\dot{\mathbf{q}}^{*}_i - \mathbf{I}_n\dot{\mathbf{q}}_{i-1})[j] = -\dot{q}_{i-1}[j]$.

Now, if a zero articular velocity is to be sent to the joint $j_0$, then $\dot{q}^{*}_i[j_0] = 0$ and $\mathbf{I}[j_0, j_0] = 1$. When $\left(\mathbf{I}\,\mathbf{P}^{A*}_{i-1}\right)^{+}$ lets the required movement in the $j_0$-th component of $(\dot{\mathbf{q}}^{*}_i - \mathbf{I}\,\dot{\mathbf{q}}_{i-1})$ be projected completely, we get $\dot{q}_i[j_0] = \dot{q}^{*}_i[j_0]$.

If two tasks $\mathbf{e}_1$ and $\mathbf{e}_2$ are considered, where $\mathbf{e}_2$ is defined in the articular space, the control schemes (5.25) and (6.42) become:
\[ \dot{\mathbf{q}} = \dot{\mathbf{q}}_1 + \left(\mathbf{I}\,\mathbf{P}^{*}_1\right)^{+}\left(\dot{\mathbf{e}}_2 - \mathbf{I}\,\dot{\mathbf{q}}_1\right) \tag{6.55} \]
where $\mathbf{P}^{*}_1$ is either $\mathbf{P}_{\|e_1\|}$ or $\mathbf{P}_{e_1}$. For a 6 DOF robot manipulator, if $\mathbf{e}_2$ is used to move the first three joints, we set $\mathrm{diag}(\mathbf{I}) = (1, 1, 1, 0, 0, 0)$; when the last three joints are to be moved by $\mathbf{e}_2$, we set $\mathrm{diag}(\mathbf{I}) = (0, 0, 0, 1, 1, 1)$.

This analysis shows that redundancy-based task priority provides two directions of priority. The first is task-priority, which ensures the execution of the most significant tasks, defined as those of higher priority, while using the remaining redundancy to perform the lower priority tasks whenever possible. The second is a behavior-priority of the global motion obtained by the control scheme, which tries to produce the motion required to perform the tasks of lower priority whenever possible, based on the available redundancy.

6.4 Experimental Results

The experimental results presented in this section have been obtained by applying the proposed methods in visual servoing, using a six DOF gantry robot. The task function is defined by $\mathbf{e} = \mathbf{s} - \mathbf{s}^{*}$, where $\mathbf{s}$ and $\mathbf{s}^{*} \in \mathbb{R}^{k}$ are two vectors representing the current and the desired selected visual features. The task Jacobian is $\mathbf{J}_e = \mathbf{L}_s\mathbf{M}\mathbf{J}_q$, where $\mathbf{J}_q$ is the robot Jacobian and $\mathbf{M}$ is the matrix that relates $\mathbf{v}$ to the variation of the camera pose $\mathbf{p}$ by $\mathbf{v} = \mathbf{M}\dot{\mathbf{p}}$. Two different objects are used. The first object (obj1) is a square of side length 0.1 m composed of four points. The Cartesian coordinates of these points in the image define the visual features $(x_1, y_1, x_2, y_2, x_3, y_3, x_4, y_4)$ used in the visual servoing system. In that case, we have a system with six DOFs and eight visual features, and a main task of full rank 6. The second object (obj2) is a cylinder of radius 0.08 m. The visual features are now defined by the parameters $(\rho_1, \theta_1, \rho_2, \theta_2)$ that describe the configuration of the cylinder in the image plane, and the main task is of full rank 4. The camera pose is represented by $\mathbf{p} = (\mathbf{t}, \mathbf{r})$, where $\mathbf{t}$ is a translation vector expressed in meters and $\mathbf{r} = \theta\mathbf{u}$ is the rotation vector expressed in degrees using the classical angle-axis representation.

Two sets of experiments are presented: Set A in Section 6.4.1 uses obj1 to compare different control schemes and to study the behavior of the new projection operator when applying simple secondary tasks; Set B in Section 6.4.2 uses obj2 to compare $\mathbf{P}_e$ and $\mathbf{P}_\lambda$ when classical and efficient redundancy-based task-priority control schemes are employed.


6.4.1 New Projection operator using obj1

Three different cases have been implemented to validate and investigate the efficiency of the

new projection operator when a full rank main task is considered. The chosen task consists

of positioning the camera with respect to obj1. In all cases, the desired camera pose is (0, 0,

0.5, 0, 0, 0), which means that the camera has to be at 0.5 m in front of the square obj1 so

that it appears as a centered square in the image.

6.4.1.1 Case A1: Control schemes q‖e‖ and qe

In this case, we study the behavior of q‖e‖ and the effect of the adaptive gain β(‖e‖) with the classical control scheme qe. The initial camera pose is I = (−0.1, 0.1, 1.0, 0, 0, 0) and no

secondary task is added to the main task. Applying q‖e‖, the initial movement consists of translations along the x-axis and y-axis combined with a small rotation around the y-axis until iteration 80, as depicted in Fig. 6.2(a). Then the translational movement along the z-axis starts to increase. Figure 6.2(a) also shows that, as soon as ‖e‖ nears zero, the control law q‖e‖ becomes completely unstable, as explained in Section 6.2. As expected, the norm of the total error decreases exponentially, except during the instability, because of the perturbations it induces. If the

switching to qe is performed when ‖e‖ → 0, that is, using (6.37), the system converges, as can be seen in Fig. 6.2(b). As shown in Fig. 6.2(c), applying the classical control scheme qe, each error component converges exponentially to zero (as can be seen from the image points trajectories, which are now pure straight lines), as well as the norm of the error. Setting the gain λ = λ0‖e‖, the convergence rate is extremely slow, as depicted in Fig. 6.2(d).

Finally, using the adaptive gain function β(‖e‖) with the classical control law ensures the

convergence of the system as shown in Fig. 6.2(e).

6.4.1.2 Case A2: Secondary task g = (0.1, 0, 0, 0, 0, 0)

Now, we use the new projection operator and apply a simple secondary task consisting of a translation at 10 cm/s along the x-axis. Let us first recall that using Pe would not allow any secondary task to be achieved. As expected, using the projection operator P‖e‖ during all the servo does not produce a satisfactory behavior (see Fig. 6.3). Indeed, some secondary motions are produced while the main task tries to bring the robot to the desired pose, where it then has to be motionless. This explains the oscillating behavior of the velocity components of the global task. In Fig. 6.4, the projection operator Pλ is used and the system switches

automatically to the classical projection operator. This allows the secondary task to be taken

into account at the beginning of the servo and ensures the convergence of the system to

the desired position. Fig. 6.5 shows the results obtained when the switching gain function

β(‖e‖) is used. It is clear that the secondary task is considered during a larger number of iterations.
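The rank contrast between Pe and the new operator can be checked numerically. The sketch below is our reconstruction under the definitions of this chapter: the scalar task e_ν = ‖e‖ has the row Jacobian J_ν = eᵀJe/‖e‖, and P‖e‖ = I − J_ν⁺ J_ν is of rank n − 1 even when the main task Jacobian is full rank, while Pe then vanishes.

```python
import numpy as np

def classical_projector(J):
    """Pe = I - J^+ J: null-space projector of the full task Jacobian."""
    n = J.shape[1]
    return np.eye(n) - np.linalg.pinv(J) @ J

def norm_projector(J, e):
    """P_||e||: projector on the null space of the scalar task e_nu = ||e||,
    whose Jacobian is the row vector J_nu = e^T J / ||e||.  Its rank is n - 1."""
    n = J.shape[1]
    J_nu = (e @ J / np.linalg.norm(e)).reshape(1, n)
    return np.eye(n) - np.linalg.pinv(J_nu) @ J_nu

rng = np.random.default_rng(0)
J = rng.standard_normal((6, 6))   # full-rank main task Jacobian
e = rng.standard_normal(6)
Pe = classical_projector(J)       # numerically zero: no redundancy left
Pn = norm_projector(J, e)         # rank 5: a secondary task can still be projected
print(np.allclose(Pe, 0.0), np.linalg.matrix_rank(Pn))    # True 5
```

A secondary task g projected through Pn leaves ‖e‖ unchanged to first order, since J_ν Pn g = 0 by construction.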


[Figure 6.2 plots, panels: (a) q‖e‖; (b) q‖e‖ & switching; (c) qe, λ = λ0; (d) qe, λ = λ0‖e‖; (e) qe, λ = λ0β(‖e‖)]

Figure 6.2 – Results for case A1. Comparison between the different control schemes. Line 1: image points trajectories, line 2: image point error, line 3: norm of the total error, line 4: translational camera velocity (cm/s) and rotational camera velocity (deg/s).



Figure 6.3 – Results for case A2: qe, P‖e‖. Line 1, from left to right: image points trajectories, image point error and norm of the total error; line 2, from left to right: camera velocity components (cm/s and deg/s) of the general, main and secondary tasks respectively.


Figure 6.4 – Results for case A2: qe, Pλ, λ = λ0.



Figure 6.5 – Results for case A2: qe, Pλ, λ = λ0β(‖e‖).

6.4.1.3 Case A3: Secondary task g = (−0.02, 0.04, 0.02, 4, 4, 4)

In this case, a more general secondary task is used, with non-zero values in all its components. Using the new projection operator Pλ with the classical control scheme, most of the secondary task components are successfully projected onto the main task (see Fig. 6.6). At each iteration, the main task tries to keep the exponential decrease of each error component, while the projection operator allows the exponential decrease of the norm of the total error to be kept, which leads to a satisfactory behavior of the system. Then, thanks to the switching strategy to the

classical projection operator, the system converges to the desired pose. When the switching

gain function β(‖e‖) is used (see Fig. 6.7), the number of iterations where the secondary

task is active increases significantly.



Figure 6.6 – Results for case A3: qe, Pλ, λ = λ0.


Figure 6.7 – Results for case A3: qe, Pλ, λ = λ0β(‖e‖).


6.4.2 Comparison between Pλ and Pe, classical and efficient task-priority, using obj2

In these experiments, the robot has to reach a desired position where the optical axis of the camera is normal to the axis of the cylinder and the y-axis of the camera frame is parallel to this axis (see Fig. 6.8). In that case, the rank of Pe is two while the rank of P‖e‖ is five. Different motions moving the robot with a velocity of 0.03 m/s in the camera frame are specified as secondary tasks. The vector g that produces this motion is defined as g = (0, 0, 0.03, 0, 0, 0) when 1 ≤ iter < 100, g = (0, 0, −0.03, 0, 0, 0) when 100 ≤ iter < 200, g = (0.03, 0, 0, 0, 0, 0) when 200 ≤ iter < 300, g = (0, 0.03, 0, 0, 0, 0) when 300 ≤ iter < 400, and g = (0, 0, 0, 0, 0, 0) otherwise. The two components available in Pe are the translation along the y-axis and a combination of a translation along the x-axis and a rotation around the y-axis such that the image of the cylinder is not modified despite these motions.
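The iteration-dependent schedule of g above can be written as a small helper (illustrative; the function name `secondary_task` is ours):

```python
def secondary_task(it):
    """Secondary-task velocity g (m/s, camera frame) as scheduled in cases B1-B4."""
    if 1 <= it < 100:
        return (0.0, 0.0, 0.03, 0.0, 0.0, 0.0)    # forward along z
    if 100 <= it < 200:
        return (0.0, 0.0, -0.03, 0.0, 0.0, 0.0)   # backward along z
    if 200 <= it < 300:
        return (0.03, 0.0, 0.0, 0.0, 0.0, 0.0)    # along x
    if 300 <= it < 400:
        return (0.0, 0.03, 0.0, 0.0, 0.0, 0.0)    # along y
    return (0.0, 0.0, 0.0, 0.0, 0.0, 0.0)         # no secondary motion

print(secondary_task(50))    # (0.0, 0.0, 0.03, 0.0, 0.0, 0.0)
```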

(a) Initial image (b) Desired image

Figure 6.8 – Initial and desired camera images of obj2 (cases B1, B2, B3 and B4)

6.4.2.1 Cases B1 and B2, two tasks, classical task-priority

In this case, the control scheme used is given by:

q = −λ J1+ e1 + λ∗1 P∗ g    (6.56)

where P∗ represents the projection operator used (Pe or Pλ) and the adaptive gain function

λ∗1 is given by:

λ∗1 = |g[i0]| / |(P∗g)[i0]|    (6.57)

where i0 is the axis along which the projected secondary motion is the most significant. Using

Pe, motions along the x-axis and y-axis are projected, as depicted in Fig. 6.9, while no redundancy remains for projecting any motion along the z-axis. Using Pλ, the vector g is completely projected onto the x, y, and z components of v2, as depicted in Fig. 6.10. The results also show that the classical redundancy-based task-priority control scheme adds the motions needed by the secondary tasks to the motion needed by the main task, provided that the convergence of the main task is ensured.
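A sketch of the adaptive gain (6.57): i0 is taken here as the axis where the projected motion is largest in magnitude (our reading of the text), and the projector P is a toy diagonal matrix chosen so the effect of the gain is visible.

```python
import numpy as np

def adaptive_gain(g, P):
    """lambda*_1 = |g[i0]| / |(P g)[i0]|, with i0 the axis of the most
    significant projected secondary motion (cf. (6.57))."""
    Pg = P @ g
    i0 = int(np.argmax(np.abs(Pg)))
    return np.abs(g[i0]) / np.abs(Pg[i0])

# Hypothetical projector that halves every component: the gain compensates it
# so the dominant projected component recovers the requested 0.03 m/s.
P = 0.5 * np.eye(6)
g = np.array([0.0, 0.0, 0.03, 0.0, 0.0, 0.0])
lam = adaptive_gain(g, P)
print(lam, (lam * P @ g)[2])    # 2.0 0.03
```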



Figure 6.9 – Results for case B1: Pe and classical redundancy-based task-priority are considered; line 1: camera velocity components (cm/s and deg/s) of the general and main tasks respectively, and image parameter errors for the cylinder; line 2: the projected and the required additional motion in cm/s.


Figure 6.10 – Results for case B2: Pλ and efficient redundancy-based task-priority are considered.


6.4.2.2 Cases B3 and B4, two tasks, efficient task-priority

In this case, the task e2 is defined to produce some motion in the camera frame by e∗2 = g. This can be realized using control schemes (5.26) and (6.43) where, instead of replacing the Jacobian J2 by the identity matrix In as in [Hanafusa 81], which constrains all DOFs, we replace J2 by a diagonal matrix In having zero values in all components not used by e∗2 (see Section 6.3.3). By considering two tasks e1 and e2, the control schemes (5.26) and (6.43) are now given by:

q = q1 + λ∗2 (In P∗)+ (e∗2 − In q1)    (6.58)

where the adaptive gain λ∗2 is given by:

λ∗2 = |(e∗2 − In q1)[i0]| / |((In P∗)+ (e∗2 − In q1))[i0]|    (6.59)

Defining the adaptive gain λ∗2 by (6.59) ensures that the required motion is produced along the axis i0 of the camera.
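One step of (6.58)–(6.59) can be sketched numerically as follows. This is a simplified reconstruction: the function name is ours, In is built from the set of axes used by e∗2, and i0 is chosen as the axis of the most significant requested motion.

```python
import numpy as np

def efficient_priority_step(q1_dot, P, e2_dot, used_axes):
    """Sketch of (6.58): q = q1 + lam2 * (In P)^+ (e2* - In q1), where In is a
    diagonal selection matrix with ones only on the axes used by e2*."""
    n = len(q1_dot)
    In = np.diag([1.0 if i in used_axes else 0.0 for i in range(n)])
    A = np.linalg.pinv(In @ P)
    r = e2_dot - In @ q1_dot                  # motion still to be produced
    corr = A @ r
    i0 = int(np.argmax(np.abs(r)))            # axis of the most significant motion
    lam2 = np.abs(r[i0]) / np.abs(corr[i0])   # adaptive gain (6.59)
    return q1_dot + lam2 * corr

# With a motionless main task and full redundancy, the requested camera motion
# along the z-axis is produced exactly:
q = efficient_priority_step(np.zeros(6), np.eye(6),
                            np.array([0, 0, 0.03, 0, 0, 0.0]), {2})
print(q)
```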

Again, using Pe, no motion is produced along the z-axis, as depicted in Fig. 6.11, while using Pλ lets the three motions along x, y, and z be projected. Figure 6.12 shows that the projected motions required by the lower-priority task are perfectly performed by the global task components.

By comparing the results obtained in cases B1 and B2 with those obtained in cases B3 and B4, we conclude that, when classical task-priority is used, the projected motion corresponds to the desired one g on v2 but not on v, while when efficient task-priority is used the desired motion g is realized through v.



Figure 6.11 – Results for case B3: Pe and classical redundancy-based task-priority are considered.


Figure 6.12 – Results for case B4: Pλ and efficient redundancy-based task-priority are considered.


6.5 Conclusion

In this chapter we have proposed a new large projection operator for the redundancy framework. The new projection operator is obtained by considering the norm of the total error.

A switching strategy has been proposed to ensure that the new projection operator smoothly

switches to the classical projection operator as soon as the error nears zero. An adaptive gain has also been developed so that the secondary task remains effective over a longer period. That is useful when the secondary tasks really have to be taken into account (as is the case, for instance, for joint limits and obstacle avoidance).

The main interest of the new projection operator is that it is always at least of rank n − 1. Hence, it can be used even if the main task is of full rank. The new projection operator has been

employed within two different redundancy-based task-priority approaches. A comparative

study of results obtained using Pλ and Pe shows that Pλ provides more versatility when

the main task is not full rank. The effectiveness of the new projection operator has been

addressed when the task is defined in the camera space or in the articular space. All experiments have been implemented on a six-DOF robot arm.

These theoretical developments enlarge the applicability domain of the redundancy framework. They cover all relevant problems solved using the redundancy formalism when it is required to add system constraints (joint limits avoidance, as will be seen in the next chapter, obstacle avoidance, occlusion avoidance, etc.), to add supplementary tasks (trajectory following, etc.), or to manage the levels of priority among several tasks, using for instance the stack of tasks proposed in [Mansard 07].


Chapter 7

New joint limits avoidance strategy

In this chapter, we present a new redundancy-based strategy for avoiding the joint limits of a robot arm. This strategy is based on defining three functions for each joint: an activation function, an adaptive gain function, and a tuning function. These functions allow the required sign and the suitable magnitude of the avoidance process at each joint to be determined automatically. The problem of adding an additional task to the main task and the avoidance process is also considered and solved. As for the redundancy framework, the large projection operator based on the norm of the error proposed in the previous chapter is used to enlarge the redundancy domain for applying our avoidance strategy. The three functions defining the new avoidance strategy are also used with a kernel-based approach in order to decrease the discontinuity effect when a new joint nears its limits and when a joint becomes very close to one of its two limits. The experimental results obtained on a 6-DOF robot arm in eye-in-hand visual servoing show that the new avoidance strategy gives a smooth joint limits avoidance behavior without any tuning step. Using the new projection operator allows a significant improvement of the avoidance process, especially in the case of a full rank task function. The work described in this chapter led to the publication [Marey 10b].

7.1 Introduction

Joint limits avoidance is a classical and crucial issue in robot control. As already described in Section 5.4, redundancy has been widely used for solving this problem. The general solution obtained by this technique is a minimum norm solution together with a homogeneous solution, which is referred to as self-motion.

This chapter is organized as follows: in Section 7.2, the new avoidance strategy is presented and discussed. In Section 7.2.5, the problem of adding additional secondary tasks to the avoidance process is considered. In Section 7.3, the simultaneous avoidance of several joints is considered. Finally, the experimental results are presented in Section 7.4.

7.2 New joint limits avoidance strategy

We recall that the classical control law used to solve the joint limits avoidance problem is given by:

q = q1 + qa    (7.1)

where q1 and qa are the velocity vectors of the main task and of the avoidance task of the robot arm. We also recall that a safe robot configuration q is reached when ∀ji ∈ {j1, ..., jn}, qi ∈ [qmin0i, qmax0i], where

qmin0i = qmini + ρΔqi
qmax0i = qmaxi − ρΔqi    (7.2)

where ρ ∈ [0, 1/2] is a tuning parameter and Δqi = qmaxi − qmini.
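The safe interval of (7.2) is straightforward to compute; a minimal sketch (the function name is ours):

```python
def safe_interval(q_min, q_max, rho=0.25):
    """Activation thresholds of (7.2): a joint configuration is safe while
    q lies in [q_min + rho*Dq, q_max - rho*Dq], with Dq = q_max - q_min."""
    dq = q_max - q_min
    return q_min + rho * dq, q_max - rho * dq

# A joint with limits [-2, 2] rad and rho = 0.25 activates the avoidance
# task outside [-1, 1]:
q0_min, q0_max = safe_interval(-2.0, 2.0)
print(q0_min, q0_max)    # -1.0 1.0
```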

The new proposed scheme for qa is given by:

qa = Σ_{i=1}^{n} qia = − Σ_{i=1}^{n} λseci λi P gi    (7.3)

where P is either the classical projection operator Pe or the new one Pλ, λseci is an adaptive gain function to control the magnitude of the avoidance task, λi is a tuning function to ensure the smoothness of the injection of the avoidance task into the main task, and gi is a vector indexing function that controls the activation of the avoidance task and determines its sign.

7.2.1 Activation and sign function

If the configuration is not safe due to the joint ji, a secondary task is activated by the vector gi = (gi[1], gi[2], ..., gi[n]) defined by:

gi[i0] = −1, if qi < qmin0i and i = i0
          1, if qmax0i < qi and i = i0
          0, else    (7.4)

where i0 ∈ {1, 2, ..., n}. If gi[i] = 0, no avoidance task for the joint ji is activated. The values 1 and −1 for gi[i] determine the sign of the avoidance task. If gi[i] = 1, the avoidance task is negative and causes the joint to move away from its maximum limit qmaxi. If gi[i] = −1, the avoidance task is positive and causes the joint to move away from its minimum limit qmini. This can be explained by recalling that the diagonal elements of the projection operator are non-negative. Since the vector gi for the joint ji that nears its limits has only one non-zero element, at its ith component, the sign of (P gi)[i] is the sign of the ith component of gi, since both λseci and λi are positive, as we will see in the following.
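The activation and sign rule (7.4) can be sketched directly (illustrative; the helper name and the two-joint example are ours):

```python
import numpy as np

def activation_vector(i0, q, q0_min, q0_max):
    """Activation/sign vector g_i of (7.4) for joint i0: a single +/-1 entry
    when joint i0 leaves its safe interval [q0_min[i0], q0_max[i0]]."""
    g = np.zeros(len(q))
    if q[i0] < q0_min[i0]:
        g[i0] = -1.0   # too close to the minimum limit -> positive avoidance motion
    elif q[i0] > q0_max[i0]:
        g[i0] = 1.0    # too close to the maximum limit -> negative avoidance motion
    return g

q0_min, q0_max = np.array([-1.6, -1.6]), np.array([1.6, 1.6])
print(activation_vector(0, np.array([1.8, 0.0]), q0_min, q0_max))   # [1. 0.]
print(activation_vector(1, np.array([0.0, 0.5]), q0_min, q0_max))   # [0. 0.]
```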


7.2.2 Tuning function

The tuning function λi(qi) depicted in Figure 7.1 is used as a scaling parameter for the secondary task, to control its injection into the main task such that the final behavior of the system avoids any sudden movement due to the secondary task. For the joint under consideration ji, we define this tuning function based on two sigmoid functions as follows:

λi(qi) = 1, if qi < qmin1i or qmax1i < qi
         (λmini(qi) − λmin0i) / (λmin1i − λmin0i), if qmin1i ≤ qi ≤ qmin0i
         (λmaxi(qi) − λmax0i) / (λmax1i − λmax0i), if qmax0i ≤ qi ≤ qmax1i
         0, if qmin0i < qi < qmax0i    (7.5)

where, as previously defined, qmin0i and qmax0i are the threshold values at which the secondary avoidance task starts, while qmin1i = qmin0i − ρ1ρΔqi and qmax1i = qmax0i + ρ1ρΔqi are the threshold values at which the smoothing function equals one, so that the avoidance task is totally injected into the main task, with ρ1 ∈ ]0, 1]. λmini(qi) : R → R and λmaxi(qi) : R → R are two continuous monotonically increasing functions such that λmax1i = λmaxi(qmax1i) ≈ 1, λmax0i = λmaxi(qmax0i) ≈ 0, λmin1i = λmini(qmin1i) ≈ 1 and λmin0i = λmini(qmin0i) ≈ 0.

Figure 7.1 – Tuning function λi(qi) for the joint ji

A good selection for the functions λmaxi(qi) and λmini(qi) can be obtained using a sigmoid function as:

i (qi) can be obtained using a sigmoid function as:

λmaxi (qi) =

1

1 + exp(−12 qi−qmax�0i

qmax�1i −qmax

�0i+ 6)

(7.6)

λmini (qi) =

1

1 + exp(−12 qi−qmin�0i

qmin�1i −qmin

�0i+ 6)

(7.7)

Introducing qmin1i and qmax1i gives the following advantage: by selecting these values different from qmini and qmaxi respectively, the joint will never reach its limit, so that there is no need to predict the next joint position in order to avoid passing the joint limit.
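The tuning function (7.5) with the sigmoids (7.6)–(7.7) can be sketched as follows; this is our reconstruction, where the sigmoid is normalized so the ramp is exactly 0 and 1 at the thresholds (the approximations ≈ 0 and ≈ 1 in the text).

```python
import math

def sigmoid_weight(q, q0, q1):
    """Normalized sigmoid ramp: 0 at q0 and 1 at q1 (cf. (7.6)-(7.7));
    works for both orderings of q0 and q1."""
    s = lambda x: 1.0 / (1.0 + math.exp(-12.0 * (x - q0) / (q1 - q0) + 6.0))
    return (s(q) - s(q0)) / (s(q1) - s(q0))

def tuning(q, q0_min, q1_min, q0_max, q1_max):
    """Tuning function lambda_i(q) of (7.5): 0 inside the safe interval,
    a smooth ramp inside the critical intervals, 1 beyond q1_min / q1_max."""
    if q < q1_min or q > q1_max:
        return 1.0
    if q1_min <= q <= q0_min:
        return sigmoid_weight(q, q0_min, q1_min)
    if q0_max <= q <= q1_max:
        return sigmoid_weight(q, q0_max, q1_max)
    return 0.0

# Safe interval [-1.6, 1.6], full activation beyond +/-1.8:
print(tuning(0.0, -1.6, -1.8, 1.6, 1.8))   # 0.0
print(tuning(1.9, -1.6, -1.8, 1.6, 1.8))   # 1.0
```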


7.2.3 Adaptive gain function

The adaptive gain function λseci is used to adapt the magnitude of |qia[i]| in order to compensate the corresponding component of the main task near a joint limit. For the joint ji, λseci is defined from the current value of the ith component of both q1 and (P gi). It is defined by:

λseci = (1 + λi) |q1[i]| / |(P gi)[i]|, if gi[i] ≠ 0
        0, else    (7.8)

where λi ≥ 0.

An advantage of using this gain function is that the value of qia given by (7.3) is a function of q1, so that, at any time step, the value of qia is compatible with the value of q1 sent to the joint ji. It can be demonstrated that using the adaptive gain function λseci ensures that the ith velocity components of the main task and of the avoidance task satisfy the inequality |qia[i]| > |q1[i]| before the limit qmaxi or qmini is reached. Indeed, since near a joint limit λi(qi) = 1 and λseci = (1 + λi) |q1[i]| / |(P gi)[i]|, then by recalling (7.3) we obtain |qia[i]| = |−(1 + λi) q1[i]| > |q1[i]| where λi > 0. Therefore, the gain function λseci ensures that the joint limit will never be reached, this property being obtained without any gain tuning step.

7.2.4 Behavior analysis of qia

Now, we study more precisely the behavior of qia for avoiding the limits of the joint ji. By injecting (7.8) and (7.5) in (7.3), we get:

qia = −(1 + λi) |q1[i]| / |(P gi)[i]| P gi, if C1
      −λi(qi) (1 + λi) |q1[i]| / |(P gi)[i]| P gi, if C2
      0, if C3    (7.9)

where

C1 ≡ (qi < qmin1i or qmax1i < qi)    (7.10)
C2 ≡ (qmin1i ≤ qi ≤ qmin0i or qmax0i ≤ qi ≤ qmax1i)
C3 ≡ (qmin0i < qi < qmax0i)

By investigating (7.9), we can study the behavior of the ith component of the secondary task |qia[i]| (see Figure 7.2). Within the two intervals [qmax0i, qmax1i] and [qmin1i, qmin0i], the value of |qia[i]| changes continuously from 0 to |(1 + λi) q1[i]| as the value qi of the joint ji changes continuously from qmax0i to qmax1i and from qmin0i to qmin1i respectively. Within the two intervals [qmax1i, qmaxi] and [qmini, qmin1i], the value of |qia[i]| is greater than or equal to |q1[i]|. The value of the constant λi is used to tune the difference in magnitude between |q1[i]| and |qia[i]|. For example, if λi = 0, we get |qia[i]| = |q1[i]| as soon as the value of the joint ji reaches qmax1i or qmin1i, while if λi = 1, as soon as the value of qi approaches qmax1i or qmin1i, we obtain |qia[i]| = 2|q1[i]|. Figure 7.2 illustrates the relations between the ith velocity component of qa and the ith velocity component of q1 for all values of qi ∈ [qmini, qmaxi]. Red and blue arrows mean that the corresponding velocity component causes the joint to move away from its minimum and maximum limit respectively. This investigation ensures that, near a limit of the joint ji, |qia[i]| ≥ |q1[i]|.

Figure 7.2 – Comparison between the magnitudes of qa[i] and q1[i] for the different values of qi of the joint ji when λi > 0; illustration of the relation between the directions of q1[i], qia[i], and q[i].
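The fully activated branch of (7.9) can be sketched as follows; an illustrative reconstruction, with the function name ours and an identity projector chosen so the scaling of the adaptive gain (7.8) is easy to read off.

```python
import numpy as np

def avoidance_velocity(i, q1_dot, P, g_i, lam_tune, lam_const=1.0):
    """Sketch of the avoidance velocity q_a^i of (7.9): the adaptive gain (7.8)
    scales the projected direction P g_i so that the ith component reaches
    lam_tune * (1 + lam_const) * |q1[i]|, opposing the motion toward the limit."""
    Pg = P @ g_i
    if g_i[i] == 0.0 or Pg[i] == 0.0:
        return np.zeros_like(q1_dot)          # joint safe or direction unavailable
    lam_sec = (1.0 + lam_const) * abs(q1_dot[i]) / abs(Pg[i])
    return -lam_tune * lam_sec * Pg

# Joint 2 near its maximum limit (g[2] = 1), full tuning (lam_tune = 1),
# identity projector: the avoidance component is twice |q1[2]| and opposed to it.
q1_dot = np.array([0.0, 0.0, 0.2, 0.0, 0.0, 0.0])
g = np.zeros(6); g[2] = 1.0
qa = avoidance_velocity(2, q1_dot, np.eye(6), g, lam_tune=1.0)
print(qa[2])    # -0.4
```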


7.2.5 Additional secondary tasks

7.2.5.1 Additional secondary tasks having the same priority level

Now, we consider the case where three tasks have to be performed by the system: the main task, the secondary task for robot joint limits avoidance, and an additional secondary task. In this case, the control law is defined as:

q = q1 + qa + q2 (7.11)

where q1 is the velocity vector due to the main task, qa is the velocity vector due to the joint limits avoidance, and q2 is the velocity vector producing the required secondary task.

More precisely, we get:

q = −λ1 Je+ e1 − Σ_{i=1}^{n} λaseci λi P gi − λ2 P g2    (7.12)

Since the additional secondary task q2 can lead to reaching a joint limit, it is necessary to change the definition of λaseci in (7.8) to take q2 into account. In that case, λaseci is defined by:

λaseci = (1 + λi) |q∗[i]| / |(P gi)[i]|, if gi[i] ≠ 0
         0, else    (7.13)

where q∗ = q1 + q2. This form of the adaptive gain ensures that the total velocity component is taken into account by qa for the joint limits avoidance. The avoidance task in this case is given by:

qia = −(1 + λi) |q∗[i]| / |(P gi)[i]| P gi, if C1
      −λi(qi) (1 + λi) |q∗[i]| / |(P gi)[i]| P gi, if C2
      0, if C3    (7.14)

To analyze the behavior of the avoidance task qia near a joint limit, it is sufficient to replace q1 by q∗, which leads to the same behavior as the one studied in Section 7.2.4.

Finally, when more additional secondary tasks have to be considered, the avoidance task (7.14) can be used by setting:

q∗ = Σ_{k=1}^{kmax} qk    (7.15)

and the global control scheme is given by:

q = q∗ + Σ_{i=1}^{n} qia    (7.16)


7.2.5.2 Additional secondary tasks having several priority levels

When a global task consists of l tasks having different priorities, the avoidance task qia is given by:

qia = −(1 + λi) |ql[i]| / |(P gi)[i]| P gi, if C1
      −λi(qi) (1 + λi) |ql[i]| / |(P gi)[i]| P gi, if C2
      0, if C3    (7.17)

where ql is the articular velocity performing the execution of the l tasks with their required priority, computed using (6.43). The global control scheme is:

q = ql + Σ_{i=1}^{n} qia    (7.18)

Let us finally note that, as in the classical approaches, when several joints are not in their safe configuration, a potential problem may occur if the motions required to avoid a limit of one joint move another joint nearer to one of its limits. However, the new avoidance approach behaves better than the classical one. Indeed, on the one hand, the classical approach induces very high values of the avoidance task components qa when a joint is very close to its limit. On the other hand, the new approach ensures a smooth behavior and a decrease of the velocity qa[i] when a joint nears its limit, which reduces the amplitude of the motions projected onto the other joints, and thus the possibility that other joints near their limits because of these motions.

7.3 Integrating the new joint limits avoidance with the kernel approach

In this section, we deal with the problem of avoiding several joint limits simultaneously. To do so, we integrate the activation and sign function and the tuning function previously presented in Section 7.2, as well as a new adaptive gain function, into the method given in [Chaumette 01]. This ensures the smoothness of the injection of the avoidance task into the main task, with a reasonable magnitude and a correct direction. We also propose a method to add additional secondary tasks to the proposed avoidance control system so that the final control scheme still ensures that the joint limits will not be reached. Finally, we present a solution to the discontinuity effect of that approach.

7.3.1 Gain vector a

We recall that the ith component of the control (5.56) can be written as:

$$\dot{q}[i] = -\lambda (J^+ e_1)[i] + \sum_{k=1}^{n_a} a_k E_{ik} = \dot{q}_1[i] + \sum_{k=1}^{n_a} a_k E_{ik} \qquad (7.19)$$

where the value of the gain vector a is computed to ensure that the control stops any motion of the joint $j_i$ toward its limits.


Here, the value of the gain vector a is computed so that it ensures the same behavior obtained with our avoidance method proposed in Section 7.2. Let us consider that several joints are within their critical intervals and near their limits. The global task is given by:

$$\dot{q} = \dot{q}_1 + \dot{q}_a \qquad (7.20)$$

where $\dot{q}_a$ is the new proposed avoidance scheme. It is defined by:

$$\dot{q}_a = -\sum_{i=1}^{n} g_i\, \lambda_i(\dot{q}_i)\, \lambda_{sec_i} \qquad (7.21)$$

where $g_i$ activates the avoidance and determines its direction, and $\lambda_i(\dot{q}_i)$ allows a smooth behavior, as explained in Sections 7.2.1 and 7.2.2 respectively. $\lambda_{sec_i}$ is a new adaptive gain function that determines the magnitude of the avoidance task; it is defined by:

$$\lambda_{sec_i} = (1+\lambda_i)\, |\dot{q}_1[i]| \,, \quad \lambda_i \ge 0 \qquad (7.22)$$

Our proposed method to determine a consists in defining the velocity $\dot{q}[i]$ of the joint $j_i$ as:

$$\dot{q}[i] = \dot{q}_1[i] + \dot{q}_a[i] \qquad (7.23)$$

To compute the vector a which ensures that the velocity of each joint near its limits is equal to $\dot{q}[i]$ given by (7.23) and satisfies (7.19), we set:

$$\sum_{k=1}^{n_a} a_k E_{ik} = \dot{q}_a[i] = -g_i[i]\, \lambda_i(\dot{q}_i)\, (1+\lambda_i)\, |\dot{q}_1[i]| \qquad (7.24)$$

If several joints are near their limits, the gain vector a that ensures (7.24) for all of those joints is the solution of the system of linear equations defined by:

$$\begin{bmatrix} \vdots \\ E_{i\bullet} \\ \vdots \end{bmatrix} a = \begin{bmatrix} \vdots \\ -g_i[i]\, \lambda_i(\dot{q}_i)\, (1+\lambda_i)\, |\dot{q}_1[i]| \\ \vdots \end{bmatrix} \;\equiv\; A\, a = b \qquad (7.25)$$

The solution of (7.25) has the form $a_0 = A^+ b$. Finally, the control law is:

$$\dot{q} = \dot{q}_1 + \sum_{k=1}^{n_a} a^*_{0_k} E_{\bullet k} \qquad (7.26)$$

The velocity given by (7.26) ensures a smooth behavior for each joint near its joint limits when injecting the avoidance into the main task.
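The computation of the gain vector a in (7.24)–(7.26) amounts to a small least-squares problem. The following Python sketch (function and argument names are illustrative, not from the thesis) solves A a = b with a Moore–Penrose pseudo-inverse:

```python
import numpy as np

def kernel_avoidance_gains(E, qdot1, critical, g_sign, tuning, lam):
    """Solve Eq. (7.25) for the gain vector a (sketch, names illustrative).

    E        : n x na kernel basis, so the avoidance motion is E @ a
    qdot1    : main-task velocity q1_dot
    critical : indices of joints currently near a limit
    g_sign   : per-joint activation/sign values g_i[i]
    tuning   : per-joint tuning values lambda_i(qdot_i)
    lam      : per-joint non-negative gains lambda_i
    """
    A = E[critical, :]                               # rows E_i. of (7.25)
    b = np.array([-g_sign[i] * tuning[i] * (1.0 + lam[i]) * abs(qdot1[i])
                  for i in critical])                # right-hand side of (7.24)
    return np.linalg.pinv(A) @ b                     # a0 = A^+ b

def global_control(E, qdot1, a0):
    """Control law of Eq. (7.26): q_dot = q1_dot + sum_k a_k E_.k"""
    return qdot1 + E @ a0
```

With a single critical joint, the resulting avoidance component exactly cancels the main-task velocity on that joint, as required by (7.24) when the tuning function equals 1.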


7.3.2 Gain vector a and additional secondary tasks

In this section, we deal with the joint limit avoidance problem when some additional tasks

should be considered. If these secondary tasks are given by $e_2, e_3, \ldots, e_{d_{max}}$ and their velocity vectors are computed as $\dot{q}_2, \dot{q}_3, \ldots, \dot{q}_{d_{max}}$, then the global task is given by:

$$\dot{q} = \dot{q}_1 + \dot{q}_a + \sum_{d=2}^{d_{max}} \dot{q}_d \qquad (7.27)$$

and the ith component of $\dot{q}$ is:

$$\dot{q}[i] = \dot{q}_a[i] + \sum_{d=1}^{d_{max}} \dot{q}_d[i] \qquad (7.28)$$

In this case, the required avoidance task is given by:

$$\dot{q}_a = -\sum_{i=1}^{n} g_i\, \lambda_i(\dot{q}_i)\, \lambda^a_{sec_i} \qquad (7.29)$$

where $\lambda^a_{sec_i}$ is defined by:

$$\lambda^a_{sec_i} = (1+\lambda_i) \left| \sum_{d=1}^{d_{max}} \dot{q}_d[i] \right| \,, \quad \lambda_i \ge 0 \qquad (7.30)$$

To compute the vector a while taking into account the velocity components $\sum_{d=2}^{d_{max}} \dot{q}_d[i]$ due to the additional secondary tasks, we need to ensure that:

$$\sum_{k=1}^{n_a} a_k E_{ik} = \dot{q}_a[i] = -g_i\, \lambda_i(\dot{q}_i)\, (1+\lambda_i) \left| \sum_{d=1}^{d_{max}} \dot{q}_d[i] \right| \qquad (7.31)$$

Again, if several joints are near their limits, the gain vector a is computed by solving the system of linear equations constructed from (7.31) written for each such joint:

$$\begin{bmatrix} \vdots \\ E_{i\bullet} \\ \vdots \end{bmatrix} a = \begin{bmatrix} \vdots \\ -g_i[i]\, \lambda_i(\dot{q}_i)\, (1+\lambda_i) \left| \displaystyle\sum_{d=1}^{d_{max}} \dot{q}_d[i] \right| \\ \vdots \end{bmatrix} \;\equiv\; A\, a = b \qquad (7.32)$$

If the solution of (7.32) is $a_1 = A^+ b$, then the global control scheme is:

$$\dot{q} = \dot{q}_1 + \sum_{k=1}^{n_a} a^*_{1_k} E_{\bullet k} + \sum_{d=2}^{d_{max}} \dot{q}_d \qquad (7.33)$$
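The extension (7.31)–(7.33) only changes the right-hand side b, which now uses the summed task velocities. A hedged Python sketch (names are illustrative, not from the thesis):

```python
import numpy as np

def avoidance_with_secondary(E, qdots, critical, g_sign, tuning, lam):
    """Sketch of Eqs. (7.31)-(7.33).

    qdots    : list [q1_dot, q2_dot, ..., qdmax_dot] of task velocities
    E        : n x na kernel basis; other arguments as in Section 7.3.1
    """
    total = np.sum(qdots, axis=0)                    # sum_{d=1}^{dmax} qd_dot
    A = E[critical, :]                               # one row E_i. per critical joint
    b = np.array([-g_sign[i] * tuning[i] * (1.0 + lam[i]) * abs(total[i])
                  for i in critical])                # right-hand side of (7.32)
    a1 = np.linalg.pinv(A) @ b                       # a1 = A^+ b
    # (7.33): q1_dot + E a1 + sum_{d>=2} qd_dot, i.e. total + E a1
    return total + E @ a1
```

When the tuning function equals 1 on a critical joint, the avoidance component cancels the total velocity of that joint, including the contribution of the secondary tasks.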


7.3.3 Transition switching inner loop to solve the discontinuity problem

In this section, we propose a solution to the discontinuity problem occurring in the iterative method when a joint enters or leaves its critical situation [Chaumette 01]. This solution consists in introducing a transition switching inner loop inside the visual servoing control loop.

Let us assume that the system velocity is given by $\dot{q}^{t_{k-1}}$ during iteration k−1 of the visual servoing loop, where $t_{k-1}$ is the starting time of iteration k−1. If the basic method is applied (see Section 5.4.3) and a new joint $j_i$ enters its critical configuration during iteration k−1, a discontinuity in the system behavior occurs when the newly computed velocity $\dot{q}^{t_k}$ is sent to the robot controller at the beginning of iteration k, i.e. at time $t_k$, as illustrated in Fig. 7.3.

Figure 7.3 – Inner switching loop

Instead of sending $\dot{q}^{t_k}$ directly to the robot controller, we apply a transition switching from $\dot{q}^{t_{k-1}}$ to $\dot{q}^{t_k}$ using the following convex combination:

$$\dot{q}_\lambda = \left(1-\lambda(n_{iter})\right)\, \dot{q}^{t_{k-1}} + \lambda(n_{iter})\, \dot{q}^{t_k} \qquad (7.34)$$

where $n_{iter}$ is the iteration number of the inner loop and the switching function $\lambda(n_{iter})$ can be defined as:

$$\lambda(n_{iter}) = \frac{\hat{\lambda}(n_{iter}) - \hat{\lambda}(0)}{\hat{\lambda}(N_{iter}) - \hat{\lambda}(0)} \qquad (7.35)$$


where the raw sigmoid profile is:

$$\hat{\lambda}(n_{iter}) = \frac{1}{1 + \exp\!\left(-12\, \dfrac{n_{iter}}{N_{iter}} + 6\right)} \qquad (7.36)$$

To ensure that the switching from $\dot{q}^{t_{k-1}}$ to $\dot{q}^{t_k}$ is completed during iteration k of the outer loop, the number of iterations $N_{iter}$ of the inner loop can easily be specified such that $N_{iter} < \Delta t_{vs} / \Delta t_r$, where $\Delta t_{vs}$ is the duration of a visual servoing iteration and $\Delta t_r$ is the time required to execute a single velocity command by the robot.
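The inner switching loop of (7.34)–(7.36) can be prototyped directly. In this sketch (function names are ours), the raw sigmoid is normalized so the blend starts exactly at the old velocity and ends exactly at the new one:

```python
import numpy as np

def switch_profile(n_iter, N_iter):
    """Normalized sigmoid switching function, Eqs. (7.35)-(7.36):
    equals 0 at n_iter = 0 and 1 at n_iter = N_iter."""
    s = lambda n: 1.0 / (1.0 + np.exp(-12.0 * n / N_iter + 6.0))
    return (s(n_iter) - s(0)) / (s(N_iter) - s(0))

def inner_loop_blend(q_prev, q_new, N_iter):
    """Sequence of convex combinations of Eq. (7.34) sent to the robot
    during one outer visual servoing iteration."""
    return [(1.0 - switch_profile(n, N_iter)) * q_prev
            + switch_profile(n, N_iter) * q_new
            for n in range(N_iter + 1)]
```

Normalizing by $\hat{\lambda}(0)$ and $\hat{\lambda}(N_{iter})$ matters: the raw sigmoid is only approximately 0 and 1 at the interval ends, so without (7.35) a small residual jump would remain at both switching instants.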

7.4 Results

Set A: New avoidance strategy

The joint limits avoidance strategy proposed in Section 7.2 has been applied to a six-DOF

Cartesian robot (see Fig. 3.8) for a full-rank main task using obj1. The lower and upper limits of each joint are $q^{min}_1 = -0.7$, $q^{max}_1 = 0.7$, $q^{min}_2 = -0.6$, $q^{max}_2 = 0.63$, $q^{min}_3 = -0.5$, $q^{max}_3 = 0.46$, $q^{min}_4 = -156.41$, $q^{max}_4 = 156.0$, $q^{min}_5 = -5.73$, $q^{max}_5 = 142.0$, $q^{min}_6 = -91.0$, and $q^{max}_6 = 91.0$, where $q_1$, $q_2$ and $q_3$ are in meters and $q_4$, $q_5$ and $q_6$ are in degrees. The initial pose between the camera and the object has been chosen such that the camera is at a distance of 0.5 m from obj1 and the optical axis of the camera passes vertically through the center of obj1: the initial camera pose is (0,0,0.5,0,0,130) for cases {A1, A2} and (0,0,0.5,0,0,100) for case A3. The desired camera pose is (0,0,0.5,0,0,0), which corresponds to a pure rotation of 130 and 100 degrees, respectively, around the camera optical axis. These initial and desired configurations have been selected such that the main task causes the robot to perform a backward motion when Cartesian image point coordinates are selected as visual features. Since the initial configuration of the robot system is set such that the optical axis of the camera is parallel to the joint $j_3$ of the robot, a limit of the joint $j_3$ is reached.

All limit thresholds $q^{min}_{0_i}$ and $q^{min}_{1_i}$ are computed using the parameters $\rho = 0.1$ and $\rho_1 = 0.5$, as explained in Section 7.2, except for the lower limit of the 5th joint. Indeed, since the articular position of the initial robot configuration in our experiment is $q = (-0.51, 0.09, -0.02, -128.89, 1.46, 0.01)$, we set $q^{min}_{0_5} = -2$ and $q^{min}_{1_5} = -5$ so that $j_5$ is not in a critical situation at the beginning of the experiments. The avoidance task and the secondary task describing a trajectory to follow in the articular space are projected using the new projection operator $P_\lambda$. Let us recall that it is impossible to consider joint limits avoidance with the classical projection operator $P_e$ in all cases presented here, since $P_e = 0$.

7.4.1 Case A1: Single joint limit avoidance

In this case, the system starts its movement, as expected from control (7.1), with a backward motion. When the threshold value $q^{min}_{0_3}$ is reached, the avoidance is activated for the joint $j_3$ and the robot avoids reaching $q^{min}_3$. As depicted in Figure 7.4, $q_a$ computed using (7.3)


Figure 7.4 – Results for case A1 (single joint limit avoidance). Line 1 (a,b,c): image point trajectories, image point errors, and articular value and joint limits of robot joint q3. Line 2 (a,b,c): articular velocity components, in cm/s and deg/s, of the global task q, the main task q1 (high priority) and the secondary task q2 (low priority), respectively.

starts to take a non-zero value in the direction opposite to the main task within the iteration interval (250, 420). During this interval, the velocity component of the global task $\dot{q}$ for the joint $j_3$ is approximately equal to zero because of the effect of $q_a$, while the rotation around the revolute joint $j_4$, which corresponds to the camera optical axis, is performed by the main task. Then, the direction of $\dot{q}_1[3]$ of the main task is inverted near iteration 420, causing the robot to move away from its joint limit; this is why $q_a$ vanishes, since it has become useless, even though $q_3 \in [q^{min}_{0_3}, q^{min}_3]$ within the iteration interval (420, 500). This can be seen in Figure 7.4 from the values of the third component of $q_a$ and of $q_3$. Finally, the system converges to its desired position while ensuring the exponential decrease of $\|e\|$ all along the task.

7.4.2 Case A2: Additional secondary task sent to a revolute joint, classical task-priority

Now, we consider the case where an additional task describing a trajectory to follow in the articular space induces an articular velocity g. A sinusoidal motion defined by $f(t) = -\frac{3\pi}{180}\sin\!\left(\frac{\pi t}{180}\right)$ on the revolute joint $j_5$ is chosen as additional secondary task, causing two joints to near their limits simultaneously. During the first 360 iterations, the vector of this secondary task is given by $g = (0, 0, 0, 0, f(t), 0)$, where t is the iteration number. Control scheme (7.11) is used, where $q_a$ is given by (7.14) and $\dot{q}_2 = g$. The obtained results, depicted in Fig. 7.5 (in which q2_dot = $q_a + \dot{q}_2$), show that the system succeeds in avoiding two joint limits simultaneously: a limit of $j_3$ at $q^{min}_{0_3}$ due to the main task and a limit


Figure 7.5 – Results for case A2 (classical priority; two joint limits avoidance; additional secondary task sent to the revolute joint q5). Line 1 (a,b,c): image point trajectories, image point errors, and articular value and joint limits of robot joint q5. Line 2 (a,b,c): articular velocity components, in cm/s and deg/s, of the global task q, the main task q1 (high priority) and the secondary task q2 (low priority), respectively; q2_dot = qa + q2.

of $j_5$ at $q^{min}_{0_5}$ due to the additional task. Again, as illustrated in Fig. 7.5, the system maintains the exponential decrease of the norm of the total error while projecting the additional secondary task g and avoiding the joint limits.
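A minimal generator for the case A2 secondary task, assuming t counts iterations as in the text (function and parameter names are illustrative):

```python
import math

def sinusoidal_secondary_task(t, active_iters=360):
    """Articular velocity g of case A2 (sketch):
    f(t) = -(3*pi/180) * sin(pi*t/180) applied to the revolute joint j5
    (component index 4) during the first 360 iterations, zero elsewhere."""
    g = [0.0] * 6
    if t < active_iters:
        g[4] = -(3.0 * math.pi / 180.0) * math.sin(math.pi * t / 180.0)
    return g
```

The period of 360 iterations means the joint performs exactly one full sinusoidal oscillation before the task is deactivated.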

7.4.3 Case A3: Additional secondary task sent to a revolute joint, efficient task-priority

We now consider the efficient task-priority approach with the same additional secondary task as before. The control scheme managing $e_1$ and $e_2$ is similar to (6.58), except that it is defined in the articular space. Using I in (6.58) indicates that, if the $i_0$th joint $j_{i_0}$ is not used by g, then $g[i_0]$ will not be modified by $\dot{q}_1[i_0]$ before being projected by $(I_n P^*_1)^+$. This prevents any unnecessary motion when $\dot{q}_1[i_0] = 0$. In this case, the higher-priority task is the visual servoing task, while the lower-priority additional task is given by $g = (0, 0, 0, 0, f(t), 0)$, where $f(t) = \frac{3\pi}{180}\sin\!\left(\frac{\pi t}{180}\right)$. It is activated during the first 360 iterations. The initial camera


Figure 7.6 – Results for case A3 (avoidance and additional secondary task sent to the revolute joint q5). Line 1 (a,b,c): image point trajectories, image point errors, and articular value and joint limits of robot joint q3. Line 2 (a,b,c): articular velocity components, in cm/s and deg/s, of the global task q, the main task q1 (high priority) and the secondary task q2 (low priority), respectively; q2_dot = qa + q2.

pose is (0,0,0.5,0,0,100). A lower joint limit of $j_3$ is virtually set to 15 degrees. Applying (6.58) to $\dot{q}_1$ and g gives $\dot{q}_l$, which is used in (7.14) to compute the avoidance task. The obtained results, depicted in Fig. 7.6, show that the system succeeds in avoiding $j_3$ at $q^{min}_{0_3}$ and in keeping the exponential decrease of $\|e\|$ while projecting the articular velocity task g using the efficient redundancy-based task-priority approach. As depicted in Fig. 7.6, the motion required by $g[4]$ appears on the corresponding component $\dot{q}[4]$ of the global task.

7.4.4 Case A4: Additional secondary task sent to prismatic joints, classical task-priority

In this case, a motion that tries to move the end-effector along a square of side 0.03 in the xy-plane of the articular frame is specified as additional secondary task. The vector $g_3$ that produces this motion is defined as $g_3 = (0.03, 0, 0, 0, 0, 0)$ when $1 < iter < 250$, $g_3 = (0, -0.03, 0, 0, 0, 0)$ when $250 < iter < 500$, $g_3 = (-0.03, 0, 0, 0, 0, 0)$ when $500 < iter < 750$, $g_3 = (0, 0.03, 0, 0, 0, 0)$ when $750 < iter < 1000$, and $g_3 = (0, 0, 0, 0, 0, 0)$ otherwise. It concerns only the two prismatic joints $j_1$ and $j_2$. As depicted in Fig. 7.7, q2_dot represents the avoidance task combined with the projection of $g_3$. As expected, secondary motions along the x-axis and y-axis are produced during the first 1000 iterations due to the projection of the vector $g_3$. Even though the error of each feature loses its exponential decrease, mainly because of the effect of $\dot{q}_3$, the servo system keeps the exponential decrease of the norm of the total error, as expected thanks to $P_{\|e\|}$. Again, the visual servoing task is


Figure 7.7 – Results for case A4 (avoidance and additional secondary task sent to the prismatic joints); q2_dot = q2 + q3. Line 1 (a,b,c): image point trajectories, image point errors, and the norm of the total error. Lines 2 and 3 (a,b,c): articular velocity components of the prismatic and revolute joints, respectively, in cm/s and deg/s, for the global task q, the main task q1 (high priority) and the secondary task q2 (low priority). Line 4 (a,b,c): articular values of the six joints, and joint limits of robot joints q3 and q5.

performed successfully by avoiding the joint limit of j3.
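The square-cycle secondary task of case A4 is easy to reproduce; here is a sketch keyed to the iteration windows quoted above (interpreting the constant 0.03 as an articular velocity on the prismatic joints, which is an assumption):

```python
def square_secondary_task(it):
    """Case A4 secondary task g3 (sketch): a square cycle on the two
    prismatic joints j1, j2 in the articular xy-plane, one side per
    250-iteration window, then zero after iteration 1000."""
    if 1 < it < 250:
        return (0.03, 0.0, 0.0, 0.0, 0.0, 0.0)    # +x side
    if 250 < it < 500:
        return (0.0, -0.03, 0.0, 0.0, 0.0, 0.0)   # -y side
    if 500 < it < 750:
        return (-0.03, 0.0, 0.0, 0.0, 0.0, 0.0)   # -x side
    if 750 < it < 1000:
        return (0.0, 0.03, 0.0, 0.0, 0.0, 0.0)    # +y side
    return (0.0,) * 6                              # task inactive
```

Since the four sides have equal duration and opposite velocities cancel pairwise, the net articular displacement contributed by $g_3$ over the full cycle is approximately zero.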

7.4.5 Case A5: Additional secondary task sent to prismatic joints, efficient task-priority

In this case, as in case A3, a lower joint limit of $j_3$ is virtually set to 15 degrees. A secondary task similar to that of case A4 is considered. As depicted in Fig. 7.8, q2_dot represents


Figure 7.8 – Results for case A5 (avoidance and additional secondary task); q2_dot = q2 + q3, where q2 considers the task priority. Same layout as Fig. 7.7.

the avoidance task combined with the projection of g. As expected, Fig. 7.8 shows that secondary motions along the x-axis and y-axis are produced during the first 400 iterations, the priority being taken into account when projecting the vector g. As expected, the servo system keeps the exponential decrease of the norm of the total error. The joint limit of $j_3$ is avoided and the visual servoing task is performed successfully.


Set B: New avoidance strategy integrated with the kernel approach

In this set of experiments, the new avoidance strategy integrated with the kernel approach, which deals with the problem of avoiding several joint limits simultaneously, is implemented on a six-DOF Viper robot arm (see Fig. 7.9). The lower and upper limits of each joint are $q^{min}_1 = -50$, $q^{max}_1 = 50$, $q^{min}_2 = -135$, $q^{max}_2 = -40$, $q^{min}_3 = 40$, $q^{max}_3 = 215$, $q^{min}_4 = -190$, $q^{max}_4 = 190$, $q^{min}_5 = -110$, $q^{max}_5 = 110$, $q^{min}_6 = -184.125$, and $q^{max}_6 = 184.125$, where $q_1, \ldots, q_6$ are in degrees. As in Set A, we set $\rho = 0.1$ and $\rho_1 = 0.5$ to compute $q^{min}_{0_i}$ and $q^{min}_{1_i}$. The features considered by the main task are the image coordinates of a moving point, which means that only two DOFs of the robot are constrained by the main task. Three experiments are performed and their results are presented in cases B1, B2, and B3. In all cases, the motion of the point is the same; it has been designed so that the robot moves toward several of its joint limits.

Figure 7.9 – Experimental system: Robot arm


Figure 7.10 – Results for case B1 (kernel approach, basic method). Line 1 (a,b,c): articular velocity components, in rad/s, of the main task q1, the avoidance task qa and the global task q, respectively; q_dot = q1 + qa. Lines 2 and 3 (a,b,c): normalized articular values and joint limits of the six robot joints q1–q6.

7.4.6 Case B1: Basic kernel approach

In this first case, the basic method described in Section 5.4.3 is used to avoid the joint limits. We recall that $\sum_{k=1}^{n_a} a_k E_{ik}$ is defined by (5.58). As can be seen in Fig. 7.10, as soon as the values of the joints $q_1$, $q_2$, $q_3$ and $q_5$ approach their thresholds $q^{max}_{0_1}$ or $q^{min}_{0_1}$, $q^{min}_{0_2}$, $q^{min}_{0_3}$ and $q^{max}_{0_5}$, respectively, the avoidance task is activated, causing the corresponding joint to suddenly stop any motion toward its nearest limit.


Figure 7.11 – Results for case B2 (basic kernel approach with gamma coefficients). Same layout as Fig. 7.10.

7.4.7 Case B2: Basic kernel approach with gamma coefficients

The basic kernel approach now uses the coefficients $\gamma_i$ defined in Section 5.4.3. As can be seen in Fig. 7.11, using $\gamma_i$ smooths the initial value of the avoidance task component $q_a[i]$ when a joint $j_i$ approaches its threshold limits $q^{min}_{0_i}$ or $q^{max}_{0_i}$, thus avoiding the discontinuities appearing in case B1. However, a discontinuity still occurs when a joint becomes very close to its limits $q^{min}_i$ or $q^{max}_i$, i.e. when $\gamma_i \approx 1$. As can be seen in Fig. 7.11, when $j_2$ and $j_5$ approach $q^{max}_2$ and $q^{min}_5$, respectively, a discontinuity is obtained in the behavior of the joint $j_1$.


Figure 7.12 – Results for case B3 (new joint limits avoidance integrated with the kernel approach). Same layout as Fig. 7.10.

7.4.8 Case B3: New joint limits avoidance strategy integrated with the kernel approach

In this case, the results of the new avoidance strategy coupled with the kernel approach, as proposed in Section 7.3, are presented. As can be seen in Fig. 7.12, a smooth behavior is obtained for all joints when they near their limits. The avoidance task component is smoothly activated as soon as a joint $q_i$ nears one of its thresholds $q^{min}_{0_i}$ or $q^{max}_{0_i}$. The new strategy ensures that the magnitude of each component of the avoidance task compensates the corresponding main task component, which guarantees that no joint will reach its limit.


7.5 Conclusion

A new scheme for avoiding joint limits has been presented, based on three proposed functions: an activation function $g_i$ that activates the avoidance task and sets the direction of its action; adaptive gain functions $\lambda_{sec_i}$ and $\lambda^a_{sec_i}$ that control the magnitude of the avoidance task; and a tuning function $\lambda_i(\dot{q}_i)$ that ensures the smoothness of the injection of the avoidance task into the main task. The consideration of several additional secondary tasks has also been proposed. The new avoidance strategy has also been integrated with the kernel approach to avoid several joint limits simultaneously.

The avoidance method proposed in Section 7.2 has been implemented and validated on a six-DOF robot arm having three prismatic and three revolute joints, as illustrated in Fig. 3.8. The new large projection operator $P_\lambda$ has been investigated by employing it to deal with the problem of robot joint limits avoidance while projecting additional secondary tasks. A satisfactory behavior has been obtained: the system avoids the joint limits very smoothly, even when the main task constrains all the robot degrees of freedom and when additional secondary tasks are considered. The main advantage of the new avoidance method over the classical gradient projection method is that the magnitude of the self-motion is automatically computed without any tuning step.

The integration of the new avoidance strategy with the kernel approach has been implemented on another six-DOF robot platform. This approach allows the system to automatically generate a suitable magnitude for each component of the avoidance task, such that it is greater than the magnitude of the main task near the robot joint limits.


Chapter 8

General conclusions

In this chapter, we present the general conclusions of this work. The research described in this thesis has been concerned with two main issues in sensor-based robot control. The first issue, presented in Part I, concerns the domain of visual servoing, while the second issue, presented in Part II, concerns redundancy and avoidance in robot control.

Contributions

The works presented in Parts I and II provide several contributions, as illustrated in Fig. 8.1. For the main tasks, we can employ the new first-order control scheme to obtain a better behavior for a selected value of the behavior controller, which can avoid local-minima configurations, or employ the new second-order control schemes to approach singular configurations and achieve more accurate positioning tasks in general. The new projection operator can be used for adding supplementary tasks and/or avoidance tasks; for example, secondary tasks can be defined as the proposed joint limits avoidance strategies. The contributions are presented in two parts, following the organization of the dissertation.

Part I: Visual servoing

The control laws used in visual servoing have their respective strengths and drawbacks. In some cases, one control law is not able to converge while the others succeed; in other cases, all classical control laws may fail. Different behaviors may explain these failures: for example, the camera moves to infinity, the camera moves too near to the object, or the system reaches a local minimum or a singular configuration. The first part of this thesis aimed at more robust control schemes. This was achieved by introducing two new control schemes. None of the classical control schemes allows enhancing its behavior against a given problem of local minima or singularity. The classical solutions for these situations


Figure 8.1 – Thesis conclusions


as mentioned in Chapter 2, rely on sliding approaches, partitioning approaches, trajectory-planning approaches, or origin-shift approaches. Solutions were also provided by defining hybrid and/or switching schemes among IBVS, PBVS, and other approaches. All these solutions make it possible to obtain a better control behavior for a given configuration.

First-order control scheme: The first proposed control law is based on introducing a behavior controller β into the control matrix, as presented in Chapter 3. The control matrix is obtained as the convex combination, parameterized by β, of the interaction matrices computed at the current and the desired configurations. Setting β = 0, 1, or 1/2 switches between the three most classical schemes. Another control law has been introduced to try to obtain the global stability of the system. We also presented an analytical analysis of the new control schemes, and of the most common ones in image-based visual servoing, when the required camera displacement combines a translation along and a rotation around the camera optical axis. We performed this analysis for all possible values of the behavior parameter β. The analysis exhibited, for the first time as far as we know, a singularity of the control scheme proposed in [Malis 04] when a pure rotation of 90 degrees around the camera optical axis is required. The experimental analysis also exhibited a local-minimum configuration for all classical control schemes. New, surprising results have also been obtained for the other classical control schemes for motions combining translation along and rotation around the optical axis. In all considered cases (difficult configurations subject to local minima for all classical schemes, motions along and around the optical axis), a satisfactory behavior of the control scheme can be obtained for some values of this parameter, which demonstrates the superiority of the proposed control law with a behavior controller over all the classical control laws.
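The structure of such a β-parameterized control law can be sketched numerically as follows. This is an illustrative sketch, not the exact scheme of Chapter 3: the function names are ours, and the convention that β = 1 selects the interaction matrix at the current configuration (β = 0 the desired one) is an assumption.

```python
import numpy as np

def interaction_matrix(points, Z):
    """Interaction matrix of 2D image points (x, y) at depths Z.

    Each point contributes the two classical rows relating image motion
    to the camera velocity screw (vx, vy, vz, wx, wy, wz).
    """
    rows = []
    for (x, y), z in zip(points, Z):
        rows.append([-1.0 / z, 0.0, x / z, x * y, -(1.0 + x * x), y])
        rows.append([0.0, -1.0 / z, y / z, 1.0 + y * y, -x * y, -x])
    return np.array(rows)

def velocity_beta(s, s_star, L_cur, L_des, beta, lam=0.5):
    """Camera velocity using the convex combination of interaction matrices.

    beta = 1 uses the matrix at the current configuration, beta = 0 the
    one at the desired configuration, and beta = 0.5 their mean,
    recovering the three most classical IBVS schemes.
    """
    L_beta = beta * L_cur + (1.0 - beta) * L_des
    return -lam * np.linalg.pinv(L_beta) @ (s - s_star)
```

Varying β between 0 and 1 then continuously interpolates between the classical behaviors, which is what allows a single scheme to escape configurations where any fixed classical choice fails.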

Second-order control schemes: Dealing with singular configurations is one of the most critical problems in IBVS. In Chapter 4, we were interested in the difficult problem of reaching a visually singular configuration, where all classical control schemes have been shown to be unsatisfactory. To try to improve the behavior of the system near the desired singular position, we proposed new second-order control schemes based on a second-order Taylor expansion, similar to minimization with Halley's method. The experimental results obtained have shown that it is possible to improve the accuracy of the positioning using one of these control schemes.
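To give a feel for the underlying idea, the scalar analogue of such a second-order scheme is Halley's update, which exploits curvature (second-derivative) information in addition to the gradient. This is only the textbook scalar iteration, not the control law of Chapter 4.

```python
def halley_step(f, df, d2f, x):
    """One Halley update for solving f(x) = 0.

    Uses second-order (curvature) information, analogous in spirit to the
    second-order control schemes built on a second-order Taylor expansion
    of the error.
    """
    fx, dfx, d2fx = f(x), df(x), d2f(x)
    return x - 2.0 * fx * dfx / (2.0 * dfx ** 2 - fx * d2fx)

# Example: solve x**2 - 2 = 0 starting from x = 1
x = 1.0
for _ in range(4):
    x = halley_step(lambda t: t * t - 2.0, lambda t: 2.0 * t, lambda t: 2.0, x)
```

The cubic convergence of this iteration is what motivates using second-order information near a desired configuration where first-order schemes stall.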

Part II: Redundancy and avoidance

Redundancy formalism: In Chapter 6, we presented the theoretical developments of a new large projection operator for the redundancy framework, obtained by considering the norm of the total error. The proposed redundancy framework is built using this projection operator. The main contribution here is that the new projection operator enlarges the applicability domain of task redundancy in robot control to all problems that can be solved by adding system constraints. It can also be used to manage the levels of priority among several tasks, such as the


stack of tasks. A switching strategy has been proposed so that the new projection operator smoothly switches to the classical projection operator as the error nears zero. An adaptive gain has also been developed so that the secondary task remains effective over a long period, which may be useful when the secondary tasks really have to be considered (as is the case, for instance, for joint-limits and obstacle avoidance). The main interest of the new projection operator is that it is always at least of rank n − 1; hence it can be used even if the main task is of full rank. The new projection operator has been employed within two different redundancy-based task-priority approaches. A comparative study of the results obtained using Pλ and Pe shows that Pλ provides more versatility when the main task is not of full rank. The effectiveness of the new projection operator has also been demonstrated when the task is defined in the camera space or in the articular space.
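A minimal numerical sketch contrasting the two operators, assuming the norm-task Jacobian takes the form J_n = (eᵀJ)/‖e‖ used for the large operator; the function names are illustrative, not from the thesis code.

```python
import numpy as np

def classical_projector(J):
    """P_e = I - J^+ J: projects onto the null space of the full task Jacobian."""
    n = J.shape[1]
    return np.eye(n) - np.linalg.pinv(J) @ J

def norm_projector(J, e):
    """Large projector built from the scalar task e_n = ||e||.

    The Jacobian of the norm task is the 1 x n row J_n = (e^T J) / ||e||,
    so I - J_n^+ J_n has rank n - 1 away from e = 0, even when the main
    task Jacobian J has full rank n (where the classical projector is zero).
    """
    n = J.shape[1]
    g = J.T @ e  # proportional to the transposed norm-task Jacobian
    return np.eye(n) - np.outer(g, g) / (g @ g)
```

For a full-rank main task the classical projector vanishes entirely, leaving no room for a secondary task, while the norm projector still frees n − 1 directions: this is the enlargement of the applicability domain described above.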

Joint-limits avoidance strategies: A new scheme for avoiding joint limits has been presented in Chapter 7, based on three proposed functions: an activation function g_i that activates the avoidance task and sets the direction of its action; adaptive gain functions λ_{sec_i} and λ^a_{sec_i} that control the magnitude of the avoidance task; and a tuning function λ_i(q_i) that ensures the smoothness of the injection of the avoidance task into the main task. Considering several additional secondary tasks has also been proposed. The new avoidance strategy has been integrated with the kernel approach to control the magnitude, and to ensure the smoothness, of the avoidance task when several joint limits have to be considered simultaneously.

As for the discontinuity problem of the iterative method, which occurs when a new joint enters its critical situation, the proposed solution introduces an inner switching loop inside the visual servoing loop to ensure a smooth transition when needed. The avoidance method proposed in this work has been implemented and validated on a six-DOF robot manipulator, and the new large projection operator P‖e‖ has been investigated by employing it to deal with the problem of robot joint-limits avoidance and to project additional secondary tasks. A nice behavior has been obtained: the system avoids the joint limits very smoothly, even when the main task constrains all the robot degrees of freedom and when additional secondary tasks are considered. The main advantage of the new avoidance method over the classical gradient projection method is that the magnitude of the self-motion is computed automatically. The problem of avoiding several joint limits simultaneously has also been validated, using the integrated approach presented in Section 7.3, on a six-DOF robot arm.
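A toy sketch of an activation function in the spirit of g_i, ramping up smoothly inside a critical zone near each limit and setting the sign of the avoidance motion. The thresholds, ramp shape, and signs here are illustrative assumptions, not the exact functions of Chapter 7.

```python
import numpy as np

def activation(q, q_min, q_max, rho=0.1):
    """Per-joint activation: 0 in the safe interval, ramping linearly
    toward +/-1 inside the critical zones near the limits.

    rho is the fraction of each joint range treated as critical. The sign
    gives the direction of the avoidance motion: positive near the lower
    limit (move up), negative near the upper limit (move down).
    """
    rng = q_max - q_min
    lo = q_min + rho * rng  # lower activation threshold
    hi = q_max - rho * rng  # upper activation threshold
    g = np.zeros_like(q)
    below = q < lo
    above = q > hi
    g[below] = (lo[below] - q[below]) / (rho * rng[below])    # push up
    g[above] = -(q[above] - hi[above]) / (rho * rng[above])   # push down
    return np.clip(g, -1.0, 1.0)
```

The avoidance velocity for each activated joint can then be scaled by such a value before being projected into the null space of the main task, which is what makes the injected motion both directed away from the limit and continuous at the activation thresholds.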


Limitations and future work

The work performed in this thesis and the contributions obtained open several research issues to be addressed in the future, concerning visual servoing control schemes, redundancy, and joint-limits avoidance.

First-order control scheme

Future work on this control scheme will be devoted to determining how to select automatically the value of the behavior controller, in order to avoid singularities, local minima, or other problems and to obtain a good behavior in all cases. Modifying the value of the behavior controller on line during task execution will also be studied. Let us mention again that this modeling strategy for visual servoing can be used for switching between the classical image-based visual servoing control schemes based on specified criteria, such as the current image feature set and the robot system configuration. Other future works are illustrated in Fig. 8.2.

Figure 8.2 – First order control scheme

Second-order control schemes

As future work, it would be interesting to see whether these control schemes can improve the positioning accuracy for other singular configurations, such as the case of the centered circle, or when different kinds of visual features, such as straight lines or ellipses, are considered. It would also be interesting to see whether any improvement can be obtained by combining Halley's method with the damped-least-squares method. Although this method is able to position the robot accurately in the desired configuration, it is sensitive to noise in the image. This can be mitigated by a preprocessing step for image noise removal, so as to track accurately the position of the features in the image plane (see Fig. 8.3).


Figure 8.3 – Second order control scheme

Redundancy formalism and task-priority

The fact that the proposed projection operator has a singularity as the error nears zero has been addressed in this work by designing a switching strategy that makes the system switch to the classical projection operator as the error approaches zero. The theoretical achievements concerning the proposed redundancy frameworks could be improved by applying a discretization method to ensure a higher limit for the secondary task such that the main task is respected. The new redundancy formalism can be used to integrate several actions when multiple sensors are used in robot control. More experiments concerning the priorities among several tasks can be performed by adding supplementary tasks for grasping or trajectory following. The proposed redundancy framework can be used in task-planning strategies, for example to manage levels of priority within a stack of tasks. This would allow better system performance, especially when not enough degrees of freedom are available. Let us finally note that the new redundancy frameworks are applicable to a wide range of positioning tasks. They can be employed on mobile platforms such as helicopters, underwater vehicles [Tang 09] [Antonelli 98], space-based robots [Russkow 92], or terrestrial vehicles, to guide the robot to a desired position and orientation with respect to one or more target objects. These issues are illustrated in Fig. 8.4.

Joint limits avoidance

Finally, the new joint-limits avoidance strategy can be integrated with other avoidance approaches. The proposed joint-limits avoidance scheme, and its version integrated with the kernel iterative method, can be implemented on other robot configurations, including mobile robots and humanoid robots. The same avoidance strategies may also be used for different avoidance problems (see Fig. 8.5).


Figure 8.4 – Redundancy frameworks

Figure 8.5 – Joint limit avoidance strategies


Bibliography

[Ahson 96] S. I. Ahson, N. E. Sharkey, B. Nicolas. – Avoiding joint limits and obstacles for kinematically redundant manipulators: a fuzzy logic based approach. – IEEE International Conference on Fuzzy Systems, vol. 3, pp. 1777–1781, September 1996.

[Allen 93] P. K. Allen, A. Timcenko, B. Yoshimi, P. Michelman. – Automated tracking and grasping of a moving object with a robotic hand-eye system. IEEE Transactions on Robotics and Automation, 9(2):152–165, April 1993.

[Ansar 03] A. Ansar, K. Daniilidis. – Linear pose estimation from points or lines. IEEE Transactions on Pattern Analysis and Machine Intelligence, 25(5):578–589, May 2003.

[Antonelli 98] G. Antonelli, S. Chiaverini. – Task-priority redundancy resolution for underwater vehicle-manipulator systems. – IEEE International Conference on Robotics and Automation, ICRA'98, vol. 1, pp. 768–773, 1998.

[Antonelli 09] G. Antonelli. – Stability analysis for prioritized closed-loop inverse kinematic algorithms for redundant robotic systems. IEEE Transactions on Robotics and Automation, 25(5):985–994, October 2009.

[Assal 06a] S. F. M. Assal, K. Watanabe, K. Izumi. – Intelligent control for avoiding the joint limits of redundant planar manipulators. – Artificial Life and Robotics, vol. 10, pp. 141–148, 2006.

[Assal 06b] S. F. M. Assal, K. Watanabe, K. Izumi. – Neural network-based kinematic inversion of industrial redundant robots using cooperative fuzzy hint for the joint limits avoidance. IEEE/ASME Transactions on Mechatronics, 11(5):593–603, October 2006.

[Baerlocher 98] P. Baerlocher, R. Boulic. – Task priority formulations for the kinematic control of highly redundant articulated structures. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'98, vol. 98, pp. 323–329, 1998.

[Baerlocher 04] P. Baerlocher, R. Boulic. – An inverse kinematics architecture enforcing an arbitrary number of strict priority levels. The Visual Computer, 20(6):402–417, 2004.

[Basri 99] R. Basri, E. Rivlin, I. Shishoni. – Visual homing: Surfing on the epipoles. International Journal of Computer Vision, 33(2):117–137, February 1999.

[Benhimane 07] S. Benhimane, E. Malis. – Homography-based 2D visual tracking and servoing. International Journal of Robotics Research, 26(7):661–676, July 2007.

[Branicky 97] M. S. Branicky. – Stability of hybrid systems: State of the art. – IEEE Conference on Decision and Control, CDC'97, vol. 1, pp. 120–126, 1997.

[Cervera 99] E. Cervera, P. Martinet. – Visual servoing with indirect image control and a predictable camera trajectory. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'99, vol. 1, pp. 381–386, 1999.

[Cervera 03] E. Cervera, A. P. del Pobil, F. Berry, P. Martinet. – Improving image-based visual servoing with three-dimensional features. International Journal of Robotics Research, 22(10-11):821–840, 2003.

[Chan 95] T. F. Chan, R. V. Dubey. – A weighted least-norm solution based scheme for avoiding joint limits for redundant manipulators. IEEE Transactions on Robotics and Automation, 11(2):286–292, April 1995.

[Chang 87] P. Chang. – A closed-form solution for inverse kinematics of robot manipulators with redundancy. IEEE Journal of Robotics and Automation, 3(5):393–403, October 1987.

[Chaumette 93] F. Chaumette, P. Rives, B. Espiau. – Classification and realization of the different vision-based tasks. Visual Servoing, editor K. Hashimoto, pp. 199–228. – Singapore, World Scientific Series in Robotics and Automated Systems, 1993.

[Chaumette 96] F. Chaumette, S. Boukir, P. Bouthemy, D. Juvin. – Structure from controlled motion. IEEE Transactions on Pattern Analysis and Machine Intelligence, 18(5):492–504, May 1996.


[Chaumette 98] F. Chaumette. – Potential problems of stability and convergence in image-based and position-based visual servoing. The Confluence of Vision and Control, editors D. Kriegman, G. Hager, A. S. Morse, pp. 66–78. – LNCIS Series, No 237, Springer-Verlag, 1998.

[Chaumette 00] F. Chaumette, E. Malis. – 2 1/2 D visual servoing: a possible solution to improve image-based and position-based visual servoings. – IEEE International Conference on Robotics and Automation, ICRA'00, vol. 1, pp. 630–635, San Francisco, CA, April 2000.

[Chaumette 01] F. Chaumette, E. Marchand. – A redundancy-based iterative scheme for avoiding joint limits: Application to visual servoing. IEEE Transactions on Robotics and Automation, 17(5):719–730, October 2001.

[Chaumette 04] F. Chaumette. – Image moments: a general and useful set of features for visual servoing. IEEE Transactions on Robotics, 20(4):713–723, August 2004.

[Chaumette 06] F. Chaumette, S. Hutchinson. – Visual servo control, part I: Basic approaches. IEEE Robotics and Automation Magazine, 13(4):82–90, December 2006.

[Chaumette 07] F. Chaumette, S. Hutchinson. – Visual servo control, part II: Advanced approaches. IEEE Robotics and Automation Magazine, 14(1):109–118, March 2007.

[Cheng 92] F.-T. Cheng, T.-H. Chen, Y.-Y. Sun. – Efficient algorithm for resolving manipulator redundancy: the compact QP method. – IEEE International Conference on Robotics and Automation, ICRA'92, vol. 1, pp. 508–513, Nice, France, May 1992.

[Cheng 94] F.-T. Cheng, R.-J. Sheu, T.-H. Chen, F.-C. Kung. – The improved compact QP method for resolving manipulator redundancy. – IEEE/RSJ/GI International Conference on Intelligent Robots and Systems, IROS'94, vol. 2, pp. 1368–1375, Munich, Germany, 1994.

[Chesi 04a] G. Chesi, K. Hashimoto, D. Prattichizzo, A. Vicino. – Keeping features in the field of view in eye-in-hand visual servoing: a switching approach. IEEE Transactions on Robotics, 20(5):908–914, October 2004.

[Chesi 04b] G. Chesi, A. Vicino. – Visual servoing for large camera displacements. IEEE Transactions on Robotics, 20(4):724–735, 2004.


[Chesi 09a] G. Chesi. – Estimation of the camera pose from image point correspondences through the essential matrix and convex optimization. – IEEE International Conference on Robotics and Automation, ICRA'09, pp. 35–40, 2009.

[Chesi 09b] G. Chesi. – Visual servoing path planning via homogeneous forms and LMI optimizations. IEEE Transactions on Robotics, 25(2):281–291, April 2009.

[Chiacchio 91] P. Chiacchio, S. Chiaverini, L. Sciavicco, B. Siciliano. – Closed-loop inverse kinematics schemes for constrained redundant manipulators with task space augmentation and task priority strategy. International Journal of Robotics Research, 10(4):410–425, 1991.

[Chiaverini 97] S. Chiaverini. – Singularity-robust task-priority redundancy resolution for real-time kinematic control of robot manipulators. IEEE Transactions on Robotics, 13(3):398–410, June 1997.

[Choi 00] S. I. Choi, B. K. Kim. – Obstacle avoidance for redundant manipulators using directional-collidability / temporal-collidability measure. Journal of Intelligent and Robotic Systems, 28(3):213–229, 2000.

[Cline 64] R. E. Cline. – Representation for the generalized inverse of a partitioned matrix. Journal of the Society for Industrial and Applied Mathematics, 12:588–600, 1964.

[Collewet 08] C. Collewet, E. Marchand, F. Chaumette. – Visual servoing set free from image processing. – IEEE International Conference on Robotics and Automation, ICRA'08, pp. 81–86, Pasadena, California, May 2008.

[Collewet 09a] C. Collewet, E. Marchand. – Colorimetry-based visual servoing. – IEEE International Conference on Intelligent Robots and Systems, IROS'09, pp. 5438–5443, St Louis, USA, October 2009.

[Collewet 09b] C. Collewet, E. Marchand. – Photometry-based visual servoing using light reflexion models. – IEEE International Conference on Robotics and Automation, ICRA'09, pp. 701–706, Kobe, Japan, May 2009.

[Comport 06] A. I. Comport, E. Marchand, F. Chaumette. – Statistically robust 2D visual servoing. IEEE Transactions on Robotics, 22(2):415–421, April 2006.

[Corke 01] P. I. Corke, S. Hutchinson. – A new partitioned approach to image-based visual servo control. IEEE Transactions on Robotics and Automation, 17(4):507–515, August 2001.


[Corke 09] P. I. Corke, F. Spindler, F. Chaumette. – Combining Cartesian and polar coordinates in IBVS. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'09, pp. 5962–5967, St. Louis, USA, October 2009.

[Craig 86] J. J. Craig. – Introduction to Robotics: Mechanics and Control. – Addison-Wesley, Reading, MA, 1986.

[Crétual 01] A. Crétual, F. Chaumette. – Visual servoing based on image motion. International Journal of Robotics Research, 20(11):857–877, November 2001.

[Deguchi 98] K. Deguchi. – Optimal motion control for image-based visual servoing by decoupling translation and rotation. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'98, vol. 2, pp. 705–711, Amsterdam, October 1998.

[DeLuca 08a] A. De Luca, M. Ferri, G. Oriolo, P. R. Giordano. – Visual servoing with exploitation of redundancy: An experimental study. – IEEE International Conference on Robotics and Automation, ICRA'08, pp. 3231–3237, May 2008.

[DeLuca 08b] A. De Luca, G. Oriolo, P. R. Giordano. – Feature depth observation for image-based visual servoing: Theory and experiments. International Journal of Robotics Research, 27(10):1093–1116, 2008.

[Dementhon 95] D. F. Dementhon, L. S. Davis. – Model-based object pose in 25 lines of code. International Journal of Computer Vision, 15(1-2):123–141, 1995.

[Deng 02a] L. Deng, F. Janabi-Sharifi, W. Wilson. – Hybrid strategies for image constraints avoidance in visual servoing. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'02, pp. 348–353, October 2002.

[Deng 02b] L. Deng, W. Wilson, F. Janabi-Sharifi. – Characteristics of robot visual servoing methods and target model estimation. – IEEE International Symposium on Intelligent Control, pp. 684–689, October 2002.

[Deng 03] L. Deng, W. Wilson, F. Janabi-Sharifi. – Dynamic performance of the position-based visual servoing method in the Cartesian and image spaces. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'03, vol. 1, pp. 510–515, December 2003.


[Deng 05] L. Deng, F. Janabi-Sharifi, W. J. Wilson. – Hybrid motion control and planning strategies for visual servoing. IEEE Transactions on Industrial Electronics, 52(4):1024–1040, August 2005.

[Deo 92] A. Deo, I. Walker. – Robot subtask performance with singularity robustness using optimal damped least squares. – IEEE International Conference on Robotics and Automation, ICRA'92, pp. 434–441, Nice, France, May 1992.

[Dombre 07] E. Dombre, W. Khalil. – Robot Manipulator Modeling, Performance Analysis and Control. – ISTE, Cambridge, Massachusetts, 2007.

[Dornaika 99] F. Dornaika, G. Garcia. – Pose estimation using point and line correspondences. Real-Time Imaging, 5(3):215–230, 1999.

[Dubey 88] R. V. Dubey, J. A. Euler, S. M. Babcock. – An efficient gradient projection optimization scheme for a seven-degree-of-freedom redundant robot with spherical wrist. – IEEE International Conference on Robotics and Automation, ICRA'88, pp. 24–29, 1988.

[Egeland 90] O. Egeland, M. Ebdrup, S. Chiaverini. – Sensory control in singular configurations: application to visual servoing. – IEEE International Workshop on Intelligent Motion Control, pp. 401–405, Istanbul, August 1990.

[Ellekilde 07] L.-P. Ellekilde, P. Favrholdt, M. Paulin, H. G. Petersen. – Robust control for high-speed visual servoing applications. International Journal of Advanced Robotic Systems, 4(3):279–292, 2007.

[Espiau 92] B. Espiau, F. Chaumette, P. Rives. – A new approach to visual servoing in robotics. IEEE Transactions on Robotics and Automation, 8(3):313–326, June 1992.

[Espiau 93] B. Espiau. – Effect of camera calibration errors on visual servoing in robotics. – International Symposium on Experimental Robotics, ISER'93, pp. 182–192, Kyoto, Japan, 1993.

[Euler 89] J. A. Euler, R. V. Dubey. – A comparison of two real-time control schemes for redundant manipulators with bounded joint velocities. – IEEE International Conference on Robotics and Automation, ICRA'89, vol. 1, pp. 701–706, May 1989.

[Faugeras 93] O. Faugeras. – Three-Dimensional Computer Vision: A Geometric Viewpoint. – MIT Press, Cambridge, Massachusetts, 1993.


[Feddema 89a] J. Feddema, O. Mitchell. – Vision-guided servoing with feature-based trajectory generation. IEEE Transactions on Robotics and Automation, 5(5):691–700, October 1989.

[Feddema 89b] J. T. Feddema, C. S. G. Lee, O. R. Mitchell. – Automatic selection of image features for visual servoing of a robot manipulator. – IEEE International Conference on Robotics and Automation, ICRA'89, vol. 2, pp. 832–837, Scottsdale, Arizona, May 1989.

[Feddema 91] J. T. Feddema, C. S. G. Lee, O. R. Mitchell. – Weighted selection of image features for resolved rate visual feedback control. IEEE Transactions on Robotics and Automation, 7(1):31–47, February 1991.

[Ficocelli 01] M. Ficocelli, F. Janabi-Sharifi. – Adaptive filtering for pose estimation in visual servoing. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'01, pp. 19–24, Maui, Hawaii, USA, November 2001.

[Flandin 00] G. Flandin, F. Chaumette, E. Marchand. – Eye-in-hand / eye-to-hand cooperation for visual servoing. – IEEE International Conference on Robotics and Automation, ICRA'00, vol. 3, pp. 2741–2746, San Francisco, California, April 2000.

[Forsyth 03] D. Forsyth, J. Ponce. – Computer Vision: A Modern Approach. – Prentice Hall, Upper Saddle River, NJ, 2003.

[Fruchard 06] M. Fruchard, P. Morin, C. Samson. – A framework for the control of nonholonomic mobile manipulators. International Journal of Robotics Research, 25(8):398–410, August 2006.

[Gans 03a] N. Gans, S. Hutchinson. – An asymptotically stable switched system visual controller for eye in hand robots. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'03, vol. 1, pp. 735–742, October 2003.

[Gans 03b] N. Gans, S. Hutchinson. – An experimental study of hybrid switched system approaches to visual servoing. – IEEE International Conference on Robotics and Automation, ICRA'03, vol. 1, pp. 3061–3068, September 2003.

[Gans 03c] N. R. Gans, S. A. Hutchinson, P. I. Corke. – Performance tests for visual servo control systems, with application to partitioned approaches to visual servo control. International Journal of Robotics Research, 22(10-11):955–981, 2003.


[Gans 07] N. Gans, S. Hutchinson. – Stable visual servoing through hybrid switched-systems control. IEEE Transactions on Robotics, 23(3):530–540, June 2007.

[GarciaAracil 05] N. Garcia-Aracil, E. Malis, R. Aracil-Santonja, C. Perez-Vidal. – Continuous visual servoing despite the changes of visibility in image features. IEEE Transactions on Robotics, 21(6):415–421, April 2005.

[Guoqiang 10] H. Guoqiang, N. Gans, N. Fitz-Coy, W. Dixon. – Adaptive homography-based visual servo tracking control via a quaternion formulation. IEEE Transactions on Control Systems Technology, 18(1):128–135, January 2010.

[HadjAbdelkader 07] H. Hadj-Abdelkader, Y. Mezouar, P. Martinet. – Decoupled visual servoing from a set of points imaged by an omnidirectional camera. – IEEE International Conference on Robotics and Automation, ICRA'07, pp. 1697–1697, Roma, April 2007.

[Hafez 06] A. H. A. Hafez, C. V. Jawahar. – Probabilistic integration of 2D and 3D cues for visual servoing. – IEEE International Conference on Robotics and Automation, ICRA'06, pp. 1691–1696, April 2006.

[Hafez 07a] A. H. A. Hafez, E. Cervera, C. V. Jawahar. – Optimizing image and camera trajectories in robot vision control using on-line boosting. – IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'07, pp. 352–357, San Diego, USA, October 2007.

[Hafez 07b] A. H. A. Hafez, C. V. Jawahar. – Visual servoing by optimization of a 2D/3D hybrid objective function. – IEEE International Conference on Robotics and Automation, ICRA'07, pp. 1691–1696, Roma, Italy, 2007.

[Hafez 08a] A. H. A. Hafez, S. Achar, C. V. Jawahar. – Visual servoing based on Gaussian mixture models. – IEEE International Conference on Robotics and Automation, ICRA'08, Pasadena, California, May 2008.

[Hafez 08b] A. H. A. Hafez, E. Cervera, C. V. Jawahar. – Hybrid visual servoing by boosting IBVS and PBVS. – International Conference on Information and Communication Technologies: From Theory to Applications, ICTTA'08, pp. 1–6, April 2008.

[Hager 97] G. Hager. – A modular system for robust positioning using feedback from stereo vision. IEEE Transactions on Robotics and Automation, 13(4):582–595, August 1997.


[Hanafusa 81] H. Hanafusa, T. Yoshikawa, Y. Nakamura. – Analysis and control of articulated robot with redundancy. – IFAC 8th Triennial World Congress, vol. 4, pp. 1927–1932, 1981.

[Haralick 89] R. Haralick, H. Joo, C. Lee, X. Zhuang, V. Vaidya, M. Kim. – Pose estimation from corresponding point data. IEEE Transactions on Systems, Man and Cybernetics, 19(6):1426–1445, November 1989.

[Hartley 97] R. I. Hartley. – In defense of the eight-point algorithm. IEEE Transactions on Pattern Analysis and Machine Intelligence, 19(6):580–593, June 1997.

[Hashimoto 93a] K. Hashimoto. – Real time control of robot manipulators based on visual sensory feedback. Visual Servoing, editor K. Hashimoto. – Singapore, World Scientific Series in Robotics and Automated Systems, 1993.

[Hashimoto 93b] K. Hashimoto, H. Kimura. – LQ optimal and nonlinear approaches to visual servoing. Visual Servoing: Real Time Control of Robot Manipulators Based on Visual Sensory Feedback, 7:165–198, 1993.

[Hashimoto 96] K. Hashimoto, T. Ebine, H. Kimura. – Visual servoing with hand-eye manipulator: Optimal control approach. IEEE Transactions on Robotics and Automation, 12(5):766–774, 1996.

[Hashimoto 00a] K. Hashimoto, T. Noritsugu. – Enlargement of stable region in visual servo. – IEEE Conference on Decision and Control, vol. 4, pp. 3927–3932, 2000.

[Hashimoto 00b] K. Hashimoto, T. Noritsugu. – Potential switching control in visual servoing. – IEEE International Conference on Robotics and Automation, ICRA'00, vol. 3, pp. 2765–2770, San Francisco, USA, April 2000.

[Horn 86] B. K. Horn. – Robot Vision. – McGraw-Hill Higher Education, 1986.

[Hutchinson 96] S. Hutchinson, G. Hager, P. Corke. – A tutorial on visual servo control. IEEE Transactions on Robotics and Automation, 12(5):651–670, October 1996.

[Iwatsuki 05] M. Iwatsuki, N. Okiyama. – A new formulation of visual servoing based on cylindrical coordinate system. IEEE Transactions on Robotics, 21(2):266–273, 2005.

Page 178: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

168 Bibliography

[JanabiSharifi 97] F. Janabi-Sharifi, W Wilson. – Automatic selection of image fea-

tures for visual servoing. IEEE Transactions on Robotics and Au-tomation, 13(6):890–903, December 1997.

[JanabiSharifi 10a] F. Janabi-Sharifi, L. Deng, W Wilson. – Comparison of basic vi-

sual servoing methods. IEEE/ASME Transactions on Mechatronics,pp(91):1–17, September 2010.

[JanabiSharifi 10b] F. Janabi-Sharifi, M. Marey. – A kalman-filter-based method for

pose estimation in visual servoing. IEEE Transactions on Robotics,26(5):939 – 947, October 2010.

[KaBerounian 88] K. KaBerounian, Z. Wang. – Global versus local optimisation in re-

dundancy resolution of robotic manipulators. International Journalof Robotics Research, 7(5):3–12, 1988.

[Kallem 07] V. Kallem, M. Dewan, J. Swensen, G. Hager, N. Cowan. – Kernel-

based visual servoing. – IEEE/RSJ International Conference onIntelligent Robots and Systems, IROS’07, pp. 1975 – 1980, San

Diego, USA, October 2007.

[Kelly 00] R. Kelly, R. Carelli, O. Nasisi, B. Kuchen, F. Reyes. – Stable visual

servoing of camera-in-hand robotic systems. IEEE/ASME Transac-tions on mechatronics, 5(1):39–48, 2000.

[Kelly 04] R. Kelly, J. Moreno, R. Campa. – Visual servoing of planar robots

via velocity fields. – IEEE Conference on Decision and Control,CDC’04, vol. 4, pp. 4028–4033, December 2004.

[Kelly 06] R. Kelly, E. Bugarin, V. Sanchez. – Image-based visual control of

nonholonomic mobile robots via velocity fields: Case of partially

calibrated inclined camera. – IEEE Conference on Decision andControl, CDC’06, pp. 3071–3076, December 2006.

[Khatib 85] O. Khatib. – Real-time obstacle avoidance for manipulators and

mobile robots. – IEEE International Conference on Robotics andAutomation, ICRA’85, vol. 2, pp. 500–505, March 1985.

[Khatib 96] O. Khatib. – The impact of redundancy on the dynamic perfor-

mance of robots. – Laboratory Robotics and Automation, vol. 8,pp. 37–48, 1996.

[Kragic 02] D. Kragic, H.I. Christensen. – Survey on visual servoing for ma-nipulation. – Research report, Centre for Autonomous Systems,

Stockholm, Sweden - Research report, 2002.

Page 179: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

Bibliography 169

[Kyrki 04a] V. Kyrki, D. Kragic, H.I. Christensen. – Measurement errors in

visual servoing. – IEEE International Conference on Robotics andAutomation, ICRA’04, vol. 2, pp. 1861 – 1867, April 2004.

[Kyrki 04b] V. Kyrki, D. Kragic, H.I. Christensen. – New shortest-path ap-

proaches to visual servoing. – IEEE/RSJ International Conferenceon Intelligent Robots and Systems, IROS’04, vol. 1, pp. 349–354,2004.

[Lapresté 04] J.-T. Lapresté, Y. Mezouar. – A hessian approach to visual servoing.

– IEEE/RSJ International Conference on Intelligent Robots andSystems, IROS’04, vol. 1, pp. 998–1003, Sendai, Japan, September

2004.

[Lenarcic 98] J. Lenarcic. – Effective secondary task execution of redundant ma-

nipulators. Robotica, 16(4):457–462, 1998.

[Liegeois 77] A. Liegeois. – Automatic supervisory control of the configuration

and behavior of multibody mechanisms. IEEE Transactions on Sys-tems, Man and Cybernetics, 7(12):868–871, December 1977.

[Lippiello 04] V. Lippiello, B. Siciliano, L. Villani. – Visual motion estima-

tion of 3D objects: an adaptive extended kalman filter approach.

– IEEE/RSJ International Conference on Intelligent Robots andSystems, IROS’04, vol. 1, pp. 957 – 962, Sendai, Japan, September

2004.

[Liu 99] M. L. Liu, K. H. Wong. – Pose estimation using four corresponding

points. Pattern Recognition Letters, 20(1):69–74, 1999.

[LonguetHiggins 81] H.C. Longuet-Higgins. – A computer algorithm for reconstructing a

scene from two projections. Nature, 293:133–135, September 1981.

[Luya 01] Li. Luya, W.A. Gruver, Z. Qixian, Y. Zongxu. – Kinematic con-

trol of redundant robots and the motion optimizability measure.

IEEE Transactions on Systems, Man and Cybernetics, 31(1):155–160, February 2001.

[Malis 99] E. Malis, F. Chaumette, S. Boudet. – 2 1/2 d visual servoing. IEEETransactions on Robotics and Automation, 15(2):238–250, April

1999.

[Malis 00] E. Malis, F. Chaumette, S. Boudet. – 2 1/2 D visual servoing

with respect to unknown objects through a new estimation scheme

of camera displacement. International journal of Computer Vision,37(1):79–97, June 2000.

Page 180: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

170 Bibliography

[Malis 02] E. Malis, F. Chaumette. – Theoretical improvements in the stabil-

ity analysis of a new class of model-free visual servoing methods.

IEEE Transactions on Robotics, 18(2):176186, 2002.

[Malis 04] E. Malis. – Improving vision-based control using efficient second-

order minimization techniques. – IEEE International Conferenceon Robotics and Automation, ICRA’04, vol. 2, pp. 1843–1848, New

Orleans, April 2004.

[Malis 10] E. Malis, Y. Mezouar, P. Rives. – Robustness of image-based visual

servoing with a calibrated camera in the presence of uncertainties

in the three-dimensional structure. IEEE Transactions on Robotics,26(1):112 –120, February 2010.

[Mansard 04] N. Mansard, F. Chaumette. – Tasks sequencing for visual servoing.

– IEEE/RSJ International Conference on Intelligent Robots andSystems, IROS’04, pp. 992–997, October 2004.

[Mansard 07] N. Mansard, F. Chaumette. – Task sequencing for high-level sensor-

based control. IEEE Transactions on Robotics, 23(1):60–72, 2007.

[Mansard 09a] N. Mansard, F. Chaumette. – Directionnal redundancy for robot

control. IEEE Transactions on Automatic Control, 54(6):1179–

1192, June 2009.

[Mansard 09b] N. Mansard, A. Remazeilles, F. Chaumette. – Continuity of

varying-feature-set control laws. IEEE Transactions on AutomaticControl, 54(11):2493–2505, November 2009.

[Marcé 87] L. Marcé, P. Bouthemy. – Determination of a depth map from

an image sequence. – 3rd International Conference on AdvancedRobotics, pp. 221–232, October 1987.

[Marchand 96] E. Marchand, F. Chaumette, A. Rizzo. – Using the task function

approach to avoid robot joint limits and kinematic singularities in

visual servoing. – IEEE/RSJ International Conference on Intelli-gent Robots and Systems, IROS’96, vol. 3, pp. 1083–1090, Osaka,

Japan, November 1996.

[Marchand 98] E. Marchand, G.-D. Hager. – Dynamic sensor planning in visual

servoing. – IEEE International Conference on Robotics and Au-tomation, ICRA’98, vol. 3, pp. 1988–1993, Leuven, Belgium, May

1998.

[Marchand 02] E. Marchand, F. Chaumette. – Virtual visual servoing: A frame-

work for real-time augmented reality. Computer Graphics Forum,

21(3):289–298, September 2002.

Page 181: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

Bibliography 171

[Marchand 05a] E. Marchand, F. Chaumette. – Feature tracking for visual servoing

purposes. Robotics and Autonomous Systems, 52(1):53–70, June

2005.

[Marchand 05b] E. Marchand, F. Spindler, F. Chaumette. – Visp for visual servo-

ing: A generic software platform with a wide class of robot control

skills. IEEE Robotics and Automation Magazine, 12(4), Decem-

ber 2005. – Special Issue on "Software Packages for Vision-Based

Control of Motion", P. Oh, D. Burschka (Eds.).

[Martinet 96] P. Martinet, J. Gallice, D. Khadraoui. – Vision based control law

using 3D visual features. – Econometrica Committees, WAC 96, pp.497–502, Montpellier, May 1996.

[Martinet 97] P. Martinet, N. Daucher, J. Gallice, M. Dhome. – Robot control

using monocular pose estimation. – Workshop on New Trends InImage-Based Robot Servoing, p. 112, Grenoble, France, September

1997.

[Martinet 99] P. Martinet, J. Gallice. – Position-based visual servoing using a

nonlinear approach. – IEEE/RSJ International Conference on In-telligent Robots and Systems, IROS’99, vol. 1, p. 531536, Kyongju,

Korea, October 1999.

[Matthies 89] L. Matthies, R. Szelinski, T. Kanade. – Kalman filter-based algo-

rithms for estimating depth from image sequences. Internationaljournal of Computer Vision, 3:209–236, 1989.

[Mcinroy 08] J. E. Mcinroy, Z. Qi. – A novel pose estimation algorithm based on

points to regions correspondence. Journal of Mathematical Imagingand Vision, 30(2):195–207, 2008.

[Mebarki 10] R. Mebarki, A. Krupa, F. Chaumette. – 2-d ultrasound probe

complete guidance by visual servoing using image moments. IEEETransactions on Robotics, 26(2):296 –306, April 2010.

[Mezouar 02] Y. Mezouar, F. Chaumette. – Path planning for robust image-based

control. IEEE Transactions on Robotics and Automation, 18(4):534– 549, August 2002.

[Michel 93] H. Michel, P. Rives. – Singularities in the determination of the sit-uation of a robot effector from the perspective view of three points.– Research report n1850, INRIA - Research report, February 1993.

[Myers 76] K. A. Myers, B. D. Tapley. – Adaptive sequential estimation with

unknown noise statistics. IEEE Transactions on Automatic Control,21:520–523, August 1976.

Page 182: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

172 Bibliography

[Na 08] M. Na, B. Yang, P. Jia. – Improved damped least squares solution

with joint limits, joint weights and comfortable criteria for control-

ling human-like figures. – IEEE Conference on Robotics, Automa-tion and Mechatronics, pp. 1090 –1095, September 2008.

[Nakamura 86] Y. Nakamura, H. Hanafusa. – Inverse kinematics solutions with

singularity robustness for robot manipulator control. Transac-tions ASME journal of Dynamic System, Measures and Control,108:163–171, September 1986.

[Nakamura 87] Y. Nakamura, H. Hanafusa, T. Yoshikawa. – Task-priority based

redundancy control of robot manipulators. International Journal ofRobotics Research, 6(2):3–15, 1987.

[Nayar 96] S.K. Nayar, S.A. Nene, H. Murase. – Subspace methods for robot

vision. IEEE Transactions on Robotics, 12(5):750 – 758, October

1996.

[Nelson 95] B. Nelson, P.K. Khosla. – Strategies for increasing the tracking re-

gion of an eye-in-hand system by singularity and joint limits avoid-

ance. International Journal of Robotics Research, 14(3):255–269,June 1995.

[OkamotoJr 02] J. Okamoto Jr, V. Grassi Jr. – Visual servo control of a mobile

robot using omnidirectional vision. Proceedings of Mechatronics,pp. 413–422, 2002.

[Ortega 00] J.M. Ortega, W.C. Rheinboldt. – Iterative Solution of NonlinearEquations in Several Variables. – Society for Industrial Mathemat-

ics, SIAM, 2000.

[Pages 06] J. Pages, C. Collewet, F. Chaumette, J. Salvi. – Optimizing

plane-to-plane positioning tasks by image-based visual servoing

and structured light. IEEE Transactions on Robotics, 22(5):1000

– 1010, October 2006.

[Papanikolopoulos 93] N. P. Papanikolopoulos, P. K Khosla. – Selection of features and

evaluation of visual measurements for 3D robotic visual tracking.

International Symp. on Intelligent Control., pp. 320–325, August

1993.

[Paulin 04] M. Paulin. – The VirtualWorkCell–The Missing Link Between

Robot Motion Planning and Visual Servoing. – IEEE InternationalConference on Mechatronics and Robotics, vol. 2, pp. 326–331,

2004.

Page 183: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

Bibliography 173

[Pearl 20] R. Pearl, L. Reed. – On the rate of growth of the population of

the united state since 1790 and its mathematical representation. –

Proc. of the National Academy of Sciences, vol. 6, pp. 275–288,June 1920.

[Perdereau 02] V. Perdereau, C. Passi, M. Drouin. – Real-time control of redundant

robotic manipulators for mobile obstacle avoidance. Robotics andAutonomous Systems, 41(1):41–59, 2002.

[Piepmeier 99] J. Piepmeier, G. McMurray, H. Lipkin. – A dynamic quasi-newton

method for uncalibrated visual servoing. – vol. 2, pp. 1595–1600,

1999.

[Rosen 60] JB Rosen. – The gradient projection method for nonlinear program-

ming. Part I. Linear constraints. Journal of the Society for Industrialand Applied Mathematics, 8(1):181–217, 1960.

[Rosen 61] JB Rosen. – The gradient projection method for nonlinear program-

ming. Part II. Linear constraints. Journal of the Society for Indus-trial and Applied Mathematics, 9(4):514–532, 1961.

[Russkow 92] J. Russkow, O. Khatib. – A New Control Structure for Free-Flying

Space Robots. Symposium on Artificial Intelligence, Robotics andAutomation, in Space, i-SAIRAS, pp. 395–403, September 1992.

[Safaee-Rad 92] R. Safaee-Rad, I. Tchoukanov, B. Benhabib, K.C. Smith. – 3D-

pose estimation from a quadratic-curved feature in two perspective

views. – IAPR International Conference on Pattern Recognition,ICPR’92, vol. 1, pp. 341–344, Netherlands, August 1992.

[Samson 91] C. Samson, M. Le Borgne, B. Espiau. – Robot Control: The TaskFunction Approach. – Clarendon Press, Oxford, United Kingdom,

1991.

[Sanderson 80] A. C. Sanderson, L. E. Weiss. – Image based visual servo control

using relational graph error signals. – IEEE International Confer-ence on Cybernetics and Society, pp. 1074–1077, 1980.

[Shademan 05] A. Shademan, F. Janabi-Sharifi. – Sensitivity analysis of EKF and

iterated EKF pose estimation for position-based visual servoing. –

IEEE Conference on Control Applications, pp. 755 – 760, August

2005.

[Shahamiri 05] M. Shahamiri, M. Jagersand. – Uncalibrated visual servoing using

a biased newton method for on-line singularity detection and avoid-

ance. – IEEE/RSJ International Conference on Intelligent Robots

Page 184: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

174 Bibliography

and Systems, IROS’05, pp. 2682–2687, Edmonton, Canada, August

2005.

[Shanno 70] DF Shanno. – Conditioning of quasi-Newton methods for func-

tion minimization. Mathematics of Computation, 24(111):647–656,1970.

[Sharma 97] R. Sharma, S. A. Hutchinson. – Motion perceptibility and its appli-

cation to active vision-based servo control. IEEE Transactions onRobotics and Automation, 13(4):607–617, August 1997.

[Shirai 73] Y. Shirai, H. Inoue. – Guiding a robot by visual feedback in assem-

bling tasks. Pattern Recognition, 5(2):99–106, 1973.

[Siciliano 91] B. Siciliano, J-J. Slotine. – A general framework for managing

multiple tasks in highly redundant robotic systems. – IEEE In-ternational Conference on Robotics and Automation, ICRA’91, pp.1211–1216, 1991.

[Spong 06] M.W. Spong, S. Hutchinson, M. Vidyasagar. – Robot modeling andcontrol. – Wiley New Jersey, 2006.

[Tahri 05] O. Tahri, F. Chaumette. – Point-based and region-based image

moments for visual servoing of planar objects. IEEE Trans. onRobotics, 21(6):1116–1127, December 2005.

[Tahri 10] O. Tahri, Y. Mezouar. – On visual servoing based on efficient

second order minimization. Robotics and Autonomous Systems,58(5):712–719, 2010.

[Tang 09] L. Tang, Y. Li. – Study on kinematical control based on fuzzy infer-

ence for underwater work system. – Chinese Control and DecisionConference, 2009. CCDC’09, pp. 684 –688, June 2009.

[TatsambonFomena 09] R. Tatsambon Fomena, F. Chaumette. – Improvements on visual

servoing from spherical targets using a spherical projection model.

IEEE Transactions on Robotics, 25(4):874–886, August 2009.

[Taylor 00] C. J. Taylor, J. P. Ostrowski, S. H. Jung. – Robust vision-based

pose control. – IEEE International Conference on Robotics andAutomation, ICRA’00, vol. 1, p. 27342740, 2000.

[Taylor 04] G. Taylor, Kleeman L. – Hybrid position-based visual servoing with

online calibration for humanoid robot. – IEEE/RSJ InternationalConference on Intelligent Robots and Systems, IROS’04, vol. 1, pp.686 – 691, Sendai, Japan, September 2004.

Page 185: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

Bibliography 175

[Thuilot 02] B. Thuilot, P. Martinet, L. Cordesses, J. Gallice. – Position based

visual servoing: keeping the object in the field of vision. – IEEEInternational Conference on Robotics and Automation, ICRA’02,vol. 2, pp. 1624 – 1629, 2002.

[Toibero 09] J.M. Toibero, C.M. Soria, F. Roberti, R. Carelli, P. Fiorini. –

Switching visual servoing approach for stable corridor navigation.

– International Conference on Advanced Robotics, ICAR’09, pp. 1–6, June 2009.

[Tsai 89] R.Y. Tsai, R.K. Lenz. – A new technique for fully autonomous and

efficient 3 D robotics hand-eye calibration. IEEE Transactions onRobotics and Automation, 5(3):345–358, 1989.

[Wampler 86] C.W. Wampler. – Manipulator inverse kinematic solutions based on

vector formulations and damped least squared method. IEEE Trans-actions on Systems, Man, and Cybernetics, 16(1):93–101, January1986.

[Wang 92] J. Wang, W.J. Wilson. – 3D relative position and orientation esti-

mation using kalman filter for robot control. – IEEE InternationalConference on Robotics and Automation, ICRA’92, vol. 3, pp. 2638– 2645, Nice, May 1992.

[Wang 10] Y. Wang, H. Lang, C. W. de Silva. – A hybrid visual servo con-

troller for robust grasping by wheeled mobile robots. IEEE/ASMETransactions on Mechatronics, PP(99):1 –13, 2010.

[Weiss 87] L.E. Weiss, A.C. Sanderson, C.P. Neuman. – Dynamic sensor-

based control of robots with visual feedback. IEEE journal ofRobotics and Automation, 3(5):404–417, October 1987.

[Wicks 94] M. Wicks, P. Peleties, R. DeCarlo. – Construction of piecewise

lyapunov functions for stabilizing switched systems. – Proc. Con-ference Decision and Control, pp. 3492–3497, 1994.

[Wilson 96] W. Wilson, C. Hulls, G. Bell. – Relative end-effector control us-

ing cartesian position-based visual servoing. IEEE Transactions onRobotics and Automation, 12(5):684–696, October 1996.

[Xiang 10] J. Xiang, C. Zhong, W. Wei. – General-weighted least-norm con-

trol for redundant manipulators. IEEE Transactions on Robotics,26(4):660 –669, August 2010.

[Xie 09] W. Xie, Z. Li, X. Tu, Perron C. – Switching control of image-

based visual servoing with laser pointer in robotic manufacturing

Page 186: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

176 Bibliography

systems. IEEE Transactions on Industrial Electronics, 56(2):520–529, February 2009.

[Yoshikawa 96] T. Yoshikawa. – Basic optimization methods of redundant manip-

ulators. – Laboratory Robotics and Automation, vol. 8, pp. 49–60,1996.

[Zanne 00] P. Zanne, G. Morel, F. Piestan. – Robust vision based 3D trajectory

tracking using sliding mode control. – IEEE International Confer-ence on Robotics and Automation, ICRA’00, vol. 3, pp. 2088 –2093,

April 2000.

[Zghal 90] H. Zghal, R. V. Dubey, J. A. Euler. – Efficient gradient projec-

tion optimization for manipulators with multiple degrees of redun-

dancy. – IEEE International Conference on Robotics and Automa-tion, ICRA’90, vol. 2, pp. 1006–1011, 1990.

Page 187: THÈSE / UNIVERSITÉ DE RENNES 1 sous le sceau de l’Université …rainbow-doc.irisa.fr/pdf/2010_thesis_marey.pdf · 2017-12-14 · sous le sceau de l’Université Européenne

Résumé

Visual servoing has become a classical approach to robot control, exploiting the information provided by a vision sensor in a feedback control loop. The research described in this thesis aims to solve visual servoing problems and to improve the ability to handle additional tasks more effectively. The thesis first presents the state of the art in visual servoing, redundancy, and joint limits avoidance. It then proposes the following contributions:

A control scheme is obtained by introducing a behavior parameter into a hybrid control matrix. It yields a better system behavior when appropriate values of the parameter are selected. An analytical study of the most common control laws, and of the newly proposed one, is carried out for motions combining a translation along and a rotation around the optical axis. New control schemes are also proposed to improve the system behavior when the desired configuration is singular.

The theoretical contributions to the redundancy formalism rest on the design of a projection operator obtained by considering only the norm of the main task. This leads to a less constrained problem and enlarges the domain of application.

New redundancy-based strategies for avoiding the robot's joint limits are developed. The problem of adding secondary tasks to the main task while guaranteeing joint limits avoidance is also solved.

All of this work has been validated by experiments in visual servoing applications.


Abstract

Visual servoing has become a popular paradigm in robot control: the information provided by a vision sensor is exploited in a feedback control loop. The research described in this dissertation aims mainly to solve problems in visual servoing control and to improve the ability to handle additional tasks more effectively. This thesis presents the state of the art in visual servoing, redundancy, and joint limits avoidance, followed by the contributions below.

A new first-order control scheme, obtained by introducing a behavior parameter into a hybrid control matrix, is first proposed. It yields a better system behavior when suitable values of the behavior parameter are selected. An analytical study of the most common control schemes, and of the new one, is performed for camera displacements combining a translation along and a rotation around the camera optical axis. New second-order control schemes are also proposed to enhance the system behavior when the problem of reaching a desired singular configuration is considered.
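The idea of a hybrid control matrix tuned by a behavior parameter can be sketched numerically. The snippet below is only an illustration under the common convex-combination form L_β = βL̂_e + (1−β)L̂_e*; the exact parameterization studied in the thesis may differ, and the names `hybrid_velocity`, `L_cur`, `L_des` are chosen here for the example.

```python
import numpy as np

def hybrid_velocity(e, L_cur, L_des, beta=0.5, lam=0.5):
    """Camera velocity for a hybrid visual servoing control law.

    e     : visual error s - s*, shape (k,)
    L_cur : interaction matrix estimated at the current pose, shape (k, 6)
    L_des : interaction matrix at the desired pose, shape (k, 6)
    beta  : behavior parameter in [0, 1]; beta = 1 recovers the classical
            current-matrix law, beta = 0 the desired-matrix law
    lam   : control gain
    """
    # Blend the two interaction matrices, then apply the usual
    # pseudo-inverse control law v = -lam * L^+ e.
    L_beta = beta * L_cur + (1.0 - beta) * L_des
    return -lam * np.linalg.pinv(L_beta) @ e
```

With beta = 0.5 this uses the pseudo-inverse of the mean of the two matrices, a blend known to behave well for combined translations and rotations along the optical axis; intermediate values let the designer trade off between the two classical behaviors.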

Theoretical contributions to redundancy-based task priority are achieved by developing a new projection operator obtained by considering only the norm of the main task. This leads to a less constrained problem and enlarges the applicability domain of redundancy-based task priority. New strategies for redundancy-based robot joint limits avoidance are developed, and the problem of adding further secondary tasks to the main task while ensuring joint limits avoidance is finally solved.
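The enlarged null space offered by a norm-based projection operator can be illustrated numerically. This is a sketch of the general idea only: the classical operator P = I − J⁺J preserves the whole main task, whereas an operator built from the one-dimensional Jacobian of ‖e‖ constrains only the decrease of the error norm and so frees more degrees of freedom for secondary tasks such as joint limits avoidance. The exact operator derived in the thesis may differ in detail, and the function names below are illustrative.

```python
import numpy as np

def classical_projector(J):
    """P = I - J^+ J: null-space projector of the full main-task Jacobian."""
    n = J.shape[1]
    return np.eye(n) - np.linalg.pinv(J) @ J

def norm_projector(J, e, eps=1e-9):
    """Projector built from the 1-D Jacobian of the task norm,
    J_n = (e^T / ||e||) J, which constrains only the decrease of ||e||."""
    n = J.shape[1]
    J_n = ((e / max(np.linalg.norm(e), eps)) @ J).reshape(1, n)
    return np.eye(n) - np.linalg.pinv(J_n) @ J_n

def control_with_secondary(J, e, g, lam=0.5):
    """q_dot = -lam * J^+ e + P g: main task plus a secondary task g
    (e.g. the negative gradient of a joint-limit cost function),
    projected so that it does not perturb the decrease of ||e||."""
    return -lam * np.linalg.pinv(J) @ e + norm_projector(J, e) @ g
```

For a 3-dimensional main task on a 6-DOF robot, the classical projector has rank 6 − 3 = 3, while the norm-based projector has rank 6 − 1 = 5: two extra degrees of freedom are available for the avoidance task.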

All of this work has been validated experimentally on an eye-in-hand system in visual servoing applications.