Hi @darnstrom ,
I came across the term "algorithm warm start" in the publication. I'm wondering whether it is conceptually equivalent to leaf-node caching in a balanced kD-tree?
https://robotik.informatik.uni-wuerzburg.de/telematics/download/3dim2007/node11.html
I am asking because, although the C++20 solver enables aggressive instruction caching and branch prediction on modern CPUs, the algorithm must still explicitly "walk" from the root of the binary decision tree to reach a leaf. This means all the dot products in the halfplane decision logic need to be recomputed on every iteration, even when the walk ends up at the same leaf node as before.
So, does "warm start" mean that the search begins from the binary tree's leaf node at the algorithmic level?
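To make concrete what I mean by leaf-node caching, here is a minimal sketch in the spirit of the linked kD-tree paper. This is purely illustrative (hypothetical structure, not your solver's actual code): each leaf stores the bounds of its cell, and a warm query first re-checks the cached leaf's cell in O(d) before falling back to the full root-to-leaf walk.

```cpp
#include <array>
#include <vector>

// Hypothetical 2-D kD-tree node (not the solver's actual data structure).
struct Node {
    int axis = -1;                    // split axis; -1 marks a leaf
    double split = 0.0;               // split coordinate for interior nodes
    int left = -1, right = -1;        // child indices
    int leaf_id = -1;                 // payload for leaves (e.g. region id)
    std::array<double, 2> lo{}, hi{}; // leaf cell bounds (filled for leaves)
};

struct KdTree {
    std::vector<Node> nodes; // nodes[0] is the root
    int cached = -1;         // index of the last leaf reached (warm-start hint)

    // Cold path: walk from the root, one comparison per level.
    int descend(const std::array<double, 2>& x) {
        int i = 0;
        while (nodes[i].axis >= 0)
            i = (x[nodes[i].axis] <= nodes[i].split) ? nodes[i].left
                                                     : nodes[i].right;
        cached = i;
        return nodes[i].leaf_id;
    }

    int query(const std::array<double, 2>& x) {
        if (cached >= 0) { // warm path: O(d) containment test on cached cell
            const Node& n = nodes[cached];
            bool inside = true;
            for (int d = 0; d < 2; ++d)
                if (x[d] < n.lo[d] || x[d] > n.hi[d]) { inside = false; break; }
            if (inside) return n.leaf_id; // same cell as last time: no walk
        }
        return descend(x); // parameters drifted out of the cell: full walk
    }
};
```

Under slow parameter drift, consecutive queries usually land in the same cell, so the warm path amortizes the tree depth away. My question is whether the paper's "warm start" refers to this kind of mechanism or to something at a different level of the algorithm.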
For your reference, researchers have been trying to add a hardware-accelerated L1/L2 cache to speed up the binary tree query, assuming slow drift of the parameters over time:
cvxgrp/cvxpygen@22f1166#diff-98b08bebd8a86904caf39076842af293adf3f46eb239d8dbf7e8b0b7500d66be
Regards,
Antony