Is there an established result for the convergence rate when solving $\ell_1$-regularized optimization via coordinate descent with a tiny step? By "tiny step" I mean the step size is always set to a very small positive constant, with none of the usual step-selection techniques.

*Quite slow*.

Convergence rates for LASSO-type estimators are always hard to get, mostly because you do not have good a priori knowledge of the smoothness of $f$: the $\ell_1$ term makes the objective non-differentiable (Ref. 1, Sec. 5; Ref. 3, Sec. 3). There are results on the convergence of coordinate descent for convex *differentiable* minimization stating that "*for a strictly convex function which is twice differentiable in its effective domain the convergence is at least linear*" (Ref. 2), but you cannot apply that to the $\ell_1$ objective directly. (That is actually the caveat missing from the answer you got on math.stackexchange.)
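To see why the fixed-tiny-step scheme is slow in practice, here is a minimal sketch (a hypothetical toy problem, not from any of the referenced papers) comparing coordinate descent with a fixed tiny step on a subgradient of the LASSO objective $\tfrac12\|Ax-b\|_2^2 + \lambda\|x\|_1$ against the standard exact coordinate update via soft-thresholding:

```python
import numpy as np

def soft_threshold(z, t):
    """Soft-thresholding operator, the prox of t*|.|."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def lasso_objective(A, b, x, lam):
    return 0.5 * np.sum((A @ x - b) ** 2) + lam * np.sum(np.abs(x))

def cd_fixed_step(A, b, lam, eta=1e-4, sweeps=200):
    """Coordinate descent with a fixed tiny step along a subgradient
    (the scheme asked about in the question)."""
    x = np.zeros(A.shape[1])
    for _ in range(sweeps):
        for j in range(x.size):
            r = A @ x - b
            g = A[:, j] @ r + lam * np.sign(x[j])  # subgradient (0 at x_j = 0)
            x[j] -= eta * g
    return x

def cd_exact(A, b, lam, sweeps=200):
    """Exact coordinate minimization via soft-thresholding
    (the usual LASSO coordinate-descent update)."""
    x = np.zeros(A.shape[1])
    col_sq = np.sum(A ** 2, axis=0)
    for _ in range(sweeps):
        for j in range(x.size):
            r = b - A @ x + A[:, j] * x[j]        # residual excluding coordinate j
            x[j] = soft_threshold(A[:, j] @ r, lam) / col_sq[j]
    return x

rng = np.random.default_rng(0)
A = rng.standard_normal((50, 10))
x_true = np.zeros(10); x_true[:3] = [2.0, -1.5, 1.0]
b = A @ x_true + 0.01 * rng.standard_normal(50)
lam = 0.5

obj_fix = lasso_objective(A, b, cd_fixed_step(A, b, lam), lam)
obj_ex = lasso_objective(A, b, cd_exact(A, b, lam), lam)
print(obj_fix, obj_ex)
```

With the same number of sweeps, the fixed tiny step typically leaves the objective well above what exact coordinate minimization reaches, and the subgradient term keeps the iterates oscillating around zero instead of landing exactly on it.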

Assuming you are interested in some actual application, this thread might be of interest to you: Stochastic coordinate descent for $\ell_1$ regularization. Note, though, that even in the stochastic CD variant the authors present, the step size is adapted rather than held fixed.

If you are determined to use a fixed step size, you might as well try steepest descent; since you are only using gradient information anyway, you might as well exploit it to the fullest. Check how a line search works and what the Wolfe conditions are; step selection (in simple cases) is not that hard to include in an optimization algorithm. :)
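For concreteness, here is a minimal sketch of steepest descent with a backtracking line search enforcing the Armijo (sufficient-decrease) condition, the first of the Wolfe conditions. The quadratic test problem and all names are illustrative; on the LASSO objective itself you would apply this only to the smooth part, since the line search assumes differentiability:

```python
import numpy as np

def backtracking_line_search(f, grad_f, x, d, alpha0=1.0, c1=1e-4, tau=0.5):
    """Backtracking line search enforcing the Armijo condition:
        f(x + a*d) <= f(x) + c1 * a * grad_f(x)^T d
    """
    a = alpha0
    fx = f(x)
    slope = grad_f(x) @ d            # directional derivative; < 0 for a descent direction
    while f(x + a * d) > fx + c1 * a * slope:
        a *= tau                     # shrink the step until sufficient decrease holds
    return a

def steepest_descent(f, grad_f, x0, tol=1e-8, max_iter=500):
    """Steepest descent with Armijo backtracking instead of a fixed step."""
    x = x0.copy()
    for _ in range(max_iter):
        g = grad_f(x)
        if np.linalg.norm(g) < tol:
            break
        a = backtracking_line_search(f, grad_f, x, -g)
        x = x - a * g
    return x

# Toy smooth problem (hypothetical): f(x) = 0.5 x^T Q x - b^T x
Q = np.array([[3.0, 0.5], [0.5, 1.0]])
b = np.array([1.0, -2.0])
f = lambda x: 0.5 * x @ Q @ x - b @ x
grad_f = lambda x: Q @ x - b

x_star = steepest_descent(f, grad_f, np.zeros(2))
print(x_star)
```

The point of the backtracking loop is that the step adapts to the local curvature automatically, which is exactly what a fixed tiny constant cannot do.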