Obviously, you need a calibration curve to convert the signal axis to a concentration axis.
It also bears mentioning that the LOD needs to be defined for a chosen confidence level, which determines both the number of blank replicates and the multiplier. Ten blanks and a multiplier of 3 is a commonly used quick rule of thumb, but the actual multiplier should come from a Student's t-distribution at the desired confidence level and degrees of freedom, and specific organizations may require higher or lower confidence levels for certain applications or methods. Certain assumptions are also implicit, such as a linear relationship between signal and concentration, a Gaussian distribution of the data points about the mean, and so forth. Finally, note that the LOD can change depending on how the blanks are treated: detection limits calculated from method blanks vs. instrument blanks are completely different things. I'm also sure you know that the limit of detection does not imply that a measured quantitative value is accurate or reliable at the LOD, but it doesn't hurt to state it as a reminder.
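To make the statistics concrete, here is a minimal sketch of a blank-based LOD calculation in Python. Everything in it is illustrative: the blank signals, the calibration slope, and the t-value are made up for the example (the t-value would normally come from a t-table or statistics package for your chosen confidence level and degrees of freedom).

```python
import statistics

def lod(blanks, slope, t_value):
    """Estimate the limit of detection from blank replicates.

    blanks:  list of blank signal measurements
    slope:   calibration curve slope (signal per concentration unit),
             used to convert the signal-domain LOD to concentration
    t_value: one-tailed Student's t for the chosen confidence level
             and df = len(blanks) - 1
    """
    s_blank = statistics.stdev(blanks)   # sample std dev of the blanks
    return t_value * s_blank / slope     # LOD in concentration units

# Hypothetical blank signals (e.g., instrument counts)
blanks = [0.011, 0.009, 0.012, 0.010, 0.008,
          0.011, 0.013, 0.009, 0.010, 0.012]
slope = 0.50  # assumed calibration slope, counts per ppm

# n = 10 blanks -> df = 9; one-tailed t at 99% confidence is 2.821
# (from a t-table) -- not far from the "multiplier of 3" rule of thumb
print(round(lod(blanks, slope, 2.821), 4))
```

Note how the common multiplier of 3 roughly corresponds to a one-tailed t-value at high confidence for around ten blanks; with fewer replicates or a different confidence level, the correct multiplier can differ substantially.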
Unfortunately these analytical terms are frequently not well-defined, and people often use the "10 blanks, multiplier of 3" rule of thumb without understanding where it comes from or what it means statistically. For most casual use (basic research, for example) it doesn't matter a whole lot, but if you're doing analysis with regulatory or legal implications, and you're anywhere near the limits of your method in terms of analyte concentration, you can't ignore those kinds of things.