Your work has been very inspirational to my own, and I would like to ask how you express cross-modal referential relations for regional information. For example, how do you ask questions across modalities using coordinates?
For example, the BLIP model fuses image and text information after tokenization. How do you perform this kind of fusion?
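To make my question concrete: by "fusion" I mean something like BLIP's cross-attention, where text tokens attend to image patch tokens. A minimal NumPy sketch of that idea (toy dimensions and names are mine, not from your code or BLIP's actual implementation):

```python
import numpy as np

def softmax(x, axis=-1):
    # numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_attention(text_tokens, image_tokens):
    # queries come from text, keys/values from image:
    # each text token gathers a weighted mix of image patch features
    d = text_tokens.shape[-1]
    scores = text_tokens @ image_tokens.T / np.sqrt(d)
    weights = softmax(scores, axis=-1)
    return weights @ image_tokens

rng = np.random.default_rng(0)
text = rng.normal(size=(5, 8))    # 5 text tokens, dim 8
image = rng.normal(size=(16, 8))  # 16 image patch tokens, dim 8
fused = cross_attention(text, image)
print(fused.shape)  # (5, 8): one fused vector per text token
```

I am curious whether your approach injects coordinate information into such an attention step (e.g. as extra tokens or positional encodings), or fuses the modalities some other way.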
Thank you very much for taking time out of your busy schedule to look at my question. :)