Currently, the FeatureBuilder exports all outputs at the very end: it computes the chat-, conversation-, and speaker-level features, and the export step runs only once all three are done. Under the hood, however, the computation happens sequentially; the chat-level (utterance-level) features finish first, and they are later aggregated into the conversation-level and then the speaker-level features.
We don't need to make the user wait until the conversation- and speaker-level features are done before exporting features that have already been computed. This Issue proposes short-circuiting the export step so that users get each export as soon as it is generated.
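A minimal sketch of the proposed change, assuming a pipeline shaped like the one described above. The class structure, method names (`_export`, `_compute_chat_level`, etc.), and the dict-based "features" are all illustrative stand-ins, not the actual FeatureBuilder API; the point is only where the export calls move.

```python
class FeatureBuilder:
    """Illustrative pipeline; real FeatureBuilder internals differ."""

    def __init__(self):
        self.exported = []  # records export order for demonstration

    def _export(self, name, features):
        # In the real toolkit this would write the output (e.g. a CSV);
        # here we just record that the level was exported.
        self.exported.append(name)

    def featurize(self):
        # Current behavior: compute all three levels, then export once at
        # the end. Proposed behavior (shown here): export each level as
        # soon as it finishes, so chat-level output is available while the
        # conversation- and speaker-level aggregation is still running.
        chat = self._compute_chat_level()
        self._export("chat", chat)  # available immediately

        conversation = self._compute_conversation_level(chat)
        self._export("conversation", conversation)

        speaker = self._compute_speaker_level(chat)
        self._export("speaker", speaker)

    # Placeholder computations standing in for the real feature stages.
    def _compute_chat_level(self):
        return {"level": "chat"}

    def _compute_conversation_level(self, chat):
        return {"level": "conversation", "aggregated_from": chat["level"]}

    def _compute_speaker_level(self, chat):
        return {"level": "speaker", "aggregated_from": chat["level"]}


builder = FeatureBuilder()
builder.featurize()
print(builder.exported)  # exports arrive per level, in computation order
```

Because each `_export` call happens right after its stage completes, a user watching the output directory would see the chat-level file land first rather than waiting for the whole run to finish.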