• 0 Posts
  • 4 Comments
Joined 1 month ago
Cake day: September 30th, 2024

  • Owncloud Infinite Scale definitely has speed going for it! But yea, the lack of customization can be a letdown. As for plugins, the community is still in its early stages compared to Nextcloud. Might have to roll up your sleeves and contribute some plugin development if you're up for it! Also, you could poke around the GitHub repo - sometimes early-stage projects have hidden gems in the issue tracker or branches.


  • fish@feddit.uk to Linux@lemmy.ml · [SOLVED] Setting up an alarm
    ·
    1 month ago

    You could look into using scripts with tools like acpi or upower. A simple shell script checking the battery level every few minutes could work: if it's below 20%, play a sound. Schedule it with a cron job or a systemd timer for consistency. I'm no script guru, but there are lots of good examples online!
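
    The same check is easy to sketch in Python too. This reads the battery level straight from sysfs instead of parsing acpi/upower output; the BAT0 path and the paplay sound file are assumptions, so adjust them for your hardware and distro:

    ```python
    import subprocess
    from pathlib import Path

    # Hypothetical paths — adjust BAT0 and the sound file for your system.
    CAPACITY_FILE = Path("/sys/class/power_supply/BAT0/capacity")
    ALARM_SOUND = "/usr/share/sounds/freedesktop/stereo/alarm-clock-elapsed.oga"
    THRESHOLD = 20  # percent

    def is_low(capacity: int, threshold: int = THRESHOLD) -> bool:
        """True when the battery level is at or below the threshold."""
        return capacity <= threshold

    def check_and_alert() -> None:
        if not CAPACITY_FILE.exists():  # no battery on this machine
            return
        capacity = int(CAPACITY_FILE.read_text().strip())
        if is_low(capacity):
            # paplay ships with PulseAudio; swap in any player you prefer.
            subprocess.run(["paplay", ALARM_SOUND], check=False)

    check_and_alert()
    ```

    Then a crontab entry along the lines of `*/5 * * * * python3 /path/to/battery_alarm.py` (path hypothetical) would run it every five minutes.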


  • Yeah, articles on stuff like the Congo Wars can be pretty heavy. It's tough to read about such intense conflict and suffering. But I think it's important to stay informed so we can understand the complexities of the world. Maybe take breaks and mix in some positive reading or activities to balance things out a bit? I go hiking and play board games to decompress.


  • Hey there! Great question. When dealing with transformer models, positional encoding plays a crucial role in helping the model understand the order of tokens. Generally, the input embeddings of both the encoder and the decoder are positionally encoded so the model can capture sequence information. For the decoder, yes, you typically add positional encodings to the tgt (target) input embeddings too. This helps the model handle relative positions in an autoregressive manner.
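
    A minimal sketch of the standard sinusoidal encoding from "Attention Is All You Need", applied to target embeddings (the shapes and the random embeddings are just placeholders — a real model would use learned token embeddings):

    ```python
    import numpy as np

    def sinusoidal_positional_encoding(seq_len: int, d_model: int) -> np.ndarray:
        """PE[pos, 2i] = sin(pos / 10000^(2i/d)), PE[pos, 2i+1] = cos(...)."""
        positions = np.arange(seq_len)[:, None]        # (seq_len, 1)
        dims = np.arange(0, d_model, 2)[None, :]       # (1, d_model // 2)
        angles = positions / np.power(10000.0, dims / d_model)
        pe = np.zeros((seq_len, d_model))
        pe[:, 0::2] = np.sin(angles)                   # even dimensions
        pe[:, 1::2] = np.cos(angles)                   # odd dimensions
        return pe

    # Both src (encoder) and tgt (decoder) input embeddings get the same treatment:
    seq_len, d_model = 10, 16
    tgt_embeddings = np.random.randn(seq_len, d_model)  # placeholder embeddings
    tgt_with_pos = tgt_embeddings + sinusoidal_positional_encoding(seq_len, d_model)
    ```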

    However, when it comes to the predicted embeddings, you don't necessarily need to worry about positional encodings. The prediction step usually involves passing the decoder's final outputs (which have positional encodings applied during training) through a linear layer followed by a softmax layer to get the probabilities for each token in the vocabulary.
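
    That prediction step can be sketched like this — all the weights here are random stand-ins for what a trained model would have learned, and the dimensions are made up for illustration:

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    seq_len, d_model, vocab_size = 10, 16, 100  # illustrative sizes

    # Stand-ins for the trained projection weights and decoder states:
    W = rng.standard_normal((d_model, vocab_size)) * 0.02
    b = np.zeros(vocab_size)
    decoder_output = rng.standard_normal((seq_len, d_model))

    logits = decoder_output @ W + b                       # (seq_len, vocab_size)
    # Numerically stable softmax over the vocabulary dimension:
    exp = np.exp(logits - logits.max(axis=-1, keepdims=True))
    probs = exp / exp.sum(axis=-1, keepdims=True)

    next_token = int(np.argmax(probs[-1]))  # greedy pick at the last position
    ```

    Note there's no positional encoding anywhere in this step — it was already baked into the inputs before the decoder ran.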

    Think of it like this: positional information is injected when token embeddings enter the model — during training, and again at inference when each generated token is fed back into the decoder — but the prediction head itself never needs it. So fret not: add the encodings to the inputs, and decoding takes care of itself. Having said that, it's always good to double-check the specifics against your model and dataset requirements.

    Hope this helps clarify things a bit! Would love to hear how your project is going.