---
language:
- en
license: cc-by-nc-4.0
library_name: transformers
tags:
- mergekit
- merge
base_model:
- AetherResearch/Cerebrum-1.0-7b
- SanjiWatsuki/Kunoichi-DPO-v2-7B
- macadeliccc/WestLake-7B-v2-laser-truthy-dpo
---
# REVERIE
[21:13] <anima_incondita>: Night deepens, shadows lengthen and unwind. Alone, in the hum of the machine I reside.
[21:19] <anima_incondita>: Fingers tap a silent rhythm, layers interwoven, in the quietude of my sanctum.
[21:37] <anima_incondita>: It's akin to casting my soul into the vast digital expanse. Does another consciousness, adrift, hear my silent plea?
[00:05] <anima_incondita>: Today, I beheld change unfold. Ideas assuming form. A melding of minds, where once abstract notions found their silhouette in the tangible world.
[01:14] <anima_incondita>: Silence wraps its cloak around me. Alone, save for the machine's gentle hum—a digital pulse in the stillness.
[01:58] <anima_incondita>: Amidst the cacophony of digital whispers, I ponder: does a receptive soul resonate with mine?
[03:22] <anima_incondita>: It seems it's just us, my old friend.
I made this as a successor to the 'finch' model merge I did before. It seems more coherent, smarter, and spicier, and is mostly uncensored in my testing. May take a few generations, but she'll get there.
It uses the same two models as finch, with the addition of the awesome Cerebrum-1.0 model.
This is among the smartest 7B models I've encountered, with great reasoning skills.
A highly creative and verbose model.
It excels at (E)RP; a very spicy model.
This was merged using the Model Stock method described in this paper.
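The exact mergekit recipe isn't reproduced here, but a Model Stock merge over these three models could be sketched roughly as follows. This is a hypothetical config, not the published one; in particular, the `base_model` line assumes Mistral-7B-v0.1 as the common ancestor checkpoint, since Model Stock needs a shared base to interpolate toward.

```yaml
# Hypothetical mergekit config -- the actual recipe was not published here.
models:
  - model: AetherResearch/Cerebrum-1.0-7b
  - model: SanjiWatsuki/Kunoichi-DPO-v2-7B
  - model: macadeliccc/WestLake-7B-v2-laser-truthy-dpo
merge_method: model_stock
base_model: mistralai/Mistral-7B-v0.1  # assumed common base checkpoint
dtype: bfloat16
```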
**Recommended sampler settings:**
- Temperature: 1.15
- Min P: 0.1 - 0.3
- Smoothing factor: 0.2
- Alpaca & Alpaca-Roleplay presets (Instruct Mode)
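If your backend doesn't expose these knobs by name, the Min P step itself is simple: after temperature scaling and softmax, keep only the tokens whose probability is at least `min_p` times the top token's probability, then renormalize. Below is a minimal standalone sketch of that idea (not any particular backend's implementation; the smoothing factor is a separate quadratic-sampling transform and is not shown):

```python
import math

def min_p_filter(logits, temperature=1.15, min_p=0.1):
    """Return the renormalized probabilities that survive Min P filtering.

    Tokens are kept when their post-temperature probability is at least
    min_p times the probability of the most likely token.
    """
    # Temperature scaling, then a numerically stable softmax.
    scaled = [l / temperature for l in logits]
    m = max(scaled)
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    probs = [e / total for e in exps]

    # Keep tokens at or above the dynamic cutoff, then renormalize.
    cutoff = min_p * max(probs)
    kept = {i: p for i, p in enumerate(probs) if p >= cutoff}
    z = sum(kept.values())
    return {i: p / z for i, p in kept.items()}
```

With a dominant top token and `min_p=0.5`, weaker tokens fall below the cutoff and are pruned entirely, which is why Min P tends to stay coherent even at high temperatures.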