Technologique


Telegram Description

A channel for deeply involved developers, covering various aspects, trends & concepts of programming technologies: FLOSS, Linux, security, cloud infrastructure & DevOps practices, distributed systems, data warehousing & analysis, DL/ML, web3, etc. Author: @andrcmdr

Latest Channel Posts

OpenClaw Agentic Payment Skill

aka OpenClaw Payments Agent with Policy Engine and Audit Trail

https://github.com/sentient-agi/openclaw-payments-agent

The hype is real! 🔥🔥🚀

Roughly two weeks of development.
My cleanest software system design and architecture so far!
And on the other hand - my fastest and roughest prototype delivery.
So take it with a grain of salt, because I haven't used the TypeScript/ECMAScript/Node/NPM ecosystem for ages (in terms of how the ecosystem has evolved)!
Hence I leaned heavily on vibe coding with Claude Agent to atomically rewrite the closest previous version - a self-sufficient payment agent/bot written in Rust and Mojo.

#AI
#Agent
#Agents
#OpenClaw
2026-02-15T02:49:13+00:00
https://youtu.be/YFjfBk8HI5o

OpenClaw - the next big thing in AI agentic systems, revolutionizing the architecture of AI agents by letting you build them like Lego blocks, using skills and a skill-hub ecosystem for builders.

#AI
#Agent
#Agents
2026-02-14T23:46:36+00:00
My year on #GitHub, from December 31st, 2024 'till December 31st, 2025.
Working as a systems developer (using Rust) at an AI startup (Sentient, https://www.sentient.xyz), on confidential AI infrastructure and engines (for CVMs and TEEs), and on blockchain backends (L1/L2). Contributing to open source.
(Rested only at the beginning of January (New Year's holidays) and at the beginning of May (May holidays).)
Filter the noise, focus only on the signal. Be steady and stay consistent in your efforts! Everything is reachable!
Making Open Source AI/AGI Win!
2025-12-30T22:07:14+00:00
I've done a ton of work before New Year on the Enclaves Framework and the CDK Dev Stack (formerly CDK SOA Backend), and closed most of the tech debt.

Made a new init system in Rust (systemd-inspired) for provisioning inside enclaves (services, processes).

Started development of the Enclave's Engine. This component provisions enclaves on the host. (Think of it as Docker Engine with an API, Docker Compose, YAML configurations, and the containerd runtime, but for secure enclaves.) The first iteration is already published.

For now, the Enclaves Framework is a turnkey solution for AWS Nitro Enclaves: building custom Nitro Enclave images (with a custom kernel, init, SLC, proxies, attestation server, and other components) with reproducible builds (supply-chain security).

With the Enclaves Engine, the goal is to reach the same level of usability for confidential VMs based on KVM, QEMU, and the Firecracker VMM (think of it as your own self-hosted enclaves platform as a turnkey solution).
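To make the Docker Compose analogy concrete, a manifest for such an engine could look roughly like this. The schema below is purely hypothetical - it is not the actual Enclave's Engine configuration format:

```yaml
# Hypothetical enclave manifest - illustrative schema only,
# not the actual Enclave's Engine configuration format.
enclaves:
  app-enclave:
    image: ./images/app-enclave.eif
    cpus: 2
    memory_mib: 4096
    vsock:
      cid: 16
    attestation:
      server: enabled
```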

So, delivering a Docker-like developer experience for enclaves - this motto is advancing with these recent efforts! 🙌

https://github.com/sentient-agi/Sentient-Enclaves-Framework

Some of my experiments will be here in my own profile:

https://github.com/andrcmdr/secure-enclaves-framework

https://github.com/andrcmdr/cdk-dev-stack

Covering everything with exhaustive, comprehensive documentation - the amount of documentation (in lines) has already exceeded the amount of code! That's funny! 😁

Refactored the main components - the Pipeline Secure Local Channel protocol (over VSock) client-server implementation, the set of VSock-TCP proxies, and the Remote Attestation Web Server: added proper error handling and structured logging with tracing for all components, implemented dynamic VSock buffer allocation for the Pipeline SLC, and refactored the RA Web Server to make it modular.

Published a paper about multi-hop re-encryption and delegated decryption - on the cryptographic difficulties of content protection and DRM as applied to AI content producers and consumers (for community-driven AI).

And published another paper about GPU TEEs, attestation, and coherent and unified memory, and how they cause the current scalability difficulties for TEE systems.

https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/multi_hop_reencryption.md

https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/multi_hop_reencryption_for_data_protection.proto.rs.md

https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/unified_vs_discrete_memory_for_confidential_ai_and_cvms.md

https://github.com/sentient-agi/Sentient-Enclaves-Framework/blob/main/docs/unified_vs_discrete_memory_for_confidential_ai_and_cvms_2nd_iteration.md

If any of this sparks your interest - give me a hint and text me! I'm looking for TEE companies that will also adopt and use the Enclaves Framework and Enclaves Engine.

I think that providing a container-like (Docker-grade) developer and user experience for enclave technologies (hardware isolation and memory encryption) for AI and crypto apps, and lowering the entry barrier to hardware isolation technologies, is a great mission and the ultimate data-security goal (especially in the context of cryptography and in-memory secrets protection) for the upcoming decade.

So, feel free to reach out if this is interesting for you as well!

#Enclaves
#TEE
#AI
#Cryptography
#Crypto
2025-12-30T22:04:56+00:00
The best local LLM inference setup:
4x Mac Studio (M3 Ultra, 512 GB of unified RAM each) - 2 TB of UMA RAM with RDMA
EXO 1.0 tooling for clustering, now with tensor parallelism enabled!
RDMA (Remote Direct Memory Access) through Thunderbolt 5 - the clustering bottleneck eliminated
MLX inference acceleration (now with RDMA support!)
And... macOS 26.2

https://www.youtube.com/watch?v=A0onppIyHEg&t=3m10s

DeepSeek v3.2 at 8-bit quantization (the original training quantization), running at 25 tokens per second! Wow!

516 watts at peak power usage!

Downside: a cost of 50K USD for the hardware. Still better than one or several H100/H200/B200 with their limited, non-unified discrete memory architecture! =)
And such a setup will also work with far cheaper Mac Minis (no RDMA over Thunderbolt 5 there yet, but it will be added to new generations of M chips; for now it's available on the M4 Pro and higher, and on the M3 Ultra)!

Apple is way ahead of everyone again!

In a couple of years this will be a common consumer setup for local LLM inference on conventional hardware - APUs from AMD and from Intel+NVidia (with an integrated CPU+GPU NVLink bus, an upcoming APU architecture) - while Apple and NVidia will use Intel fabs and TSMC fabrication.

Enclaves/TEEs for hardware memory encryption will be part of such setups, enabling confidential computing over sensitive confidential data.

#CPU
#GPU
#LLM
#TEE
2025-12-20T06:01:47+00:00
