# ComfyUI BLIP model notes

Collected documentation, installation steps, and GitHub issue reports for using BLIP (Bootstrapping Language-Image Pre-training) caption and visual-question-answering models inside ComfyUI, chiefly through the WAS Node Suite and the `comfy_clip_blip_node` custom node.
## The BLIP nodes (WAS Node Suite)

- BLIP Model Loader: loads a BLIP model that you can feed as an optional input to the BLIP Analyze node. It is designed to efficiently load and manage BLIP models for captioning and interrogation tasks: it makes sure the required packages are installed, handles retrieval and initialization of the BLIP model, and provides a streamlined interface for model access within the WAS suite.
- BLIP Analyze Image: gets a text caption from an image, or interrogates the image with a question. Supports tagging and outputting multiple batched inputs.

The model will download automatically from the default URL, but you can point the download to another location or caption model in `was_suite_config`. Make sure you have Python 3.10+ installed, along with PyTorch with CUDA support if you're using a GPU.

Here's a breakdown of how the analysis is done:

- Model: loads the BLIP model and moves it to the GPU (`cuda`).
- Processor: converts the image and question into input tensors for the model.
- Singleton: ensures that the model and processor are initialized only once; the `Blip` class uses a singleton pattern so the model is loaded a single time.
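A minimal sketch of that structure, assuming the Hugging Face `transformers` BLIP classes; this is illustrative only, not the node's actual code:

```python
import torch
from transformers import BlipProcessor, BlipForConditionalGeneration

class Blip:
    _instance = None  # singleton: load the model and processor exactly once

    def __new__(cls):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            name = "Salesforce/blip-image-captioning-base"
            cls._instance.processor = BlipProcessor.from_pretrained(name)
            cls._instance.model = (
                BlipForConditionalGeneration.from_pretrained(name).to("cuda")
            )
        return cls._instance

    @torch.no_grad()
    def caption(self, image):
        # The processor converts the PIL image into input tensors for the model
        inputs = self.processor(images=image, return_tensors="pt").to("cuda")
        out = self.model.generate(**inputs, max_new_tokens=50)
        return self.processor.decode(out[0], skip_special_tokens=True)
```

Usage would look like `Blip().caption(Image.open("input/example.jpg").convert("RGB"))` with a PIL image; repeated calls reuse the already-loaded model.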
## Embedding BLIP text in prompts

Optional: if you want to embed the BLIP text in a prompt, use the keyword BLIP_TEXT (e.g. `"a photo of BLIP_TEXT", medium shot, intricate details, highly detailed`). Note that the two model boxes in the node cannot be freely selected; only `Salesforce/blip-image-captioning-base` and `Salesforce/blip-vqa-base` are available.
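The keyword works as plain string substitution before the text is encoded; a trivial sketch of the idea (not the node's actual code):

```python
caption = "a cozy cabin in the woods"  # output of BLIP Analyze Image
template = '"a photo of BLIP_TEXT", medium shot, intricate details, highly detailed'
prompt = template.replace("BLIP_TEXT", caption)
# -> "a photo of a cozy cabin in the woods", medium shot, intricate details, highly detailed
```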
## Other caption nodes and models

- Llava Captioner: add the node via image -> LlavaCaptioner. Parameters: `model`, the multimodal LLM to use (people are most familiar with LLaVA, but there's also Obsidian, BakLLaVA, or ShareGPT4); `mmproj`, the multimodal projection that goes with the model; `prompt`, the question to ask the LLM; `max_tokens`, the maximum length of the response, in tokens.
- Florence-2 PromptGen (`MiaoshouAI/Florence-2-base-PromptGen-v1.5`): the downloaded model is placed under the `ComfyUI/LLM` folder. If you want to use a new version of PromptGen, simply delete the model folder and relaunch the ComfyUI workflow (a helper sketch follows this list). After adding a caption model, click the Refresh button in ComfyUI, then select it with the node's `model_name` field; if you can't see the generator, restart ComfyUI.
- MiniCPM-V-2 (Chinese & English): a strong multimodal large language model for efficient end-side deployment; trained on HuggingFaceM4/VQAv2, RLHF-V-Dataset, and LLaVA-Instruct-150K; roughly 6.8 GB.
- BLIP Diffusion: an open question asks whether ComfyUI will get BLIP Diffusion support any time soon. It's a new kind of model that uses SD (and maybe SDXL in the future) as a backbone and is capable of zero-shot subject-driven generation and image blending at a level much higher than IP-Adapter.
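A hedged helper for the "delete the model folder and relaunch" step above; the folder name under `ComfyUI/LLM` is an assumption, so check what your install actually created:

```python
import shutil
from pathlib import Path

llm_dir = Path("ComfyUI") / "LLM"
model_dir = llm_dir / "Florence-2-base-PromptGen-v1.5"  # hypothetical folder name

if model_dir.exists():
    shutil.rmtree(model_dir)  # the node re-downloads the new version on the next run
    print(f"Removed {model_dir}; relaunch the ComfyUI workflow to re-download.")
```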
## Dependencies

- Fairscale>=0.4.4 (NOT in ComfyUI)
- Transformers==4.26.1 (already in ComfyUI)
- Timm>=0.4.12 (already in ComfyUI)
- Gitpython (already in ComfyUI)

Transformers was pinned because 4.26.1 is the last version whose BLIP code works with the node. The pin has since been removed, so normal PyPI versions should now work, though the node code may still need updating.

## Installation

Follow the ComfyUI manual installation instructions for Windows and Linux. Then, inside `ComfyUI_windows_portable\ComfyUI\custom_nodes\`, run:

    git clone https://github.com/paulo-coronado/comfy_clip_blip_node

Install the dependencies from inside `ComfyUI_windows_portable\python_embeded`, and launch ComfyUI by running `python main.py`. For Google Colab, add a cell anywhere with the following code:

    !pip install fairscale

If you have another Stable Diffusion UI you might be able to reuse the dependencies. Many custom nodes can also be installed through ComfyUI-Manager, an extension that offers management functions to install, remove, disable, and enable custom nodes, plus a hub feature and convenience functions to access a wide range of information within ComfyUI.
## Troubleshooting

- Manager problems: some users report being unable to install any nodes or updates at all. One reported failure while installing the BLIP node stops during dependency setup: "WAS NS: Installing BLIP dependencies ... Using Legacy `transformImage()`", ending in a `Traceback (most recent call last)`.
- Shape mismatch during analysis: `attention_scores = torch.matmul(query_layer, key_layer.transpose(-1, -2))` fails in `was-node-suite-comfyui/repos/BLIP/models/med.py` (line 178). This happens for both the annotate and the interrogate model/mode; just the tensor sizes are different in the two cases.
- Checkpoint loading: `load_checkpoint` in `comfy_clip_blip_node/models/blip.py` (line 218, `checkpoint = torch.load(cached_file, map_location='cpu')`) can fail after `model = blip_decoder(pretrained=model_url, image_size=size, vit="base")`. The model should download automatically the first time you use the node, but due to network issues the Hugging Face download sometimes fails repeatedly; in that case you can manually download the model and point the node at it via `was_suite_config`.
- Style models: `flux1-redux` can be rejected with `!!! Exception during processing !!! invalid style model` (reported 2024-11-22).
- CLIP Interrogator: loading `EVA01-g-14/laion400m_s11b_b41k` with the `blip-large` caption model can abort with `Unknown model (eva_giant_patch14_224)`; judging by the error text, the installed `timm` is likely too old to know that architecture.
- Black images and crashes: forcing fp32 eliminated 99% of them for one user, e.g. launching with `.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build --force-fp32`. The `--force-fp32` setting makes the VAE, UNet, and text encoder run in fp32, the most accurate but slowest option for generation. A Windows reboot may also be needed if generation suddenly seems slow.
- Low VRAM: `--cpu` was used to upscale on a Quadro K620 with only 2 GB of VRAM, though one later report says the `--cpu` key stopped working.
## Caption workflows

- Image remix workflow using BLIP: made while investigating the BLIP nodes, it grabs the theme off an existing image; using concatenate nodes, you can then add and remove features.
- Tagging pipeline: apply BLIP and WD14 to get captions and tags, then merge captions and tags (in that order) into a new string. One variant: merge BLIP + WD14 + a custom prompt into a new string and name it "Prompt A", then create "Prompt B" as an improved (edited, manual) version of Prompt A; an extra text box holds custom tokens or magic prompts. A sketch of the merge step follows this list.
- Example BLIP captions over a dataset folder (one `file, caption` pair per line):
  - `datasets\1002.jpg, a tortoise on a white background with a white background`
  - `datasets\1005.jpg, a piece of cheese with figs and a piece of cheese`
  - `datasets\1008.jpg, a planter filled with lots of colorful flowers`
- Like the BLIP Model Loader, MiDaS Depth Approx now has a MiDaS Model Loader node too.
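A minimal sketch of that merge step, assuming plain strings as inputs; the tag values here are made up for illustration:

```python
def merge_caption_and_tags(caption: str, tags: list[str], custom: str = "") -> str:
    parts = [caption, ", ".join(tags)]  # captions first, then tags, in that order
    if custom:
        parts.append(custom)            # optional custom tokens / magic prompts
    return ", ".join(p for p in parts if p)

prompt_a = merge_caption_and_tags(
    "a planter filled with lots of colorful flowers",
    ["flower", "plant", "outdoors"],   # hypothetical WD14 tags
    "masterpiece, best quality",
)
print(prompt_a)
```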
## Per-repository device configuration

It is easy to change the device for all custom nodes from the same repository: just use the directory name inside the `custom_nodes` directory as the key (individual node modules can be targeted the same way). For example, in `ComfyUI/jncomfy.yaml`:

```yaml
# ComfyUI/jncomfy.yaml
extension_device:
  comfyui_controlnet_aux: cpu
  jn_comfyui.extra.nodes.facerestore: cpu
  jn_comfyui.extra.nodes.facelib: cpu
```

## Fine-tuning and evaluating BLIP (upstream repository)

Download the VQA v2 dataset and Visual Genome dataset from the original websites, and set `vqa_root` and `vg_root` in `configs/vqa.yaml`. To evaluate the finetuned BLIP model, generate results with the repository's script (evaluation needs to be performed on the official server).
## Model management notes and related nodes

- All models you have in the `unet\FLUX1` folder can be moved to `diffusion_models\FLUX1`, since ComfyUI treats them as the same folder (`diffusion_models` was created to replace `unet`). You can simply delete the duplicated files in `unet` if the same files exist in `diffusion_models`.
- The models directory is also written as the `base_path` in `extra_model_config.yaml`; the Desktop app looks for model checkpoints there by default, but you can add additional locations to the search path by editing this file.
- Checkpoints are usually all-in-one models that can contain the diffusion model, CLIP, and VAE. A companion custom node converts only the diffusion model part or the CLIP part to fp8: you do not need to separate unet/clip/vae in advance, and can use the safetensors files that ComfyUI provides. VAE fp8 conversion is not supported. A conversion sketch follows this list.
- ComfyUI-Model-Manager: browse, download, and delete models; search bar in the models tab with advanced keyword search ("multiple words in quotes" or a minus sign to -exclude); includes models listed in ComfyUI's `extra_model_paths.yaml`; searches subdirectories of model directories based on your file structure (for example, `/styles/clothing`); previews (tooltip and modal modes), per-model settings, and a button to copy a model to the ComfyUI clipboard or an embedding to the system clipboard.
- Redux StyleModelApply: adds more controls for FLUX style models, with enhanced prompt influence when reducing style strength and better balance between the style reference image and the text prompt.
- BLIP-2: the Replicate `andreasjansson/blip-2` node integrates BLIP-2 through Replicate's API. This matters because a lot of people still use BLIP, and most can't run BLIP-2 locally.
- Pic2Story: a simple ComfyUI node based on the BLIP method with image-to-text functionality. ComfyUI-AutoLabel likewise uses BLIP to generate detailed descriptions of the main object in an image.
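A hedged sketch of what such an fp8 conversion step could look like (requires PyTorch 2.1+ for the float8 dtypes); the key prefix is an assumption, and the actual node's logic and naming differ:

```python
import torch
from safetensors.torch import load_file, save_file

def cast_diffusion_to_fp8(path_in: str, path_out: str) -> None:
    sd = load_file(path_in)  # all-in-one checkpoint: diffusion model + CLIP + VAE
    out = {}
    for key, tensor in sd.items():
        # Only cast the diffusion-model part; VAE fp8 conversion is not supported.
        if key.startswith("model.diffusion_model.") and tensor.is_floating_point():
            out[key] = tensor.to(torch.float8_e5m2)
        else:
            out[key] = tensor
    save_file(out, path_out)
```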
## Acknowledgement

The implementation of CLIPTextEncodeBLIP relies on resources from BLIP, ALBEF, Huggingface Transformers, and timm. Reference: "BLIP: Bootstrapping Language-Image Pre-training for Unified Vision-Language Understanding and Generation".