Add files via upload

Masih Moafi 2024-04-24 10:02:59 +03:30 committed by GitHub
parent 52781c3ea4
commit 83605070a0

LLM.ipynb (new file, 239 lines)

@@ -0,0 +1,239 @@
{
"cells": [
{
"cell_type": "code",
"execution_count": 27,
"id": "36a09b01-3bb2-4a9f-ac22-354793704d60",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from transformers import pipeline"
]
},
{
"cell_type": "code",
"execution_count": 28,
"id": "a8674877-8f68-40c1-8a82-7ee881777a14",
"metadata": {},
"outputs": [],
"source": [
"translator = pipeline(task=\"translation\",\n",
" model=\"facebook/nllb-200-distilled-600M\",\n",
" torch_dtype=torch.bfloat16) "
]
},
{
"cell_type": "code",
"execution_count": 29,
"id": "824e1154-a60d-46fc-987f-812999342724",
"metadata": {},
"outputs": [],
"source": [
"text = \"\"\"\\\n",
"My puppy is adorable, \\\n",
"Your kitten is cute.\n",
"Her panda is friendly.\n",
"His llama is thoughtful. \\\n",
"We all have nice pets!\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 30,
"id": "5241727b-0821-4c95-8800-6df6313c7c64",
"metadata": {},
"outputs": [],
"source": [
"text_translated = translator(text,\n",
" src_lang=\"eng_Latn\",\n",
" tgt_lang=\"fra_Latn\")"
]
},
{
"cell_type": "code",
"execution_count": 31,
"id": "cb0e5fd4-0e82-4dcf-8b2a-05d67e49e75e",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'translation_text': 'Mon chiot est adorable, ton chaton est mignon, son panda est ami, sa lamme est attentive, nous avons tous de beaux animaux de compagnie.'}]"
]
},
"execution_count": 31,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"text_translated"
]
},
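{
"cell_type": "markdown",
"id": "nllb-langs-note",
"metadata": {},
"source": [
"NLLB addresses languages by FLORES-200 codes (language plus script), e.g. `eng_Latn`, `fra_Latn`. As a rough sketch (not part of the original notebook), the same pipeline should handle other targets, such as German via `deu_Latn`:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "nllb-target-variant",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: reuse the translator with a different FLORES-200 target\n",
"# code; deu_Latn (German) is assumed to be supported by this NLLB checkpoint.\n",
"text_translated_de = translator(text,\n",
"                                src_lang=\"eng_Latn\",\n",
"                                tgt_lang=\"deu_Latn\")\n",
"text_translated_de"
]
},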
{
"cell_type": "code",
"execution_count": 32,
"id": "604bd794-cb69-4226-8d09-61977034c7f6",
"metadata": {},
"outputs": [],
"source": [
"summarizer = pipeline(task=\"summarization\",\n",
" model=\"facebook/bart-large-cnn\",\n",
" torch_dtype=torch.bfloat16)"
]
},
{
"cell_type": "code",
"execution_count": 33,
"id": "c376f7cb-ecc9-43b2-bb0c-a04687795bff",
"metadata": {},
"outputs": [],
"source": [
"text = \"\"\"\t1) People throughout history have been distracted. There is nothing uniquely distracting about modern technology. We are suffering from a crisis of attention because we don't allow ourselves a mental break. In particular, we no longer daydream, which is a beneficial type of spontaneous thought.\n",
"Were bombarded by the idea that at the root of our attention issues lies a single powerful culprit: modern technology. If we truly want to focus, it seems, we need to turn off all our devices, quit social media, and retreat into the woods for a digital detox.\n",
" Heres my resistance to that idea. At an elemental level, this particular era is no different than any other. there has always been a “crisis of attention.” Historically, people have turned to meditation (and other forms of contemplative(thoughtful, pensive) practice) to deal with feelings of being overwhelmed and scattered in focus, and to refocus and reflect on priorities , our inner values, intentions, purpose. This can certainly be a spiritual process, if thats how you define it. But were discovering that mindfulness impacts the attention system and how it copes with the distractions that surround us and those that are generated internally. In part, thats what meditation practitioners have always been pursuing. Think about life long ago: people in ancient India or medieval Europe didnt have smartphones and Facebook, but they were still suffering in their own minds. They still turned to any number of practices for relief. They still described the same challenge: Im not fully present for my life.\n",
" A crisis of attention can happen anytime you dont allow yourself a break. when you dont allow your mind to rest without having any task-at-hand. Remember our distinction between mind-wandering (having off-task thoughts during a task) and daydreaming (task-free spontaneous thought and opportunity for conscious reflection, creativity, and the like)? Well, one problem today is that we are always engaged in something. With these digital tools at our fingertips, we have constant access to all these forms of communication, content, and interaction, and we dont tend to gravitate toward letting our thoughts meander(bend), unconstrained. Of the two types of spontaneous thought we discussed earlier, its the beneficial type, the daydreaming, that we barely get at all. \n",
" We all do it. I catch myself all the time, going from one type of mental engagement to the next. I call it hype tasking. Like surfing hyperlinks online (clicking from link to link as they grab your attention), we go from one task to the next and the next. You are probably doing it right now. We are “all task and no downtime.” And were asking an enormous amount from our attention systems. Your attentional capacity is not less than someones from hundreds of years ago. Its only that right now, youre using your attention in a particular focused way, all the time.\n",
" Were taxing(exacting, demanding) our focused attention to the max. Hype tasking is hyper-taxing! Even something you might think of as relaxing (scrolling through Instagram, for example, or reading an article someone shared) is more engagement. Its another task. Checking your notifications may seem “fun,” but its work for your attention. Task: check to see who posted what in response to my post. Task: check how many likes I got. Task: check who shared my funny meme. Your attention was focused on task after task after task, with no attentional downtime, not a moment for the mind to roam free.(travel aimlessly)\n",
" Its not always realistic to unplug. We cant just turn off our phones and pause our email. We cannot create a distraction-free world. The issue is not the existence of this technology; rather, its how were using it: we are not allowing our minds to pay attention differently. And this is where mindfulness comes in, as a way to steady your flashlight so you dont end up swinging it around at any and all possible distractions, digital or not.\n",
"\"\"\""
]
},
{
"cell_type": "code",
"execution_count": 34,
"id": "e1920f67-f2ad-4b65-98fd-bd36aea8b866",
"metadata": {},
"outputs": [],
"source": [
"summary = summarizer(text,\n",
" min_length=50,\n",
" max_length=250)"
]
},
{
"cell_type": "code",
"execution_count": 35,
"id": "b7675973-dd49-45a7-ae20-27fd173fb209",
"metadata": {},
"outputs": [
{
"data": {
"text/plain": [
"[{'summary_text': \"We are suffering from a crisis of attention because we don't allow ourselves a mental break. In particular, we no longer daydream, which is a beneficial type of spontaneous thought. We cant just turn off our phones and pause our email. We cannot create a distraction-free world.\"}]"
]
},
"execution_count": 35,
"metadata": {},
"output_type": "execute_result"
}
],
"source": [
"summary"
]
},
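{
"cell_type": "markdown",
"id": "summary-params-note",
"metadata": {},
"source": [
"`min_length` and `max_length` above are counted in tokens, not words. As a rough sketch (not part of the original notebook), a tighter budget forces a shorter summary, and `truncation=True` is assumed to trim inputs that exceed bart-large-cnn's context window:"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "summary-params-variant",
"metadata": {},
"outputs": [],
"source": [
"# Hedged sketch: shorter token budget for a more compressed summary.\n",
"# truncation=True is assumed to be forwarded to the tokenizer so that\n",
"# over-length inputs are cut to the model's maximum (~1024 tokens).\n",
"short_summary = summarizer(text,\n",
"                           min_length=20,\n",
"                           max_length=80,\n",
"                           truncation=True)\n",
"short_summary"
]
},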
{
"cell_type": "code",
"execution_count": null,
"id": "69c7ea2b-99cb-4c92-b9da-62d3cb272541",
"metadata": {},
"outputs": [],
"source": [
"from transformers.utils import logging\n",
"logging.set_verbosity_error()\n",
"\n",
"from transformers import pipeline\n",
"from PIL import Image\n",
"from helper import show_pipe_masks_on_image\n",
"\n",
"# Automatic mask generation with SlimSAM, a pruned Segment Anything model\n",
"sam_pipe = pipeline(\"mask-generation\", model=\"Zigeng/SlimSAM-uniform-77\")\n",
"\n",
"raw_image = Image.open('C:/Users/Josep/OneDrive/Desktop/Edit Materials/heaven (7).jpg')\n",
"raw_image = raw_image.resize((720, 375))\n",
"\n",
"output = sam_pipe(raw_image)\n",
"show_pipe_masks_on_image(raw_image, output)"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "sam-point-prompt",
"metadata": {},
"outputs": [],
"source": [
"import torch\n",
"from transformers import SamModel, SamProcessor\n",
"from helper import show_mask_on_image\n",
"\n",
"model = SamModel.from_pretrained(\"./models/Zigeng/SlimSAM-uniform-77\")\n",
"processor = SamProcessor.from_pretrained(\"./models/Zigeng/SlimSAM-uniform-77\")\n",
"\n",
"# Prompt SAM with a single (x, y) point; note the resized image is 720x375,\n",
"# so the point should fall within those bounds to hit the intended object\n",
"input_points = [[[1600, 700]]]\n",
"inputs = processor(raw_image, input_points=input_points, return_tensors=\"pt\")\n",
"\n",
"with torch.no_grad():\n",
"    outputs = model(**inputs)\n",
"\n",
"# Rescale the predicted masks back to the original image size\n",
"predicted_masks = processor.image_processor.post_process_masks(\n",
"    outputs.pred_masks, inputs[\"original_sizes\"], inputs[\"reshaped_input_sizes\"]\n",
")\n",
"\n",
"predicted_mask = predicted_masks[0]\n",
"print(len(predicted_masks))  # one entry per input image\n",
"print(predicted_mask.shape)  # (batch, 3 candidate masks, H, W)\n",
"print(outputs.iou_scores)    # predicted IoU score for each candidate mask\n",
"\n",
"# SAM returns three candidate masks per point prompt; show each one\n",
"for i in range(3):\n",
"    show_mask_on_image(raw_image, predicted_mask[:, i])"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "depth-estimation",
"metadata": {},
"outputs": [],
"source": [
"import numpy as np\n",
"\n",
"depth_estimator = pipeline(task=\"depth-estimation\",\n",
"                           model=\"./models/Intel/dpt-hybrid-midas\")\n",
"\n",
"raw_image = Image.open('gradio_tamagochi_vienna.png')\n",
"raw_image = raw_image.resize((806, 621))\n",
"\n",
"output = depth_estimator(raw_image)\n",
"print(output[\"predicted_depth\"].shape)  # raw depth map, (1, H', W')\n",
"\n",
"# Resize the depth map to the image size; PIL's .size is (W, H) while\n",
"# interpolate expects (H, W), hence the [::-1]\n",
"prediction = torch.nn.functional.interpolate(\n",
"    output[\"predicted_depth\"].unsqueeze(1),\n",
"    size=raw_image.size[::-1],\n",
"    mode=\"bicubic\",\n",
"    align_corners=False,\n",
")\n",
"print(prediction.shape)\n",
"\n",
"# Normalize the depth map to 0-255 and render it as a grayscale image\n",
"output = prediction.squeeze().numpy()\n",
"formatted = (output * 255 / np.max(output)).astype(\"uint8\")\n",
"depth = Image.fromarray(formatted)\n",
"depth"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "depth-gradio-app",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"import gradio as gr\n",
"\n",
"def launch(input_image):\n",
"    out = depth_estimator(input_image)\n",
"\n",
"    # resize the prediction\n",
"    prediction = torch.nn.functional.interpolate(\n",
"        out[\"predicted_depth\"].unsqueeze(1),\n",
"        size=input_image.size[::-1],\n",
"        mode=\"bicubic\",\n",
"        align_corners=False,\n",
"    )\n",
"\n",
"    # normalize the prediction\n",
"    output = prediction.squeeze().numpy()\n",
"    formatted = (output * 255 / np.max(output)).astype(\"uint8\")\n",
"    depth = Image.fromarray(formatted)\n",
"    return depth\n",
"\n",
"iface = gr.Interface(launch,\n",
"                     inputs=gr.Image(type='pil'),\n",
"                     outputs=gr.Image(type='pil'))\n",
"# PORT1 is set in the course environment; locally you could hard-code a port\n",
"iface.launch(share=True, server_port=int(os.environ['PORT1']))"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "gradio-close",
"metadata": {},
"outputs": [],
"source": [
"# Shut down the Gradio server when finished\n",
"iface.close()"
]
},
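{
"cell_type": "markdown",
"id": "mask-overlay-note",
"metadata": {},
"source": [
"The `helper` module used in the mask-generation cells above is course-specific. As a rough, hedged sketch of what `show_pipe_masks_on_image` presumably does, each boolean mask from the pipeline output can be blended onto the image with matplotlib (the `masks` key and per-mask `(H, W)` boolean arrays are assumptions about the pipeline's output format):"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "mask-overlay-sketch",
"metadata": {},
"outputs": [],
"source": [
"# Hedged stand-in for the course's show_pipe_masks_on_image helper.\n",
"# Assumes sam_pipe(...) returns {\"masks\": [...], \"scores\": ...}, where\n",
"# each mask is a boolean (H, W) numpy array matching the image size.\n",
"import matplotlib.pyplot as plt\n",
"import numpy as np\n",
"\n",
"def overlay_masks(image, pipe_output, alpha=0.5):\n",
"    canvas = np.array(image).astype(float) / 255.0\n",
"    for mask in pipe_output[\"masks\"]:\n",
"        color = np.random.rand(3)  # random RGB color per mask\n",
"        canvas[mask] = (1 - alpha) * canvas[mask] + alpha * color\n",
"    plt.imshow(canvas)\n",
"    plt.axis(\"off\")\n",
"    plt.show()\n",
"\n",
"# Usage (with the variables from the SAM mask-generation cell):\n",
"# overlay_masks(raw_image, output)"
]
}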
],
"metadata": {
"kernelspec": {
"display_name": "Python 3 (ipykernel)",
"language": "python",
"name": "python3"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.8.19"
}
},
"nbformat": 4,
"nbformat_minor": 5
}