Scrum Master, one Product Owner, and Developers. The team should have all the skills needed to create the product; having to rely on people outside the team creates external dependencies, which result in delays. A knowledg
| Type | Pros | Cons |
|---|---|---|
| Native Web Tool | 1. Widely used and supported by a large community of developers. 2. Highly customizable and flexible, allowing for a wide range of solutions to be developed. 3. Can be integrated with a variety of databases and APIs. | 1. Steep learning curve, as JavaScript can be complex and requires a good understanding of programming concepts. 2. Can be difficult to maintain and debug, especially for large and complex applications. 3. Can be slow in older browsers and on less optimized devices. |
| Power BI | 1. User-friendly interface, making it easy for non-technical users to create and customize reports. 2. Offers a range of built-in visualization options and tools for data analysis. 3. Integrates well with other Microsoft/Azure tools, such as Excel and SharePoint. | 1. Limited customization options compared to JavaScript. 2. Can be less flexible and may not be suitable for more complex reporting requirements. |
S3 Select is focused on retrieving data from S3 using SQL:
S3 Select enables applications to retrieve only a subset of data from an object by using simple SQL expressions. By using S3 Select to retrieve only the data needed by your application, you can achieve drastic performance increases – in many cases you can get as much as a 400% improvement compared with classic S3 retrieval.
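From Python, S3 Select is exposed through boto3's `select_object_content`. The sketch below is an illustration only: the bucket, key, and query are placeholders, and it assumes a CSV object with a header row. The request is built by a separate helper so the shape can be inspected without AWS credentials.

```python
def build_select_request(bucket, key, expression):
    """Build the kwargs for s3.select_object_content().
    A minimal sketch: assumes a CSV object with a header row;
    bucket/key/expression are caller-supplied placeholders."""
    return {
        "Bucket": bucket,
        "Key": key,
        "ExpressionType": "SQL",
        "Expression": expression,
        "InputSerialization": {"CSV": {"FileHeaderInfo": "USE"}},
        "OutputSerialization": {"CSV": {}},
    }


def run_select(bucket, key, expression):
    """Stream only the matching rows back from S3.
    Requires boto3 and valid AWS credentials to actually run."""
    import boto3  # imported here so the helper above stays dependency-free

    s3 = boto3.client("s3")
    resp = s3.select_object_content(**build_select_request(bucket, key, expression))
    for event in resp["Payload"]:  # event stream: Records / Stats / End events
        if "Records" in event:
            yield event["Records"]["Payload"].decode("utf-8")
```

Because only the selected columns and rows travel over the network, the saving grows with the size of the object being scanned.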
test.csv
key,a,b,c
a,1,,-1
a,2,,
a,3,,4
test.py
from pyspark.sql import SparkSession
from pyspark.sql import functions as F

spark = SparkSession \
    .builder \
    .appName("spark-app") \
    .getOrCreate()
spark.sparkContext.setLogLevel("WARN")

df = spark.read.csv("test.csv", header=True)
res = df.groupBy(["key"]).agg(*[
    F.max("a"), F.max("b"), F.max("c"),
    F.min("a"), F.min("b"), F.min("c"),
])
print(res.toPandas())
spark-submit test.py
  key max(a) max(b) max(c) min(a) min(b) min(c)
0   a      3   None      4      1   None     -1
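The output shows that, like Spark, nulls are skipped when computing min/max, and an all-null column (`b`) yields null. The same behaviour can be checked without a Spark cluster using plain pandas; this is a local comparison sketch, not part of the Spark job above:

```python
import io

import pandas as pd

# Same data as test.csv, inlined for a self-contained example
csv_text = "key,a,b,c\na,1,,-1\na,2,,\na,3,,4\n"
df = pd.read_csv(io.StringIO(csv_text))

# Min/max per key; pandas skips NaN, and an all-NaN column gives NaN
res = df.groupby("key").agg({"a": ["max", "min"],
                             "b": ["max", "min"],
                             "c": ["max", "min"]})
print(res)
```

Column `b` comes back as NaN for both aggregates, matching Spark's `None` in the `spark-submit` output.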
$ export RELEASE=$(curl -s https://api.github.com/repos/kubeless/kubeless/releases/latest | grep tag_name | cut -d '"' -f 4)
$ kubectl create ns kubeless
$ kubectl create -f https://github.com/kubeless/kubeless/releases/download/$RELEASE/kubeless-$RELEASE.yaml
$ kubectl get pods -n kubeless
$ kubectl get deployment -n kubeless
$ kubectl get customresourcedefinition
def hello(event, context):
    print event
    return event['data']
$ kubeless function deploy hello --runtime python2.7 \
    --from-file test.py \
    --handler test.hello
$ kubectl get functions
$ kubeless function ls
$ kubeless function call hello --data 'Hello world!'
create a file
echo This is a sample text file > sample.txt
delete a file
del file_name
move a file
move stats.doc c:\statistics
combine files
copy /b file1 + file2 file3
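The `copy /b` binary concatenation above is Windows-only; the same effect can be sketched cross-platform in Python (the function name here is hypothetical, not a standard utility):

```python
import shutil


def combine_files(sources, dest):
    """Concatenate files byte-for-byte into dest,
    like `copy /b file1 + file2 file3` on Windows."""
    with open(dest, "wb") as out:
        for src in sources:
            with open(src, "rb") as f:
                shutil.copyfileobj(f, out)  # streams in chunks, no full read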
import pandas as pd
import pyodbc
import sqlalchemy
import urllib


def get_sqlalchemy_engine(driver, server, uid, pwd, database):
    conn_str = 'DRIVER={};SERVER={};UID={};PWD={};DATABASE={}'.format(
        driver, server, uid, pwd, database)
    quoted = urllib.parse.quote_plus(conn_str)
    engine = sqlalchemy.create_engine(
        'mssql+pyodbc:///?odbc_connect={}'.format(quoted))
    return engine


if __name__ == '__main__':
    # create engine
    driver = 'ODBC Driver 17 for SQL Server'
    server = 'xxx'
    uid = 'xxx'
    pwd = 'xxx'
    database = 'xxx'
    engine = get_sqlalchemy_engine(driver, server, uid, pwd, database)

    # read excel
    file_path = 'xxx'
    df = pd.read_excel(file_path)

    # load into SQL Server
    schema_name = 'xxx'
    table_name = 'xxx'
    df.to_sql(table_name, schema=schema_name, con=engine, index=False, if_exists='replace')
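The `df.to_sql` call above is not tied to SQL Server: it accepts any SQLAlchemy engine or DB-API connection. A minimal local sketch of the same load step, using SQLite purely so it can run without a database server (an assumption for testability):

```python
import sqlite3

import pandas as pd


def load_dataframe(df, table_name, conn):
    """Load a DataFrame into a table; the identical call works against
    the mssql+pyodbc engine above (with schema=... where supported)."""
    df.to_sql(table_name, con=conn, index=False, if_exists="replace")
```

`if_exists="replace"` drops and recreates the table on every run; use `"append"` for incremental loads.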
Create a network named "test"
docker network create test
Create two containers using the network
docker run --name c1 --network "test" --rm --entrypoint tail mongo -f
docker run --name c2 --network "test" --rm --entrypoint tail mongo -f
Enter one container and ping the other; it works:
docker exec -it c1 bash
apt-get install iputils-ping # install command ping
root@79568c5ce391:/usr/src/app# ping c2
PING c2 (172.18.0.3) 56(84) bytes of data.
64 bytes from c2.test (172.18.0.3): icmp_seq=1 ttl=64 time=0.137 ms
64 bytes from c2.test (172.18.0.3): icmp_seq=2 ttl=64 time=0.221 ms
64 bytes from c2.test (172.18.0.3): icmp_seq=3 ttl=64 time=0.232 ms
...
Using the default network (the "bridge" network) does not work, because Docker only provides automatic DNS resolution between container names on user-defined networks:
docker run --name c1 --rm --entrypoint tail web_scraper:v1 -f
docker run --name c2 --rm --entrypoint tail web_scraper:v1 -f
docker run --name c1 --network "bridge" --rm --entrypoint tail web_scraper:v1 -f
$ docker run -p 127.0.0.1:80:8080/tcp ubuntu bash
This binds port 8080 of the container to TCP port 80 on 127.0.0.1 of the host machine. You can also specify udp and sctp ports.
$ docker run --expose 80 ubuntu bash
This exposes port 80 of the container without publishing the port to the host system’s interfaces.
docker images
docker build -t image_name .
docker rmi $(docker images | grep "^<none>" | awk '{print $3}') # remove all untagged images
docker save image_name > image_name.tar # save image as a tar file
docker load < busybox.tar.gz # load image
docker run -p 27017:27017 -v mongodbdata:/data/db mongo
docker ps -a
docker exec -it ubuntu_bash bash
docker rm container_name
docker rm $(docker ps -a -q) # remove all stopped containers
docker volume create mongodbdata
docker volume ls
docker volume inspect mongodbdata
docker network ls
docker network create network_name
docker network inspect network_name
docker network rm network_name
docker login azure --tenant-id 8432da0f-f8af-4b02-b318-4c777cfab498
docker context create aci hpjacicontext
docker context use hpjacicontext
docker compose up
docker-compose up --detach --force-recreate
Sample Code: https://github.com/guangningyu/react-skeleton